We introduce time-ordered multibody interactions to describe complex systems that exhibit temporal as well as multibody dependencies. First, we show how the dynamics of multivariate Markov chains can be decomposed into an ensemble of time-ordered multibody interactions. We then present an algorithm for extracting these interactions from data, together with measures that characterize the complexity of the interaction ensemble. Finally, we experimentally validate the robustness of our algorithm against statistical errors and its efficiency at retrieving parsimonious ensembles of interactions.
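A minimal sketch of the underlying idea (not the paper's algorithm): in a multivariate Markov chain, a genuinely multibody interaction is one where a target variable's next state depends on a *set* of source variables jointly, beyond what any single source explains. The quantities, variable names, and XOR toy system below are our own illustration.

```python
# Sketch: measure how much a candidate source set at time t reduces
# uncertainty about a target variable at time t+1, via empirical entropies.
import numpy as np
from collections import Counter

def entropy(rows):
    """Empirical Shannon entropy (bits) of a sequence of discrete tuples."""
    counts = np.array(list(Counter(map(tuple, rows)).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def interaction_strength(states, target, sources):
    """Mutual information I(X_target(t+1); X_sources(t)) =
    H(target at t+1) - H(target at t+1 | sources at t)."""
    nxt = states[1:, [target]]
    src = states[:-1, sources]
    h_cond = entropy(np.hstack([nxt, src])) - entropy(src)
    return entropy(nxt) - h_cond

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(5000, 3))
x[1:, 0] = x[:-1, 1] ^ x[:-1, 2]           # X0(t+1) = XOR: a two-body interaction
print(interaction_strength(x, 0, [1]))      # ~0 bits: each source alone looks useless
print(interaction_strength(x, 0, [1, 2]))   # ~1 bit: dependence only appears jointly
```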
Over the years, the separate fields of motion planning, mapping, and human trajectory prediction have advanced considerably. However, the literature remains sparse on practical frameworks that enable mobile manipulators to perform whole-body motions while accounting for the predicted motion of moving obstacles. Previous optimization-based motion planning approaches that use distance fields suffer from the high computational cost required to update the environment representation. We demonstrate that GPU-accelerated predicted composite distance fields significantly reduce computation time compared to computing distance fields from scratch. We integrate this technique with a complete motion planning and perception framework that accounts for the predicted motion of humans in dynamic environments, enabling reactive and pre-emptive motion planning that incorporates predicted motions. To achieve this, we propose and implement a novel human trajectory prediction method that combines intention recognition with trajectory-optimization-based motion planning. We validate our resultant framework on a real-world Toyota Human Support Robot (HSR) using live RGB-D sensor data from the onboard camera. In addition to providing analysis on publicly available datasets, we release the Oxford Indoor Human Motion (Oxford-IHM) dataset and demonstrate state-of-the-art performance in human trajectory prediction. The Oxford-IHM dataset is a human trajectory prediction dataset in which people walk between regions of interest in an indoor environment, observed by both static and robot-mounted RGB-D cameras while being tracked with a motion capture system.
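A minimal 2-D sketch of the composite-distance-field idea as we read it from the abstract (grid size, object shape, and composition rule are our assumptions, not the authors' implementation): precompute one distance field per object once, then compose the scene field each frame as a cell-wise minimum over the cached fields shifted to each object's predicted position, instead of rebuilding the whole field from scratch.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

GRID = 128

def object_field(radius):
    """Precomputed distance field of a single disc-shaped object at the center."""
    y, x = np.mgrid[:GRID, :GRID]
    occupied = (x - GRID // 2) ** 2 + (y - GRID // 2) ** 2 <= radius ** 2
    return distance_transform_edt(~occupied)   # distance to nearest occupied cell

def composite_field(fields, predicted_offsets):
    """Cheap per-frame composition: shift each cached field, take the minimum.
    (np.roll wraps at borders; a real implementation would pad and crop.)"""
    scene = np.full((GRID, GRID), np.inf)
    for field, (dy, dx) in zip(fields, predicted_offsets):
        scene = np.minimum(scene, np.roll(field, shift=(dy, dx), axis=(0, 1)))
    return scene

human = object_field(radius=6)                       # cached once, offline
frame_t = composite_field([human, human], [(0, 0), (10, 25)])  # per-frame update
```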
Articulated robots such as manipulators increasingly must operate in uncertain and dynamic environments where interaction (with human coworkers, for example) is necessary. In these situations, the ability to adapt quickly to unexpected changes in operational space constraints is critical. At certain points in a manipulator's configuration space, termed singularities, the robot loses one or more degrees of freedom (DoF) and is unable to move in specific operational space directions. The inability to move in arbitrary operational space directions impairs adaptability and safety. We introduce a geometry-aware singularity index, defined using a Riemannian metric on the manifold of symmetric positive-definite matrices, that provides a measure of proximity to singular configurations. We demonstrate that our index avoids some of the failure modes and difficulties inherent to other common indices. Further, we show that this index can be differentiated easily, making it compatible with local optimization approaches used for operational space control. Our experimental results show that, for reaching and path-following tasks, optimization based on our index outperforms a common manipulability-maximization technique and ensures singularity-robust motions.
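A hedged sketch of one geometry-aware proximity measure in the spirit of the abstract (the paper's exact index may differ): the affine-invariant Riemannian distance on the SPD manifold between the manipulability matrix J J^T and the identity. This distance diverges as any eigenvalue of J J^T approaches zero, i.e. as the arm approaches a singularity.

```python
import numpy as np

def singularity_index(J):
    """Riemannian distance d(JJ^T, I) = ||log eigvals(JJ^T)||_2 on SPD matrices."""
    eigvals = np.linalg.eigvalsh(J @ J.T)
    return np.sqrt(np.sum(np.log(eigvals) ** 2))

def jacobian(q1, q2):
    """Planar 2-link arm with unit links; singular when q2 = 0 or pi."""
    return np.array([
        [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
        [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
    ])

print(singularity_index(jacobian(0.3, 1.2)))    # moderate, away from singularity
print(singularity_index(jacobian(0.3, 0.01)))   # large, near the q2 = 0 singularity
```

Unlike the manipulability measure sqrt(det(J J^T)), which can stay flat while one direction collapses, a log-eigenvalue distance of this form penalizes each near-zero direction individually and remains smoothly differentiable away from the singular set.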
In this paper we explore the task of modeling (semi) structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. In such sequences, we assume that each structured object is represented by a set of key-value pairs which encode the attributes of the structured object. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key, over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as over other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
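A minimal PyTorch sketch of the two components as we read them from the abstract (layer sizes, pooling choices, and vocabulary handling are our assumptions): TVM encodes, per key, the sequence of that key's values over time; KA then self-attends over the resulting key-conditioned representations.

```python
import torch
import torch.nn as nn

class TVM(nn.Module):
    """Temporal Value Modeling: one encoder over a single key's value sequence."""
    def __init__(self, vocab_size, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, value_ids):               # (batch, time)
        h = self.encoder(self.embed(value_ids))
        return h.mean(dim=1)                    # (batch, d_model) per-key summary

class KA(nn.Module):
    """Key Aggregation: self-attention across the per-key representations."""
    def __init__(self, d_model=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, key_reprs):               # (batch, num_keys, d_model)
        agg, _ = self.attn(key_reprs, key_reprs, key_reprs)
        return agg.mean(dim=1)                  # sequence-level representation

tvm, ka = TVM(vocab_size=1000), KA()
values_per_key = [torch.randint(0, 1000, (8, 20)) for _ in range(5)]  # 5 keys
key_reprs = torch.stack([tvm(v) for v in values_per_key], dim=1)      # (8, 5, 64)
sequence_repr = ka(key_reprs)                                         # (8, 64)
```

Note that the key axis (here 5) grows with the universe of keys while each TVM pass only sees one value sequence at a time, which is what lets the factored representation scale to longer sequences than a flattened encoding.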
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods that better aid clinicians in their decision-making and improve patient outcomes, reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
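A minimal sketch of the degradation model described above (the window width is a hypothetical choice, not the study's parameter): multiplying the spectral-domain A-scan data by a Gaussian window narrows the effective bandwidth, which lowers the axial resolution of the reconstructed A-scan.

```python
import numpy as np

def gaussian_window(n, fwhm_fraction):
    """Gaussian over n spectral samples with FWHM = fwhm_fraction * n."""
    k = np.arange(n) - n / 2
    sigma = fwhm_fraction * n / 2.355          # FWHM = 2*sqrt(2 ln 2) * sigma
    return np.exp(-0.5 * (k / sigma) ** 2)

def simulate_reduced_resolution(spectrum, fwhm_fraction=0.5):
    """Window the interferogram spectrum, then reconstruct the A-scan by FFT."""
    windowed = spectrum * gaussian_window(len(spectrum), fwhm_fraction)
    return np.abs(np.fft.ifft(windowed))       # lower-axial-resolution A-scan

spectrum = np.random.randn(2048) + 1j * np.random.randn(2048)  # stand-in data
low_res_ascan = simulate_reduced_resolution(spectrum)
```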
Real-life tools for decision-making in many critical domains are based on ranking results. With the increasing awareness of algorithmic fairness, recent works have presented measures for fairness in ranking. Many of those definitions consider the representation of different ``protected groups'' in the top-$k$ ranked items, for any reasonable $k$. Given the protected groups, confirming algorithmic fairness is a simple task. However, the groups' definitions may be unknown in advance. In this paper, we study the problem of detecting groups with biased representation in the top-$k$ ranked items, eliminating the need to pre-define protected groups. The number of possible groups can be exponential, making the problem hard. We propose efficient search algorithms for two different fairness measures: global representation bounds, and proportional representation. Then we propose a method to explain the bias in the representations of groups utilizing the notion of Shapley values. We conclude with an experimental study, showing the scalability of our approach and demonstrating the usefulness of the proposed algorithms.
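A brute-force sketch of the detection problem (the paper proposes efficient search algorithms; this naive enumeration only illustrates the proportional-representation measure and why the search space explodes): flag attribute-value patterns whose share of the top-$k$ deviates from their share of the full ranking by more than a tolerance `eps`.

```python
from itertools import combinations
import pandas as pd

def biased_groups(df, k, eps=0.1, max_attrs=2):
    """df rows are ranked best-first; columns are categorical attributes."""
    top, found = df.head(k), []
    for r in range(1, max_attrs + 1):
        for attrs in combinations(df.columns, r):            # exponential in general
            for vals in df[list(attrs)].drop_duplicates().itertuples(index=False):
                mask = lambda d: (d[list(attrs)] == tuple(vals)).all(axis=1)
                overall = mask(df).mean()          # group share of all items
                in_top = mask(top).mean()          # group share of the top-k
                if abs(in_top - overall) > eps:    # proportional-representation gap
                    found.append((dict(zip(attrs, vals)), overall, in_top))
    return found

ranked = pd.DataFrame({"gender": ["m", "m", "m", "f", "m", "f", "f", "f"],
                       "dept":   ["cs", "cs", "ee", "ee", "cs", "cs", "ee", "ee"]})
print(biased_groups(ranked, k=4, eps=0.2))  # flags {'gender': 'm'} and {'gender': 'f'}
```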
Previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes of different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in type, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
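A schematic sketch of the two-stage baseline described above (the detector/classifier interfaces are placeholders, not the paper's code): a generic detector proposes vehicle boxes, and a hierarchical classifier assigns a three-level fine-grained label to each crop.

```python
import torch

def detect_and_classify(image, detector, hrn_classifier):
    """image: CHW tensor. detector yields (box, score); hrn_classifier returns
    logits at the three hierarchy levels (assumed interfaces)."""
    results = []
    for (x1, y1, x2, y2), score in detector(image):
        crop = image[:, int(y1):int(y2), int(x1):int(x2)]
        l1, l2, l3 = hrn_classifier(crop.unsqueeze(0))
        results.append({
            "box": (x1, y1, x2, y2),
            "level1": l1.argmax(-1),   # e.g. car / two-wheeler / autorickshaw / truck
            "level2": l2.argmax(-1),   # e.g. make
            "level3": l3.argmax(-1),   # e.g. model
        })
    return results
```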
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we poorly understand its global structure and evolution, as well as the mechanisms of its main activity processes: magnetic storms and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions, will need to be developed to meet this Sparse Data challenge.
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and the choice is not generally made carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performance of 20 classification algorithms, among monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance, and provide insights into its applicability in different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository.\footnote{https://github.com/amorimlb/scaling\_matters}
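A condensed sketch of this experimental protocol (the scaler list below is a set of common choices; which 5 techniques the paper used is our assumption): cross-validate one classifier under several scaling techniques, keeping the scaler inside a Pipeline so that scaling is fit on training folds only.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import (StandardScaler, MinMaxScaler, MaxAbsScaler,
                                   RobustScaler, QuantileTransformer)

X, y = load_breast_cancer(return_X_y=True)
scalers = {"none": None, "standard": StandardScaler(), "minmax": MinMaxScaler(),
           "maxabs": MaxAbsScaler(), "robust": RobustScaler(),
           "quantile": QuantileTransformer(n_quantiles=100,
                                           output_distribution="normal",
                                           random_state=0)}
for name, scaler in scalers.items():
    steps = ([scaler] if scaler is not None else []) + [KNeighborsClassifier()]
    model = make_pipeline(*steps)   # scaler is re-fit inside each CV training fold
    scores = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name:>8}: {scores.mean():.3f}")
```

Scale-sensitive models such as k-NN make the effect easy to see; the "none" row corresponds to the paper's observation that an inadequate technique can do worse than not scaling at all.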