Predicting pedestrian motion is essential for developing socially-aware robots that interact in crowded environments. Although the natural visual perspective for social interaction settings is the first-person view, most existing work on trajectory prediction operates purely in top-down trajectory space. To support research on first-person view trajectory prediction, we present T2FPV, a method for constructing high-fidelity first-person view datasets from real-world, top-down trajectory data; we showcase the approach on the ETH/UCY pedestrian dataset, generating egocentric visual data for all interacting pedestrians. We report that the bird's-eye view assumption made in the original ETH/UCY dataset, namely that an agent can observe everyone in the scene with perfect information, does not hold in first-person views: only a fraction of agents are fully visible within each 20-timestep scene commonly used in existing work. We evaluate existing trajectory prediction methods under varying levels of realistic perception, where displacement errors increase by 356% compared with the top-down, perfect-information setting. To promote research on first-person view trajectory prediction, we release the T2FPV-ETH dataset and software tools.
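As a rough illustration of why the perfect-information assumption fails in first person, the sketch below checks which other agents fall inside an egocentric viewing cone given only top-down positions and a heading; the field-of-view and range values are arbitrary placeholders and occlusion is ignored, so this is not the T2FPV rendering pipeline.

```python
import numpy as np

def visible_agents(ego_pos, ego_heading, others, fov_deg=90.0, max_range=20.0):
    """Illustrative first-person visibility test from top-down coordinates.

    ego_pos: (2,) position, ego_heading: heading in radians, others: (n, 2)
    positions of the other agents. FOV and range are placeholder values.
    """
    rel = others - ego_pos
    dist = np.linalg.norm(rel, axis=1)
    angle = np.arctan2(rel[:, 1], rel[:, 0]) - ego_heading
    angle = np.arctan2(np.sin(angle), np.cos(angle))  # wrap to [-pi, pi]
    in_fov = np.abs(np.degrees(angle)) <= fov_deg / 2
    return in_fov & (dist <= max_range)
```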
Transfer learning is an increasingly common approach for developing performant RL agents. However, it remains unclear how to define the relationship between source and target tasks, and how that relationship contributes to successful transfer. We present an algorithm called Structural Similarity for Two MDPs, or SS2, which computes a state-similarity measure over the states of two finite MDPs based on previously developed bisimulation metrics, and show that this measure satisfies the properties of a distance metric. Then, through empirical results on GridWorld navigation tasks, we provide evidence that the distance measure can be used to improve the transfer performance of Q-learning agents over previous implementations.
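To make the flavor of such a metric concrete, here is a minimal sketch of a bisimulation-style fixed-point computation over the state pairs of two finite MDPs. It assumes deterministic transitions and a shared action set, so the Wasserstein term over transition distributions collapses to the distance between successor states; it illustrates the general idea only and is not the SS2 algorithm itself.

```python
import numpy as np

def state_similarity(R1, T1, R2, T2, gamma=0.9, tol=1e-6, max_iter=1000):
    """Bisimulation-style distance between states of two deterministic MDPs.

    R1: (n1, A) rewards, T1: (n1, A) integer next-state indices for MDP 1;
    R2, T2: the same for MDP 2, over the same A actions.
    Returns d of shape (n1, n2) with pairwise state distances.
    """
    n1, A = R1.shape
    n2, _ = R2.shape
    d = np.zeros((n1, n2))
    for _ in range(max_iter):
        reward_gap = np.abs(R1[:, None, :] - R2[None, :, :])  # (n1, n2, A)
        next_gap = d[T1[:, None, :], T2[None, :, :]]          # d(s', t') per action
        d_new = np.max(reward_gap + gamma * next_gap, axis=2)
        if np.max(np.abs(d_new - d)) < tol:
            return d_new
        d = d_new
    return d
```

A transfer heuristic along these lines could, for example, initialize the Q-values of each target state from the source state with the smallest distance to it.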
Most approaches to deep reinforcement learning (DRL) attempt to solve a single task at a time. As a result, most existing research benchmarks consist of separate games or suites that share common interfaces but otherwise have little overlap in their perceptual features, objectives, or reward structures. Facilitating knowledge transfer between trained agents (e.g., via multi-task and meta-learning) requires more environment suites that provide configurable tasks with enough commonality to be studied collectively. In this paper we present Meta Arcade, a tool that makes it easy to define and configure custom 2D arcade games sharing common visuals, state spaces, action spaces, game components, and scoring mechanisms. Meta Arcade differs from existing environments in that both task commonality and configurability are prioritized: entire sets of games can be constructed from common elements, and these elements are adjustable through exposed parameters. We include a suite of 24 predefined games that collectively illustrate the possibilities of this framework, and discuss how these games can be configured for research applications. We provide several experiments illustrating how Meta Arcade can be used, including single-task benchmarks of the predefined games, example curriculum-based approaches that change game parameters on a set schedule, and an exploration of transfer learning between games.
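The abstract does not expose the Meta Arcade API, so the following is a purely hypothetical sketch of what exposed game parameters and a scheduled curriculum over them might look like; every name below is invented for illustration and is not part of the actual package.

```python
from dataclasses import dataclass, replace

@dataclass
class GameConfig:
    game: str = "breakout"       # hypothetical predefined game template
    paddle_width: float = 0.2    # hypothetical exposed gameplay parameter
    ball_speed: float = 1.0      # hypothetical exposed gameplay parameter
    reward_per_hit: float = 1.0  # hypothetical scoring parameter

def curriculum(base: GameConfig, n_stages: int = 5):
    """Yield progressively harder variants of one game on a set schedule."""
    for stage in range(n_stages):
        yield replace(base,
                      paddle_width=base.paddle_width * 0.9 ** stage,
                      ball_speed=base.ball_speed * 1.1 ** stage)

for cfg in curriculum(GameConfig()):
    print(cfg)  # one environment configuration per curriculum stage
```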
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
We address the problem of extracting key steps from unlabeled procedural videos, motivated by the potential of Augmented Reality (AR) headsets to revolutionize job training and performance. We decompose the problem into two steps: representation learning and key steps extraction. We employ self-supervised representation learning via a training strategy that adapts off-the-shelf video features using a temporal module. Training implements self-supervised learning losses involving multiple cues such as appearance, motion and pose trajectories extracted from videos to learn generalizable representations. Our method extracts key steps via a tunable algorithm that clusters the representations extracted from procedural videos. We quantitatively evaluate our approach with key step localization and also demonstrate the effectiveness of the extracted representations on related downstream tasks like phase classification. Qualitative results demonstrate that the extracted key steps are meaningful to succinctly represent the procedural tasks.
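As a generic illustration of the clustering stage (not the paper's specific tunable algorithm), one could cluster per-frame features and keep a temporally ordered representative frame per cluster, for example:

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_key_steps(features, timestamps, n_steps=8):
    """Illustrative key-step extraction by clustering per-frame features.

    features: (n_frames, dim) array from any self-supervised video encoder;
    timestamps: (n_frames,) frame times; n_steps is the tunable cluster count.
    """
    labels = KMeans(n_clusters=n_steps, n_init=10).fit_predict(features)
    key_frames = []
    for c in range(n_steps):
        idx = np.where(labels == c)[0]
        centroid = features[idx].mean(axis=0)
        # Representative frame: the cluster member closest to its centroid.
        rep = idx[np.argmin(np.linalg.norm(features[idx] - centroid, axis=1))]
        key_frames.append(rep)
    # Order representatives by time so the key steps follow the procedure.
    return sorted(key_frames, key=lambda i: timestamps[i])
```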
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
In training neural networks, batch normalization has many benefits, not all of them entirely understood. But it also has some drawbacks. Foremost is arguably memory consumption, as computing the batch statistics requires all instances within the batch to be processed simultaneously, whereas without batch normalization it would be possible to process them one by one while accumulating the weight gradients. Another drawback is that the distribution parameters (mean and standard deviation) are unlike all other model parameters in that they are not trained using gradient descent but require special treatment, complicating implementation. In this paper, I show a simple and straightforward way to address these issues. The idea, in short, is to add terms to the loss that, for each activation, cause the minimization of the negative log likelihood of a Gaussian distribution that is used to normalize the activation. Among other benefits, this will hopefully contribute to the democratization of AI research by means of lowering the hardware requirements for training larger models.
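A minimal sketch of this idea, assuming a per-feature Gaussian with learnable mean and standard deviation whose negative log likelihood is added to the training loss (the module and variable names are illustrative, not the paper's implementation):

```python
import torch
import torch.nn as nn

class NLLNormalization(nn.Module):
    """Normalize activations with a learned Gaussian and expose its NLL."""

    def __init__(self, num_features):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(num_features))
        self.log_sigma = nn.Parameter(torch.zeros(num_features))
        self.nll = torch.tensor(0.0)  # refreshed on every forward pass

    def forward(self, x):  # x: (batch, num_features)
        z = (x - self.mu) / self.log_sigma.exp()
        # Negative log likelihood of x under N(mu, sigma^2), up to a constant;
        # no batch statistics are needed, so samples can be processed one by one.
        self.nll = (0.5 * z.pow(2) + self.log_sigma).mean()
        return z

# Training sketch: add the accumulated NLL terms to the task loss, e.g.
#   loss = task_loss + nll_weight * sum(m.nll for m in model.modules()
#                                       if isinstance(m, NLLNormalization))
```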
In this paper, we introduce neural texture learning for 6D object pose estimation from synthetic data and a few unlabelled real images. Our major contribution is a novel learning scheme which removes the drawbacks of previous works, namely the strong dependency on co-modalities or additional refinement. These have been previously necessary to provide training signals for convergence. We formulate such a scheme as two sub-optimisation problems on texture learning and pose learning. We separately learn to predict realistic texture of objects from real image collections and learn pose estimation from pixel-perfect synthetic data. Combining these two capabilities then allows us to synthesise photorealistic novel views to supervise the pose estimator with accurate geometry. To alleviate pose noise and segmentation imperfection present during the texture learning phase, we propose a surfel-based adversarial training loss together with texture regularisation from synthetic data. We demonstrate that the proposed approach significantly outperforms the recent state-of-the-art methods without ground-truth pose annotations and demonstrates substantial generalisation improvements towards unseen scenes. Remarkably, our scheme improves the adopted pose estimators substantially even when initialised with much inferior performance.
Prevailing methods for assessing and comparing generative AIs incentivize responses that serve a hypothetical representative individual. Evaluating models in these terms presumes homogeneous preferences across the population and engenders selection of agglomerative AIs, which fail to represent the diverse range of interests across individuals. We propose an alternative evaluation method that instead prioritizes inclusive AIs, which provably retain the requisite knowledge not only for subsequent response customization to particular segments of the population but also for utility-maximizing decisions.
We designed and constructed an A-sized base autonomous underwater vehicle (AUV), augmented with a stack of modular and extendable hardware and software, including autonomy, navigation, control and high fidelity simulation capabilities (A-size stands for the standard sonobuoy form factor, with a maximum diameter of 124 mm). Subsequently, we extended this base vehicle with a novel tuna-inspired morphing fin payload module (referred to as the Morpheus AUV), to achieve good directional stability and exceptional maneuverability, properties that are highly desirable for rigid hull AUVs but are presently difficult to achieve because they impose contradictory requirements. The morphing fin payload allows the base AUV to dynamically change its stability-maneuverability qualities by using morphing fins, which can be deployed, deflected and retracted as needed. The base vehicle and Morpheus AUV were both extensively field tested in the Charles River, Massachusetts, USA, conducting hundreds of hours of in-water operations over a period of two years. The maneuvering capability of the Morpheus AUV was evaluated with and without the use of morphing fins to quantify the performance improvement. The Morpheus AUV was able to showcase an exceptional turning rate of around 25-35 deg/s. A maximum turn rate improvement of around 35%-50% was gained through the use of morphing fins.