In this paper, we propose a novel object-level mapping system that simultaneously segments, tracks and reconstructs objects in dynamic scenes. It can further predict and complete an object's full geometry by conditioning the reconstruction on the depth input and a category-level shape prior, with the motivation that completed object geometry leads to better object reconstruction and tracking accuracy. For each incoming RGB-D frame, we perform instance segmentation to detect objects and build data associations between the detections and the existing object maps. A new object map is created for each unmatched detection. For each matched object, we jointly optimize its pose and latent geometry representation using geometric residuals and differentiable rendering residuals, together with its shape prior and the completed geometry. Compared with methods that use traditional volumetric mapping or learned shape priors, our approach shows better tracking and reconstruction performance. We evaluate its effectiveness with quantitative and qualitative tests on both synthetic and real-world sequences.
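To make the optimisation step concrete, here is a minimal sketch, assuming a hypothetical latent-code shape decoder and only the geometric (signed-distance) residual plus a shape-prior regulariser; the differentiable-rendering residual and a proper SE(3) pose parameterisation are omitted for brevity, and nothing here is the paper's implementation.

```python
# Minimal sketch (not the paper's code): jointly refine an object's pose and
# latent shape code by descending a geometric residual plus a prior term.
# `shape_decoder`, all sizes and weights are illustrative assumptions.
import torch

shape_decoder = torch.nn.Sequential(          # stand-in for a learned shape-prior decoder
    torch.nn.Linear(64 + 3, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1))

def sdf(points, code):
    """Predicted signed distance of `points` under latent shape `code`."""
    tiled = code.expand(points.shape[0], -1)
    return shape_decoder(torch.cat([points, tiled], dim=-1)).squeeze(-1)

def geometric_residual(depth_points, pose, code):
    # Transform measured surface points into the object frame; on the surface
    # the predicted signed distance should be zero.
    R, t = pose[:3, :3], pose[:3, 3]
    obj_points = (depth_points - t) @ R       # inverse rigid transform
    return sdf(obj_points, code)

# Dummy data standing in for one matched object in one RGB-D frame.
depth_points = torch.randn(256, 3)            # back-projected depth measurements
pose = torch.eye(4, requires_grad=True)       # object pose (optimised directly, no SE(3) retraction)
code = torch.zeros(1, 64, requires_grad=True) # latent shape code

opt = torch.optim.Adam([pose, code], lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    loss = geometric_residual(depth_points, pose, code).pow(2).mean() \
         + 1e-3 * code.pow(2).sum()           # shape-prior regulariser (illustrative weight)
    loss.backward()
    opt.step()
```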
When many robots must work together in tight spaces, precise coordinated planning over a forward time window enables safe and efficient motion, but this usually requires centralized control of all devices, which is hard to scale. We demonstrate GBP planning, a new purely distributed technique for multi-robot planning problems based on Gaussian Belief Propagation, built on a generic factor graph that defines dynamics and collision constraints. In simulation, we show that our method enables extremely high-performance collaborative planning, with robots able to cross paths with each other in busy, complex scenarios. Even under communication failures, they maintain shorter, quicker and smoother trajectories than alternative distributed planning techniques.
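As a concrete illustration of why Gaussian Belief Propagation distributes so naturally, the sketch below fuses Gaussian information at a single robot's future-state variable: in information form, combining local factors (dynamics, goal) with a neighbour's collision-avoidance message is just addition. All factor values and the 2D point-robot setup are illustrative assumptions, not the paper's factor definitions.

```python
# Illustrative sketch (not the GBP planner implementation): a robot fuses
# Gaussian information from local factors and a neighbour's message to update
# its belief over a future 2D position. In information form, fusion is simply
# addition, which is what makes the scheme distributable.
import numpy as np

def info_form(mean, cov):
    """Convert a Gaussian (mean, covariance) to information form (eta, Lambda)."""
    Lam = np.linalg.inv(cov)
    return Lam @ mean, Lam

# Local factors on the robot's position at some future timestep (assumed values).
eta_dyn, Lam_dyn = info_form(np.array([1.0, 0.0]), 0.2 * np.eye(2))    # dynamics prediction
eta_goal, Lam_goal = info_form(np.array([4.0, 3.0]), 2.0 * np.eye(2))  # soft goal attraction

# Message received from a neighbouring robot's collision factor (assumed values):
# it nudges this robot away from the neighbour's planned position.
eta_nbr, Lam_nbr = info_form(np.array([0.5, -1.0]), 1.0 * np.eye(2))

# Belief update: sum the information vectors and matrices of all incident factors.
Lam_belief = Lam_dyn + Lam_goal + Lam_nbr
eta_belief = eta_dyn + eta_goal + eta_nbr
mean_belief = np.linalg.solve(Lam_belief, eta_belief)
print("fused position estimate:", mean_belief)
```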
A common approach for solving tasks on networks, such as node classification or link prediction, is to learn a Euclidean embedding of the network's nodes, to which standard machine learning methods can then be applied. For unsupervised random-walk methods such as DeepWalk and node2vec, adding an $\ell_2$ penalty on the embedding vectors to the loss leads to improved downstream task performance. In this paper, we study the effects of this regularization and prove that, under exchangeability assumptions on the graph, it asymptotically leads to learning a graphon with a nuclear-norm-type penalty. In particular, the exact form of the penalty depends on the subsampling method used within stochastic gradient descent to learn the embeddings. We also show empirically that concatenating node covariates to $\ell_2$-regularized node2vec embeddings gives performance comparable, if not superior, to methods that combine node covariates and the network structure in a non-linear manner.
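The penalized objective in question can be sketched as a node2vec-style skip-gram loss with negative sampling plus an $\ell_2$ penalty on the embedding matrices. The placeholder pairs and the penalty weight below are illustrative only; as noted above, the particular subsampling scheme feeding the positive and negative pairs is what determines the asymptotic form of the penalty.

```python
# Schematic sketch (illustrative, not the paper's code): node2vec-style
# skip-gram loss with negative sampling, plus an l2 penalty on the embeddings.
import torch

n_nodes, dim, lam = 100, 16, 1e-2                  # lam is an illustrative penalty weight
U = torch.randn(n_nodes, dim, requires_grad=True)  # "centre" embeddings
V = torch.randn(n_nodes, dim, requires_grad=True)  # "context" embeddings

def loss(pos_pairs, neg_pairs):
    """pos_pairs: (i, j) co-occurring on random walks; neg_pairs: negative samples."""
    i, j = pos_pairs[:, 0], pos_pairs[:, 1]
    k, l = neg_pairs[:, 0], neg_pairs[:, 1]
    pos = torch.nn.functional.logsigmoid((U[i] * V[j]).sum(-1))
    neg = torch.nn.functional.logsigmoid(-(U[k] * V[l]).sum(-1))
    penalty = lam * (U.pow(2).sum() + V.pow(2).sum())
    return -(pos.mean() + neg.mean()) + penalty

# One SGD step on randomly generated placeholder pairs.
opt = torch.optim.SGD([U, V], lr=0.1)
pos = torch.randint(0, n_nodes, (256, 2))
neg = torch.randint(0, n_nodes, (256, 2))
opt.zero_grad()
loss(pos, neg).backward()
opt.step()
```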
A joint representation of geometry, colour and semantics using a 3D neural field enables accurate dense labelling of a scene from ultra-sparse interactions with a handheld RGB-D sensor, in real time. Our iLabel system requires no training data, yet can label scenes more accurately than standard methods trained on large, annotated image datasets. Furthermore, it works in an 'open set' manner, with semantic classes defined on the fly by the user. iLabel's underlying model is a multilayer perceptron (MLP), trained from scratch in real time to learn a joint neural scene representation. The scene model is updated and visualized in real time, allowing the user to focus interactions to achieve efficient labelling. A room or similar scene can be accurately labelled into 10+ semantic categories with only a few tens of clicks. Quantitative labelling accuracy scales strongly with the number of clicks and rapidly surpasses standard pre-trained semantic segmentation methods. We also demonstrate a hierarchical labelling variant.
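A minimal sketch of the core mechanism, assuming a standalone MLP that maps 3D points to semantic logits and is supervised only at user-clicked points; in the actual iLabel system the semantic head shares a jointly trained neural field with geometry and colour, which is omitted here.

```python
# Minimal sketch (not the iLabel system): an MLP trained from scratch to map
# 3D coordinates to semantic class logits, supervised only at a handful of
# user-clicked points. Class count, sizes and data below are illustrative.
import torch

n_classes = 10
mlp = torch.nn.Sequential(
    torch.nn.Linear(3, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, n_classes))

# Sparse supervision: a few dozen clicked 3D points and their user-assigned labels.
clicked_xyz = torch.randn(40, 3)
clicked_label = torch.randint(0, n_classes, (40,))

opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(200):                      # runs continually / in real time in the actual system
    opt.zero_grad()
    logits = mlp(clicked_xyz)
    torch.nn.functional.cross_entropy(logits, clicked_label).backward()
    opt.step()

# Dense labelling: query the field at arbitrary scene points.
dense_pred = mlp(torch.randn(10000, 3)).argmax(dim=-1)
```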
Network data are ubiquitous in modern machine learning, with tasks of interest including node classification, node clustering and link prediction. A frequent approach begins by learning a Euclidean embedding of the network, to which algorithms developed for vector-valued data are applied. For large networks, embeddings are learned using stochastic gradient methods where the sub-sampling scheme can be freely chosen. Despite the strong empirical performance of such methods, they are not well understood theoretically. Our work encapsulates representation methods using a subsampling approach, such as node2vec, into a single unifying framework. We prove, under the assumption that the graph is exchangeable, that the distribution of the learned embedding vectors asymptotically decouples. Moreover, we characterize the asymptotic distribution and provide rates of convergence, in terms of the latent parameters, which include the choice of loss function and the embedding dimension. This provides a theoretical foundation to understand what the embedding vectors represent and how well these methods perform on downstream tasks. Notably, we observe that typically used loss functions may lead to shortcomings, such as a lack of Fisher consistency.
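As an illustration only (our notation, not necessarily the paper's), a subsampling-based embedding method of this kind minimises an objective of the form

\[
\mathcal{L}_n(\omega_1,\dots,\omega_n) \;=\; \mathbb{E}_{(i,j)\sim S_n}\big[\,\ell\big(\langle \omega_i,\omega_j\rangle,\, a_{ij}\big)\big],
\]

where $\omega_i \in \mathbb{R}^d$ is the embedding of node $i$, $a_{ij}$ is the adjacency indicator, $\ell$ is the per-pair loss (for example the cross-entropy with negative sampling used by node2vec), and $S_n$ is the freely chosen subsampling scheme over node pairs. The results summarised above concern the asymptotic behaviour of minimisers of such objectives when the graph is exchangeable.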
We argue the case for Gaussian Belief Propagation (GBP) as a strong algorithmic framework for the distributed, generic and incremental probabilistic estimation we need in Spatial AI as we aim for high-performance smart robots and devices that operate within the constraints of real products. Processor hardware is changing rapidly, and GBP has the right character to take advantage of highly distributed processing and storage while estimating global quantities, as well as offering great flexibility. We present a detailed tutorial on GBP, relating it to the standard factor graph formulation used in robotics and computer vision, and give several simulation examples with code which demonstrate its properties.
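For reference, the standard GBP updates in the information form used by the factor-graph formulation are as follows (generic notation). The message from variable $x_i$ to factor $f$ sums the information from all other adjacent factors:

\[
\eta_{i\to f} = \sum_{f'\in N(i)\setminus\{f\}} \eta_{f'\to i}, \qquad
\Lambda_{i\to f} = \sum_{f'\in N(i)\setminus\{f\}} \Lambda_{f'\to i}.
\]

The message from factor $f$ to variable $x_i$ combines the (linearised) factor potential with the incoming messages from the factor's other variables and marginalises onto $x_i$; writing the combined information vector and matrix in blocks, with subscript $a$ for $x_i$ and $b$ for the remaining variables,

\[
\eta_{f\to i} = \eta_a - \Lambda_{ab}\Lambda_{bb}^{-1}\eta_b, \qquad
\Lambda_{f\to i} = \Lambda_{aa} - \Lambda_{ab}\Lambda_{bb}^{-1}\Lambda_{ba}.
\]

The belief at $x_i$ is the sum of all incoming factor-to-variable messages.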
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
A Digital Twin (DT) is a simulation of a physical system that provides information to make decisions that add economic, social or commercial value. The behaviour of a physical system changes over time, so a DT must be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems, geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a light-weight DT allowing the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies using a production gas turbine engine system demonstrate the accuracy of the digital representation for real-world, time-varying physical systems.
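A minimal sketch of the on-board prioritisation idea in (1): score each data window by how poorly the light-weight DT predicts it, and transmit only the highest-scoring windows. The scoring rule, transfer budget and stand-in model are illustrative assumptions, not the paper's specific method.

```python
# Illustrative sketch (not the paper's implementation): prioritise on-board
# data windows by the light-weight DT's prediction error and transmit only
# the most informative ones. Model, budget and data are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def lightweight_dt_predict(window):
    """Stand-in for the on-board DT: predict the next sample from a window."""
    return window.mean()                      # placeholder model

def priority(window, observed_next):
    return abs(observed_next - lightweight_dt_predict(window))

windows = [rng.normal(size=50) for _ in range(200)]   # buffered sensor windows
next_samples = [rng.normal() for _ in range(200)]     # what was actually observed next

scores = np.array([priority(w, y) for w, y in zip(windows, next_samples)])
budget = 10                                           # parsimonious transfer budget
to_transmit = np.argsort(scores)[-budget:]            # highest prediction error first
print("transmit windows:", sorted(to_transmit.tolist()))
```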
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with predicting the future motion of "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD map with 3D lane and crosswalk geometry, sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
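As a purely illustrative data structure (not the official av2 API), the per-timestep track state a motion-forecasting model would consume for each scored actor can be sketched as:

```python
# Illustrative only (not the official av2 API): the kind of per-timestep track
# state and history a motion-forecasting model consumes for each "scored actor".
from dataclasses import dataclass
from typing import List

@dataclass
class TrackState:
    timestamp_ns: int
    x: float            # map-frame position (m)
    y: float
    heading_rad: float
    vx: float           # velocity components (m/s)
    vy: float
    category: str       # e.g. "vehicle", "pedestrian", "cyclist"

@dataclass
class ScoredActorTrack:
    actor_id: str
    history: List[TrackState]   # observed past states; the model predicts the
                                # actor's future motion from these plus the HD map
```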
We present a Machine Learning (ML) case study to illustrate the challenges of clinical translation for a real-time AI-empowered echocardiography system, using data from ICU patients in LMICs. The case study covers data preparation, curation and labelling of 2D ultrasound videos from 31 ICU patients in LMICs, followed by model selection, validation and deployment of three thinner neural networks to classify the apical four-chamber view (4CV). Results of the ML heuristics showed promising implementation, validation and application of thinner networks to classify 4CV with limited datasets. We conclude by noting the need for (a) datasets that improve the diversity of demographics and diseases, and (b) further investigation of thinner models that can run on low-cost hardware for clinical translation in the ICU in LMICs. The code and other resources to reproduce this work are available at https://github.com/vital-ultrasound/ai-assisted-echocardiography-for-low-resource-countries.
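A minimal sketch of a "thinner" frame classifier of the kind described, assuming single-channel ultrasound frames and a binary 4CV / not-4CV output; it is not one of the three networks evaluated in the study.

```python
# Minimal sketch (not one of the study's three networks): a thin CNN that
# classifies a single-channel ultrasound frame as apical four-chamber view
# (4CV) or not. Channel widths and input size are illustrative assumptions.
import torch

thin_classifier = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(16, 2))                  # logits: [not-4CV, 4CV]

frames = torch.randn(4, 1, 128, 128)         # a small batch of placeholder frames
logits = thin_classifier(frames)
print(logits.argmax(dim=1))                  # predicted view label per frame
```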