Recently, neural scene representations have provided visually impressive results for 3D scenes; however, their study and development have been mostly limited to the visualisation of virtual models in computer graphics or computer vision, without explicitly considering sensor and pose uncertainty. Using these novel scene representations in robotics applications, however, requires accounting for this uncertainty in the neural map. The aim of this paper is therefore to propose a novel method for training {\em probabilistic neural scene representations} with uncertain training data, which could enable the inclusion of these representations in robotics applications. Acquiring images with cameras or depth sensors entails inherent uncertainty, and furthermore, the camera poses used to learn the 3D model are also imperfect. If these measurements are used for training without taking their uncertainty into account, the resulting model is non-optimal, and the learned scene representation may contain artifacts such as blur and uneven geometry. In this work, the problem of integrating uncertainty into the learning process is investigated by focusing, in a probabilistic manner, on training with uncertain information. The proposed method involves explicitly augmenting the training likelihood with an uncertainty term, such that the probability distribution learned by the network is optimised with respect to the training uncertainty. It is shown that, in addition to more precise and consistent geometry, this also leads to more accurate image rendering quality. Validation on synthetic and real datasets shows that the proposed approach outperforms state-of-the-art methods. The results demonstrate that the proposed method can render novel high-quality views even when the training data is limited.
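The abstract above describes augmenting the training likelihood with an explicit uncertainty term so that uncertain measurements contribute less to learning. A minimal sketch of one such term is a heteroscedastic Gaussian negative log-likelihood; the function name and NumPy formulation below are illustrative, not the paper's implementation:

```python
import numpy as np

def uncertainty_weighted_nll(pred, target, sigma):
    """Gaussian negative log-likelihood for one measurement.

    Measurements with large sigma (uncertain pixels or poses) contribute
    less to the loss, so training focuses on reliable data.
    """
    var = sigma ** 2
    return 0.5 * ((pred - target) ** 2 / var + np.log(2.0 * np.pi * var))

# For the same residual, a reliable measurement (small sigma) is
# penalised far more than an uncertain one (large sigma).
err_reliable = uncertainty_weighted_nll(1.0, 0.0, sigma=0.1)
err_uncertain = uncertainty_weighted_nll(1.0, 0.0, sigma=1.0)
assert err_reliable > err_uncertain
```

The log-variance term prevents the trivial solution of inflating sigma everywhere; in practice such a loss is summed over rendered pixels.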
This paper addresses the problem of end-to-end self-supervised prediction of depth and ego-motion. Given a sequence of raw images, the objective is to predict both geometry and ego-motion via a self-supervised photometric loss. The architecture is designed using convolutional and transformer modules. This leverages the benefits of both: the inductive bias of CNNs and the multi-head attention of transformers, enabling a rich spatio-temporal representation that leads to accurate depth prediction. Previous work has attempted to solve this problem with multi-modal input/output using supervised ground-truth data, which is impractical because it requires a large annotated dataset. Instead, this paper predicts depth and ego-motion using only self-supervised raw images as input. The method performs well on the KITTI dataset benchmark, with several performance criteria even comparable to prior non-predictive self-supervised monocular depth inference methods.
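The self-supervised photometric loss mentioned above is commonly instantiated as a weighted mix of a structural (SSIM-based) term and an L1 term between the target frame and a warped source frame. The sketch below uses whole-image SSIM statistics for brevity, whereas practical implementations use local windows; the function names are illustrative:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Whole-image SSIM; real implementations compute it over local windows."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def photometric_loss(target, warped, alpha=0.85):
    """Weighted mix of DSSIM and L1 between the target frame and a
    source frame warped with the predicted depth and ego-motion."""
    l1 = np.abs(target - warped).mean()
    dssim = (1.0 - ssim_global(target, warped)) / 2.0
    return alpha * dssim + (1.0 - alpha) * l1

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
# A perfect reconstruction incurs (numerically) zero loss.
assert photometric_loss(img, img) < 1e-9
```

Minimising this loss over a video sequence supervises both the depth and the ego-motion networks without any ground-truth labels.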
Neural scene representations, such as Neural Radiance Fields (NeRF), are based on training a multilayer perceptron (MLP) using a set of colour images with known poses. An increasing number of devices now produce RGB-D (colour + depth) information, which is important for a wide range of tasks. The aim of this paper is therefore to investigate what improvements can be made to these promising implicit representations by incorporating depth information alongside colour images. In particular, the recently proposed mip-NeRF approach, which uses conical frustums instead of rays for volume rendering, allows one to account for the varying area of a pixel with distance from the camera centre. The proposed method additionally models depth uncertainty. This addresses major limitations of NeRF-based approaches, including improving the accuracy of geometry, reducing artifacts, speeding up training, and shortening prediction time. Experiments were performed on well-known benchmark scenes, and comparisons show improved accuracy in scene geometry and photometric reconstruction, while reducing training time by a factor of 3-5.
Learning a 3D representation of a scene has been a challenging problem for decades in computer vision. Recent advances in implicit neural representation from images using neural radiance fields (NeRF) have shown promising results. Limitations of previous NeRF-based methods include long training times and inaccurate underlying geometry. The proposed method takes advantage of RGB-D data to reduce training time by leveraging depth sensing to improve local sampling. This paper proposes a depth-guided local sampling strategy and a smaller neural network architecture to achieve faster training time without compromising quality.
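One way to realise the depth-guided local sampling described above is to draw ray samples from a distribution concentrated around the sensor's measured depth, rather than stratifying them uniformly over the full ray extent. The function below is a hypothetical sketch of that idea, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def depth_guided_samples(sensor_depth, sigma, n_samples, near, far):
    """Concentrate samples along a ray around the measured depth
    (Gaussian with scale sigma), instead of spreading them uniformly
    over [near, far]. Fewer samples are wasted in empty space."""
    t = rng.normal(loc=sensor_depth, scale=sigma, size=n_samples)
    return np.sort(np.clip(t, near, far))

# 64 samples clustered around a measured depth of 2 m on a [0.1, 10] m ray.
samples = depth_guided_samples(2.0, 0.05, 64, near=0.1, far=10.0)
```

Since most samples land near the surface, the MLP can be queried far less often per ray, which is one plausible source of the reported training-time reduction.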
This paper proposes a self-supervised monocular image-to-depth prediction framework, trained end-to-end with a photometric loss, that handles not only 6-DOF camera motion but also 6-DOF moving object instances. Self-supervision is performed by warping images across a video sequence using depth and scene motion, including object instances. One novelty of the proposed method is the use of the multi-head attention of a transformer network, which matches moving objects over time and models their interactions and dynamics. This enables accurate and robust pose estimation for each object instance. Most image-to-depth prediction frameworks assume a rigid scene, which significantly degrades their performance on dynamic objects. Only a few SOTA papers account for dynamic objects. The proposed method is shown to outperform these methods on standard benchmarks, and the impact of dynamic motion on these benchmarks is also exposed. Furthermore, the proposed image-to-depth prediction framework is shown to be competitive with SOTA video-to-depth prediction frameworks.
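The warping operation underlying this kind of self-supervision reprojects each target pixel into the source frame using its predicted depth and a rigid 6-DOF motion; with moving instances, each object carries its own (R, t) in addition to the camera's. A minimal single-pixel sketch (the intrinsics and function name are illustrative):

```python
import numpy as np

def warp_pixel(u, v, depth, K, R, t):
    """Reproject pixel (u, v) of the target frame into the source frame
    using its predicted depth and a rigid 6-DOF motion (R, t)."""
    p = np.array([u, v, 1.0])
    X = depth * (np.linalg.inv(K) @ p)  # back-project to a 3D point
    Xs = R @ X + t                      # apply the rigid motion
    ps = K @ Xs                         # project into the source camera
    return ps[:2] / ps[2]

# Example pinhole intrinsics (focal length 500 px, principal point (320, 240)).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# The identity motion maps a pixel back onto itself.
uv = warp_pixel(100.0, 80.0, depth=2.0, K=K, R=np.eye(3), t=np.zeros(3))
```

In the full framework this is applied densely with bilinear sampling, and the photometric difference between the warped source and the target drives the depth and motion networks.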
This paper presents two important contributions for conditional Generative Adversarial Networks (cGANs) to improve the wide variety of applications that exploit this architecture. The first main contribution is an analysis of cGANs showing that they are not explicitly conditional. In particular, it is shown that the discriminator, and subsequently the cGAN, does not automatically learn the conditionality between inputs. The second contribution is a new method, called a contrario cGAN, that explicitly models conditionality for both parts of the adversarial architecture via a novel a contrario loss, which involves training the discriminator to learn unconditional (adverse) examples. This leads to a novel data augmentation approach for GANs (a contrario learning) that allows restricting the generator's search space to conditional outputs using adverse examples. Extensive experiments are conducted to evaluate the conditionality of the discriminator by proposing a probability-distribution analysis. Comparisons with cGAN architectures on different applications show significant improvements in performance on well-known datasets, including semantic image synthesis, image segmentation, monocular depth prediction, and "single-label"-to-image translation, using different metrics such as Fréchet Inception Distance (FID), mean Intersection over Union (mIoU), Root Mean Squared Error log (RMSE log), and Number of statistically-Different Bins (NDB).
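One plausible reading of the adverse-example training described above is that the discriminator is shown a third kind of pair: a real image matched with the wrong condition, labelled fake. This forces it to judge the (condition, image) match rather than realism alone. The sketch below is a hypothetical illustration of that batch construction; the function name and toy data are not from the paper:

```python
import numpy as np

def build_discriminator_batch(conds, real_imgs, fake_imgs):
    """Three kinds of (condition, image) pairs for a conditionality-aware
    discriminator:
      (condition, matching real image)   -> label 1
      (condition, generated image)       -> label 0
      (condition, mismatched real image) -> label 0  (adverse pair)
    Adverse pairs are built by cyclically shifting the conditions, so
    each real image is paired with a wrong condition."""
    n = len(real_imgs)
    adverse_conds = np.roll(conds, 1, axis=0)
    pairs = np.concatenate([conds, conds, adverse_conds])
    imgs = np.concatenate([real_imgs, fake_imgs, real_imgs])
    labels = np.concatenate([np.ones(n), np.zeros(n), np.zeros(n)])
    return pairs, imgs, labels

# Toy example: 4 scalar conditions with real and generated "images".
conds = np.arange(4).reshape(4, 1).astype(float)
real = conds * 10.0
fake = real + 0.5
c, x, y = build_discriminator_batch(conds, real, fake)
```

A discriminator trained on such batches can no longer score a realistic image highly under an arbitrary condition, which matches the paper's goal of restricting the generator to conditional outputs.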
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
A Digital Twin (DT) is a simulation of a physical system that provides information to make decisions that add economic, social or commercial value. The behaviour of a physical system changes over time; a DT must therefore be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a light-weight DT allowing the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies are considered using a production gas turbine engine system to demonstrate the digital representation accuracy for real-world, time-varying physical systems.
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
We present a Machine Learning (ML) case study illustrating the challenges of clinical translation for a real-time AI-empowered echocardiography system using data from ICU patients in LMICs. The case study covers data preparation, curation and labelling from 2D ultrasound videos of 31 ICU patients in LMICs, and model selection, validation and deployment of three thinner neural networks to classify the apical four-chamber view (4CV). Results of the ML heuristics show promising implementation, validation and application of thinner networks to classify 4CV with limited datasets. We conclude by noting the need for (a) datasets with improved diversity of demographics and diseases, and (b) further investigation of thinner models that can run on low-cost hardware so the system can be clinically translated in the ICU in LMICs. The code and other resources to reproduce this work are available at https://github.com/vital-ultrasound/ai-assisted-echocardiography-for-low-resource-countries.