In our recent field study on dietary assessment via passive dietary monitoring in Ghana, we collected over 250,000 in-the-wild images. The dataset is part of an ongoing effort to accurately measure individual food and nutrient intake in low- and middle-income countries with passive monitoring camera technologies. The current dataset involves 20 households (74 subjects) in both rural and urban regions of Ghana, and two different types of wearable cameras were used in the study. Once initiated, the wearable cameras continuously capture the subjects' activities, producing a massive amount of data that needs to be cleaned and annotated before analysis can be conducted. To ease the data post-processing and annotation tasks, we propose a novel self-supervised learning framework that clusters the large volume of egocentric images into separate events. Each event consists of a sequence of temporally contiguous and contextually similar images. By clustering images into separate events, annotators and dietitians can examine and analyze the data more efficiently, facilitating the subsequent dietary assessment process. Validated on a held-out test set with ground-truth labels, the proposed framework outperforms baselines in terms of both clustering quality and classification accuracy.
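Since the abstract does not spell out the clustering procedure, the following minimal Python sketch illustrates one plausible reading of it: images are first embedded (e.g., by a self-supervised encoder), and the stream is segmented into a new event wherever the cosine similarity between consecutive embeddings drops below a threshold. The function name and the threshold value are hypothetical.

```python
# Illustrative sketch only: segments an egocentric image stream into events by
# thresholding the cosine similarity of consecutive image embeddings.
import numpy as np

def segment_into_events(embeddings: np.ndarray, threshold: float = 0.85) -> list[list[int]]:
    """Group temporally contiguous, contextually similar frames into events.

    embeddings: (N, D) array of per-image features (e.g., from a
                self-supervised encoder), in capture order.
    threshold:  hypothetical similarity cutoff; below it, a new event starts.
    """
    # L2-normalise so the dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    events, current = [], [0]
    for i in range(1, len(normed)):
        if normed[i] @ normed[i - 1] >= threshold:
            current.append(i)          # contextually similar: same event
        else:
            events.append(current)     # similarity dropped: close the event
            current = [i]
    events.append(current)
    return events
```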
There has been growing interest in applying machine learning techniques to relational structured data based on an observed graph. Often, the graph does not fully represent the true relationships between nodes. In these settings, building a generative model conditioned on the observed graph allows graph uncertainty to be taken into account. Various existing techniques either rely on restrictive assumptions, fail to preserve topological properties in the samples, or are prohibitively expensive for larger graphs. In this work, we introduce the node copying model for constructing a distribution over graphs. A random graph is sampled by replacing each node's neighbors with those of a randomly sampled similar node. The sampled graphs preserve key characteristics of the graph structure without explicitly targeting them. Moreover, sampling from the model is extremely simple and scales linearly with the number of nodes. We demonstrate the usefulness of the copying model in three tasks. First, in node classification, a Bayesian formulation based on node copying achieves higher accuracy in sparse-data settings. Second, we employ the proposed model to mitigate the effect of adversarial attacks on the graph topology. Finally, incorporating the model into a recommender-system setting improves recall over state-of-the-art methods.
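The sampling step lends itself to a direct sketch. The snippet below mirrors the description above: each node's neighbourhood is replaced by that of a randomly chosen similar node. How the per-node list of similar nodes is computed is left as an assumption here; the sampler itself only requires that such lists exist.

```python
# A minimal sketch of node-copying sampling: each node's neighbour set is
# replaced by that of a randomly chosen "similar" node. The source of the
# `similar` lists is an assumption (the paper may derive similarity from,
# e.g., predicted labels or embeddings).
import random

def node_copy_sample(adj: dict[int, set[int]],
                     similar: dict[int, list[int]]) -> dict[int, set[int]]:
    """adj: adjacency sets of the observed graph.
    similar: for each node, a pre-computed list of candidate similar nodes."""
    sampled = {}
    for v in adj:
        donor = random.choice(similar[v]) if similar[v] else v
        sampled[v] = set(adj[donor])  # copy the donor's neighbourhood
    return sampled
```

Because each node is processed once against precomputed candidate lists, sampling is linear in the number of nodes, matching the scaling claim in the abstract.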
Automatic food recognition is the first step towards passive dietary monitoring. In this paper, we tackle food recognition by mining discriminative food regions. Inspired by adversarial erasing, a strategy that progressively discovers discriminative object regions for weakly supervised semantic segmentation, we propose a novel network architecture in which a primary network maintains the base accuracy of classifying an input image, an auxiliary network adversarially mines discriminative food regions, and a region network classifies the resulting mined regions. The global (the original input image) and local (the mined regions) representations are then integrated for the final prediction. The proposed architecture, denoted PAR-Net, is trainable end-to-end and highlights discriminative regions in an online fashion. In addition, we introduce a new fine-grained food dataset named Sushi-50, which consists of 50 different sushi categories. Extensive experiments have been conducted to evaluate the proposed approach. On the three chosen food datasets (Food-101, Vireo-172, and Sushi-50), our approach performs consistently and achieves state-of-the-art results (top-1 test accuracy of 90.4%, 90.2%, and 92.0%, respectively) compared with other existing approaches. The dataset and code are available at https://github.com/jianing-qiu/parnet
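A rough PyTorch sketch of the three-branch idea described above follows: a primary branch classifies the full image, a mining branch scores spatial locations, and a region branch classifies the mined region before the global and local predictions are fused. The backbone, layer sizes, attention-style mining, and logit averaging are illustrative assumptions, and the adversarial erasing objective used to train the miner is omitted.

```python
# Schematic three-branch network (primary / region mining / region
# classification) with global-local fusion; not the paper's exact PAR-Net.
import torch
import torch.nn as nn

class ThreeBranchFoodNet(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.primary = backbone                       # emits a spatial feature map
        self.primary_head = nn.Linear(feat_dim, num_classes)
        self.miner = nn.Conv2d(feat_dim, 1, 1)        # scores spatial locations
        self.region_head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        fmap = self.primary(x)                        # (B, C, H, W) feature map
        global_logits = self.primary_head(fmap.mean(dim=(2, 3)))
        attn = torch.sigmoid(self.miner(fmap))        # mined discriminative region
        region_feat = (fmap * attn).sum(dim=(2, 3)) / attn.sum(dim=(2, 3)).clamp(min=1e-6)
        local_logits = self.region_head(region_feat)
        return (global_logits + local_logits) / 2     # fuse global and local views
```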
The enormous waves of technological innovation over the past several years, marked by advances in AI technologies, are profoundly reshaping industry and society. However, down the road, a key challenge awaits us: our ability to meet rapidly growing scenario-specific demands is severely limited by the cost of acquiring training data. This difficult situation is in essence due to the limitations of the mainstream learning paradigm: we need to train a new model for each new scenario, based on a large quantity of well-annotated data, and commonly from scratch. In tackling this fundamental problem, we go beyond it and develop a new learning paradigm named INTERN. By learning with supervisory signals from multiple sources in multiple stages, the trained model develops strong generalizability. We evaluate our model on 26 well-known datasets that cover four categories of tasks in computer vision. In most cases, our models, adapted with only 10% of the training data in the target domain, consistently outperform counterparts trained on the full data, often by a significant margin. This is an important step towards a promising prospect in which such a model with general vision capability can dramatically reduce the reliance on data, thus expediting the adoption of AI technologies. Furthermore, revolving around our new paradigm, we also introduce a new data system, a new architecture, and a new benchmark, which together form a general vision ecosystem to support its future development in an open and inclusive manner.
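The abstract describes the paradigm only at a high level, so the following runnable sketch merely illustrates the shape of multi-stage, multi-source supervision: one shared backbone trained sequentially on batches tagged with their supervision source, each source with its own head. The tiny models, random data, and two-source setup are placeholders, not INTERN's actual pipeline.

```python
# Schematic multi-stage, multi-source training skeleton (illustrative only).
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(64, 128), nn.ReLU())
heads = {"source_a": nn.Linear(128, 10), "source_b": nn.Linear(128, 5)}
opt = torch.optim.SGD(list(backbone.parameters())
                      + [p for h in heads.values() for p in h.parameters()], lr=0.01)

def stage(source_batches):
    """One pretraining stage: each batch carries the name of its supervision source."""
    for name, x, y in source_batches:
        loss = nn.functional.cross_entropy(heads[name](backbone(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Placeholder data from two supervision sources; later stages would repeat the
# same pattern, and target-domain adaptation would use only a small label subset.
batches = [("source_a", torch.randn(8, 64), torch.randint(0, 10, (8,))),
           ("source_b", torch.randn(8, 64), torch.randint(0, 5, (8,)))]
stage(batches)
```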
In this paper, we address the problem of forecasting the trajectory of an egocentric camera wearer (ego-person) in crowded spaces. The trajectory forecasting ability learned from data of different camera wearers walking around in the real world can be transferred to assist visually impaired people in navigation, and to instill human navigation behaviours in mobile robots, enabling better human-robot interaction. To this end, a new egocentric human trajectory forecasting dataset was constructed, containing the real trajectories of people wearing a camera while navigating crowded spaces, together with rich extracted contextual data. We extract and utilize three different modalities to forecast the camera wearer's trajectory: his/her past trajectory, the past trajectories of nearby people, and the environment, such as scene semantics or the depth of the scene. A Transformer-based encoder-decoder neural network model, integrated with a novel cascaded cross-attention mechanism that fuses the multiple modalities, has been designed to predict the camera wearer's future trajectory. Extensive experiments have been conducted, and the results show that our model outperforms state-of-the-art methods in egocentric human trajectory forecasting.
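The cascaded cross-attention fusion can be sketched compactly. In the illustrative PyTorch module below, ego-trajectory tokens first attend to nearby people's trajectories, and the result then attends to environment features; the ordering of modalities, the dimensions, and the use of nn.MultiheadAttention are assumptions made for illustration.

```python
# Minimal cascaded cross-attention over three modalities (illustrative only).
import torch
import torch.nn as nn

class CascadedCrossAttention(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn_people = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_scene = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, ego_traj, nearby_traj, scene_feat):
        # Stage 1: ego past-trajectory tokens attend to nearby people's trajectories.
        x, _ = self.attn_people(ego_traj, nearby_traj, nearby_traj)
        # Stage 2: the fused tokens attend to environment (semantics/depth) features.
        x, _ = self.attn_scene(x, scene_feat, scene_feat)
        return x  # would be fed to a Transformer decoder to roll out future positions

fused = CascadedCrossAttention()(torch.randn(2, 12, 128),   # ego past trajectory
                                 torch.randn(2, 30, 128),   # nearby people
                                 torch.randn(2, 49, 128))   # scene features
```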
Accurate prediction of future person location and movement trajectory from an egocentric wearable camera can benefit a wide range of applications, such as assisting visually impaired people in navigation, and the development of mobility assistance for people with disability. In this work, a new egocentric dataset was constructed using a wearable camera, with 8,250 short clips of a targeted person either walking 1) toward, 2) away from, or 3) across the camera wearer in indoor environments, or 4) staying still in the scene, and 13,817 person bounding boxes were manually labelled. Apart from the bounding boxes, the dataset also contains the estimated pose of the targeted person as well as the IMU signal of the wearable camera at each time point. An LSTM-based encoder-decoder framework was designed to predict the future location and movement trajectory of the targeted person in this egocentric setting. Extensive experiments have been conducted on the new dataset, showing that the proposed method reliably predicts future person location and trajectory in egocentric videos captured by the wearable camera, outperforming three baselines.
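A compact sketch of an LSTM encoder-decoder for this kind of forecasting follows: past bounding boxes are encoded, and future boxes are rolled out autoregressively. The input/output sizes and the decoding scheme are illustrative assumptions, and the pose and IMU modalities mentioned above are omitted for brevity.

```python
# Illustrative LSTM encoder-decoder for future bounding-box forecasting.
import torch
import torch.nn as nn

class TrajEncoderDecoder(nn.Module):
    def __init__(self, in_dim=4, hidden=64, out_dim=4):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(out_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, out_dim)

    def forward(self, past_boxes, horizon=10):
        _, state = self.encoder(past_boxes)          # summarise observed motion
        step = past_boxes[:, -1:, :]                 # start from the last observation
        outputs = []
        for _ in range(horizon):                     # autoregressive roll-out
            out, state = self.decoder(step, state)
            step = self.proj(out)
            outputs.append(step)
        return torch.cat(outputs, dim=1)             # (B, horizon, 4) future boxes

pred = TrajEncoderDecoder()(torch.randn(2, 8, 4))    # 8 observed steps -> 10 future
```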
Event cameras, offering high temporal resolutions and high dynamic ranges, have brought a new perspective to address common challenges (e.g., motion blur and low light) in monocular depth estimation. However, how to effectively exploit the sparse spatial information and rich temporal cues from asynchronous events remains a challenging endeavor. To this end, we propose a novel event-based monocular depth estimator with recurrent transformers, namely EReFormer, which is the first pure transformer with a recursive mechanism to process continuous event streams. Technically, for spatial modeling, a novel transformer-based encoder-decoder with a spatial transformer fusion module is presented, having better global context information modeling capabilities than CNN-based methods. For temporal modeling, we design a gate recurrent vision transformer unit that introduces a recursive mechanism into transformers, improving temporal modeling capabilities while alleviating the expensive GPU memory cost. The experimental results show that our EReFormer outperforms state-of-the-art methods by a clear margin on both synthetic and real-world datasets. We hope that our work will attract further research on transformers in the event-based vision community. Our open-source code can be found in the supplemental material.
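The gate recurrent vision transformer unit is only named here, so the sketch below is speculative: it wraps a GRU-style update gate around transformer-mixed token features so that state can be carried across consecutive event slices. The actual GRViT design in the paper may differ substantially.

```python
# Speculative sketch: a GRU-style gate around transformer features, carrying
# hidden state across a stream of event slices.
import torch
import torch.nn as nn

class GateRecurrentTransformerUnit(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, tokens, hidden):
        """tokens: (B, N, D) features of the current event slice;
        hidden: (B, N, D) state carried over from the previous slice."""
        mixed = self.attn(tokens + hidden)                     # spatial modeling
        z = torch.sigmoid(self.gate(torch.cat([mixed, hidden], dim=-1)))
        return z * mixed + (1 - z) * hidden                    # gated temporal update

unit = GateRecurrentTransformerUnit()
h = torch.zeros(2, 16, 128)
for t in range(5):                                            # stream of event slices
    h = unit(torch.randn(2, 16, 128), h)
```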
Recently, Bird's-Eye-View (BEV) representation has gained increasing attention in multi-view 3D object detection, which has demonstrated promising applications in autonomous driving. Although multi-view camera systems can be deployed at low cost, the lack of depth information makes current approaches adopt large models for good performance. Therefore, it is essential to improve the efficiency of BEV 3D object detection. Knowledge Distillation (KD) is one of the most practical techniques to train efficient yet accurate models. However, BEV KD is still under-explored to the best of our knowledge. Different from image classification tasks, BEV 3D object detection approaches are more complicated and consist of several components. In this paper, we propose a unified framework named BEV-LGKD to transfer the knowledge in the teacher-student manner. However, directly applying the teacher-student paradigm to BEV features fails to achieve satisfying results due to heavy background information in RGB cameras. To solve this problem, we propose to leverage the localization advantage of LiDAR points. Specifically, we transform the LiDAR points to BEV space and generate the foreground mask and view-dependent mask for the teacher-student paradigm. It is to be noted that our method only uses LiDAR points to guide the KD between RGB models. As the quality of depth estimation is crucial for BEV perception, we further introduce depth distillation to our framework. Our unified framework is simple yet effective and achieves a significant performance boost. Code will be released.
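The LiDAR-guided masking step can be illustrated with a short sketch: LiDAR points are splatted into a BEV occupancy mask, which then restricts the teacher-student feature-mimicking loss to foreground cells. The grid resolution, extent, and loss form below are assumptions for illustration, not the exact BEV-LGKD formulation, and the view-dependent mask and depth distillation are not shown.

```python
# Illustrative LiDAR-to-BEV foreground mask and masked feature distillation loss.
import torch

def lidar_bev_mask(points, grid=(128, 128), extent=51.2):
    """points: (N, 2) LiDAR x/y in metres -> (H, W) binary foreground mask."""
    mask = torch.zeros(grid)
    idx = ((points + extent) / (2 * extent) * torch.tensor(grid)).long()
    idx = idx.clamp(0, grid[0] - 1)        # keep indices inside the BEV grid
    mask[idx[:, 0], idx[:, 1]] = 1.0
    return mask

def masked_kd_loss(student_feat, teacher_feat, mask):
    """Feature-mimicking loss applied only where LiDAR indicates foreground."""
    diff = (student_feat - teacher_feat) ** 2 * mask   # (B, C, H, W) * (H, W)
    return diff.sum() / mask.sum().clamp(min=1.0)
```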
The advances in deep learning (DL) techniques have the potential to deliver transformative technological breakthroughs to numerous complex tasks in modern power systems that suffer from increasing uncertainty and nonlinearity. However, the vulnerability of DL has yet to be thoroughly explored in power system tasks under various physical constraints. This work, for the first time, proposes a novel physics-constrained backdoor poisoning attack, which embeds the undetectable attack signal into the learned model and only performs the attack when it encounters the corresponding signal. The paper illustrates the proposed attack on the real-time fault line localization application. Furthermore, the simulation results on the 68-bus power system demonstrate that DL-based fault line localization methods are not robust to our proposed attack, indicating that backdoor poisoning attacks pose real threats to DL implementations in power systems. The proposed attack pipeline can be easily generalized to other power system tasks.
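The physics-constrained trigger design is the paper's contribution and is not detailed here, so the sketch below only illustrates the generic backdoor data-poisoning recipe it builds on (a BadNets-style additive trigger plus label remapping). The `poison` helper and all of its parameters are hypothetical.

```python
# Generic, simplified backdoor data poisoning: a small trigger pattern is added
# to a subset of training measurements whose labels are remapped to an
# attacker-chosen target, so the trained model associates trigger -> target.
import numpy as np

def poison(X, y, trigger, target_label, rate=0.05, rng=np.random.default_rng(0)):
    """X: (N, D) float training measurements; trigger: (D,) additive pattern."""
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    X[idx] += trigger            # embed the (ideally inconspicuous) signal
    y[idx] = target_label        # model learns to fire only on the trigger
    return X, y
```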
Depth estimation is critical for various important real-world applications, such as autonomous driving. However, it suffers severe performance degradation in high-speed scenarios, since conventional cameras can only capture blurred images. To address this problem, the spike camera is designed to capture per-pixel luminance intensity at a high frame rate. However, depth estimation with a spike camera remains very challenging for traditional monocular or stereo depth estimation algorithms, which are based on photometric consistency. In this paper, we propose a novel Uncertainty-Guided Depth Fusion (UGDF) framework to fuse the predictions of monocular and stereo depth estimation networks for spike cameras. Our framework is motivated by the observation that stereo spike depth estimation achieves better results at close range, whereas monocular spike depth estimation obtains better results at long range. We therefore introduce a dual-task depth estimation architecture with a joint training strategy, and estimate the distributed uncertainty to fuse the monocular and stereo results. To demonstrate the advantage of spike depth estimation over traditional camera-based depth estimation, we contribute a spike-depth dataset named CitySpike20k, which contains 20k paired samples, for spike depth estimation. UGDF achieves state-of-the-art results on CitySpike20k, surpassing all monocular and stereo spike depth estimation baselines. We conduct extensive experiments to evaluate the effectiveness and generalization of our method on CitySpike20k. To our knowledge, our framework is the first dual-task fusion framework for spike-camera depth estimation. Code and dataset will be released.
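A simplified sketch of uncertainty-guided fusion follows: per-pixel depth maps from the monocular and stereo branches are blended with weights inversely related to their predicted uncertainties. The softmax weighting is an illustrative choice; the paper's exact fusion rule may differ.

```python
# Illustrative per-pixel uncertainty-weighted fusion of two depth predictions.
import torch

def fuse_depth(mono_depth, mono_unc, stereo_depth, stereo_unc):
    """All inputs are (B, 1, H, W); *_unc are predicted per-pixel uncertainties."""
    weights = torch.softmax(torch.stack([-mono_unc, -stereo_unc]), dim=0)
    return weights[0] * mono_depth + weights[1] * stereo_depth  # confident branch dominates

depth = fuse_depth(torch.rand(1, 1, 4, 4), torch.rand(1, 1, 4, 4),
                   torch.rand(1, 1, 4, 4), torch.rand(1, 1, 4, 4))
```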