As the perception range of LiDAR increases, LiDAR-based 3D object detection becomes a dominant task in long-range perception for autonomous driving. Mainstream 3D object detectors usually build dense feature maps in the network backbone and prediction head. However, the computational and spatial cost of dense feature maps is quadratic in the perception range, which makes them hardly scale to the long-range setting. To enable efficient long-range LiDAR-based object detection, we build a fully sparse 3D object detector (FSD), whose computational and spatial cost is roughly linear and independent of the perception range. FSD is built upon a general sparse voxel encoder and a novel Sparse Instance Recognition (SIR) module. SIR first groups the points into instances and then applies instance-wise feature extraction and prediction. In this way, SIR resolves the issue of the missing center feature, which hinders the design of fully sparse architectures for all center-based or anchor-based detectors. Moreover, by grouping points into instances, SIR avoids the time-consuming neighbor queries of previous point-based methods. We conduct extensive experiments on the large-scale Waymo Open Dataset to reveal the working mechanism of FSD and report state-of-the-art performance. To demonstrate the superiority of FSD in long-range detection, we also conduct experiments on the Argoverse 2 dataset, whose perception range (200m) is much larger than that of the Waymo Open Dataset (75m). On such a large perception range, FSD achieves state-of-the-art performance and is 2.4× faster than its dense counterpart. Code will be released at https://github.com/tusimple/sst.
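To make the instance-wise processing concrete, here is a minimal Python/NumPy sketch of the grouping-then-pooling idea behind SIR. All names are illustrative and this is not the authors' implementation: points are grouped by an instance assignment and their features are pooled per instance, so the cost follows the number of points rather than the size of a dense feature map.

```python
import numpy as np

def sparse_instance_pooling(point_feats, instance_ids):
    """Toy illustration of instance-wise feature extraction.

    point_feats:  (N, C) per-point features from a sparse voxel encoder.
    instance_ids: (N,) integer instance assignment for each point
                  (e.g., produced by a clustering / grouping step).
    Returns a dict mapping instance id -> pooled (C,) feature.
    """
    pooled = {}
    for inst in np.unique(instance_ids):
        mask = instance_ids == inst
        # Max-pool the features of the points belonging to this instance.
        pooled[int(inst)] = point_feats[mask].max(axis=0)
    return pooled

# Usage: 6 points forming 2 instances.
feats = np.random.randn(6, 32).astype(np.float32)
ids = np.array([0, 0, 0, 1, 1, 1])
instance_feats = sparse_instance_pooling(feats, ids)
print({k: v.shape for k, v in instance_feats.items()})  # {0: (32,), 1: (32,)}
```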
In LiDAR-based 3D object detection for autonomous driving, the ratio of the object size to the input scene size is significantly smaller than in the 2D detection case. Overlooking this difference, many 3D detectors directly follow the common practice of 2D detectors and downsample the feature maps even after quantizing the point clouds. In this paper, we first rethink how such a multi-stride stereotype affects LiDAR-based 3D object detectors. Our experiments point out that the downsampling operation brings few advantages and causes inevitable information loss. To tackle this issue, we propose the Single-stride Sparse Transformer (SST), which maintains the original resolution from the beginning of the network. Armed with transformers, our method addresses the problem of the insufficient receptive field in single-stride architectures. It also cooperates well with the sparsity of point clouds and naturally avoids expensive computation. Finally, our SST achieves state-of-the-art results on the large-scale Waymo Open Dataset. It is worth mentioning that, thanks to the single-stride design, our method achieves exciting performance (83.8 LEVEL_1 AP) on small-object (pedestrian) detection. Code will be released at https://github.com/tusimple/sst.
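The following is a toy PyTorch sketch of the single-stride, sparsity-friendly idea, with illustrative names and window logic (not the paper's architecture): only non-empty voxels are kept at full resolution, and self-attention runs within local windows of occupied voxels, so empty space costs nothing.

```python
import torch
from torch import nn

def single_stride_window_attention(voxel_feats, voxel_coords, window_size=4, num_heads=4):
    """Toy sketch: attention over non-empty voxels grouped into local windows.

    voxel_feats:  (M, C) features of the M non-empty voxels (full resolution, no stride).
    voxel_coords: (M, 2) integer BEV coordinates of those voxels.
    """
    C = voxel_feats.shape[1]
    attn = nn.MultiheadAttention(embed_dim=C, num_heads=num_heads, batch_first=True)
    # Assign each voxel to a window by integer-dividing its coordinates.
    win_ids = voxel_coords // window_size
    keys = win_ids[:, 0] * 100000 + win_ids[:, 1]   # unique id per window
    out = torch.empty_like(voxel_feats)
    for k in keys.unique():
        idx = (keys == k).nonzero(as_tuple=True)[0]
        x = voxel_feats[idx].unsqueeze(0)            # (1, n_k, C)
        y, _ = attn(x, x, x)                         # self-attention inside the window
        out[idx] = y.squeeze(0)
    return out

# Usage: 10 occupied voxels with 64-dim features.
feats = torch.randn(10, 64)
coords = torch.randint(0, 16, (10, 2))
print(single_stride_window_attention(feats, coords).shape)  # torch.Size([10, 64])
```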
Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance. Recent studies have found that directly using out-of-distribution data (i.e., open-set samples) for training in a semi-supervised manner would harm generalization performance. In this work, we show theoretically that, from a Bayesian perspective, out-of-distribution data can still be leveraged to augment the minority classes. Based on this motivation, we propose a novel method called Open-sampling, which utilizes open-set noisy labels to re-balance the class priors of the training dataset. For each open-set instance, the label is sampled from our pre-defined distribution that is complementary to the distribution of the original class priors. We empirically show that Open-sampling not only re-balances the class priors but also encourages the neural network to learn separable representations. Extensive experiments demonstrate that our proposed method significantly outperforms existing data re-balancing methods and can boost the performance of existing state-of-the-art methods.
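A toy sketch of the re-balancing idea follows, assuming one simple choice of complementary distribution (the paper defines its own pre-defined distribution); function names and the exact formula are illustrative.

```python
import numpy as np

def complementary_prior(class_counts):
    """Toy complementary label distribution for Open-sampling-style re-balancing.

    class_counts: per-class sample counts of the (imbalanced) labeled set.
    Minority classes receive more mass, so labels drawn for open-set samples
    push the effective class prior toward balance.
    """
    counts = np.asarray(class_counts, dtype=np.float64)
    prior = counts / counts.sum()          # original (imbalanced) class prior
    comp = 1.0 - prior                     # complement: favor rare classes
    return comp / comp.sum()

def sample_open_set_labels(class_counts, num_open, rng=None):
    """Draw pseudo-labels for `num_open` open-set (out-of-distribution) samples."""
    rng = rng or np.random.default_rng(0)
    dist = complementary_prior(class_counts)
    return rng.choice(len(class_counts), size=num_open, p=dist)

# Usage: a long-tailed 4-class dataset.
counts = [5000, 1000, 200, 50]
print(complementary_prior(counts).round(3))
print(sample_open_set_labels(counts, num_open=10))
```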
Learning with noisy labels is a practically challenging problem in weakly supervised learning. In the existing literature, open-set noise has always been considered poisonous for generalization, similar to closed-set noise. In this paper, we empirically show that open-set noisy labels can be non-toxic and even benefit robustness against inherent noisy labels. Inspired by this observation, we propose a simple yet effective regularization that introduces Open-set samples with Dynamic Noisy Labels (ODNL) into training. With ODNL, the extra capacity of the neural network can be largely consumed in a way that does not interfere with learning patterns from clean data. Through the lens of SGD noise, we show that the noise induced by our method is random-direction and unbiased, which may help the model converge to a flat minimum with superior stability and enforce the model to produce conservative predictions on out-of-distribution instances. Extensive experimental results on benchmark datasets with various types of noisy labels demonstrate that the proposed method not only enhances the performance of many existing robust algorithms but also achieves significant improvement on out-of-distribution detection tasks, even in the label-noise setting.
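A toy PyTorch sketch of the ODNL-style regularization is given below, with illustrative names and weighting: open-set inputs are added to each training step with labels re-drawn uniformly at random, so the extra targets change dynamically over training.

```python
import torch
import torch.nn.functional as F

def odnl_loss(model, clean_x, clean_y, open_x, num_classes, weight=1.0):
    """Toy training objective with Open-set samples and Dynamic Noisy Labels.

    clean_x / clean_y: a mini-batch of in-distribution data and labels.
    open_x:            a mini-batch of open-set (out-of-distribution) inputs.
    Each call re-draws random labels for the open-set inputs.
    """
    loss_clean = F.cross_entropy(model(clean_x), clean_y)
    noisy_y = torch.randint(0, num_classes, (open_x.shape[0],), device=open_x.device)
    loss_open = F.cross_entropy(model(open_x), noisy_y)
    return loss_clean + weight * loss_open

# Usage with a tiny linear "model".
model = torch.nn.Linear(16, 5)
loss = odnl_loss(model,
                 clean_x=torch.randn(8, 16), clean_y=torch.randint(0, 5, (8,)),
                 open_x=torch.randn(8, 16), num_classes=5)
loss.backward()
```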
Delusive attacks aim to substantially deteriorate the test accuracy of a learning model by slightly perturbing the features of correctly labeled training examples. By formalizing this malicious attack as finding the worst-case training data within a specific $\infty$-Wasserstein ball, we show that minimizing adversarial risk on the perturbed data is equivalent to optimizing an upper bound of natural risk on the original data. This implies that adversarial training can serve as a principled defense against delusive attacks, and thus the test accuracy can be largely recovered by adversarial training. To further understand the internal mechanism of the defense, we disclose that adversarial training resists delusive perturbations by preventing the learner from overly relying on non-robust features in the natural setting. Finally, we complement our theoretical findings with a set of experiments on popular benchmark datasets, which show that the defense withstands six different practical attacks. Both theoretical and empirical results vote for adversarial training when confronted with delusive adversaries.
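Below is a sketch of the suggested defense, assuming a standard PGD-style adversarial training step in PyTorch. The paper formalizes the attack over an $\infty$-Wasserstein ball; this is only a common practical instantiation, with illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=8/255, alpha=2/255, steps=5):
    """One adversarial training step: inner maximization, then outer minimization.

    The inner loop approximately maximizes the loss within an L_inf ball of radius
    eps around the (possibly delusively perturbed) training inputs; the outer step
    minimizes the loss on those worst-case inputs.
    """
    x_adv = x.detach().clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()

# Usage with a tiny classifier over flattened 8x8 "images".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
adversarial_training_step(model, opt, torch.rand(4, 1, 8, 8), torch.randint(0, 10, (4,)))
```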
Adversarial examples crafted by explicit adversaries have drawn significant attention in machine learning. However, the security risk posed by potential false friends has been largely overlooked. In this paper, we unveil the threat of hypocritical examples: inputs that are originally misclassified yet perturbed by a false friend to force correct predictions. While such perturbed examples seem harmless, we point out for the first time that they can be maliciously exploited to conceal the mistakes of a substandard (i.e., not as good as required) model during evaluation. Once a deployer trusts the hypocritical performance and applies the "well-performing" model in real applications, unexpected failures may occur even in benign environments. More seriously, this security risk seems ubiquitous: we find that many types of substandard models are vulnerable to hypocritical examples across multiple datasets. Furthermore, we provide a first attempt to characterize the threat with a metric called hypocritical risk and try to circumvent it via several countermeasures. The results demonstrate the effectiveness of the countermeasures, while the risk remains non-negligible even after adaptive robust training.
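A toy sketch of how a "false friend" could craft such an input is shown below, assuming a PGD-style procedure with illustrative names: the loss with respect to the true label is minimized (rather than maximized) inside a small perturbation budget, so an originally misclassified input is forced to be predicted correctly.

```python
import torch
import torch.nn.functional as F

def hypocritical_perturbation(model, x, y_true, eps=8/255, alpha=2/255, steps=10):
    """Toy 'false friend' perturbation: the opposite of an adversarial example.

    Starting from inputs the model gets wrong, descend the loss w.r.t. the true
    labels within an L_inf ball of radius eps, so a substandard model appears
    to predict correctly during evaluation.
    """
    x_hyp = x.detach().clone()
    for _ in range(steps):
        x_hyp.requires_grad_(True)
        loss = F.cross_entropy(model(x_hyp), y_true)
        grad, = torch.autograd.grad(loss, x_hyp)
        # Descend (not ascend) the loss to force the correct prediction.
        x_hyp = (x_hyp - alpha * grad.sign()).detach()
        x_hyp = torch.min(torch.max(x_hyp, x - eps), x + eps).clamp(0, 1)
    return x_hyp
```

Evaluating a weak classifier on `hypocritical_perturbation(model, x, y_true)` instead of `x` would inflate its apparent accuracy, which is the concealment risk the abstract describes.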
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point clouds tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
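As a toy illustration of the style-adaptation idea, here is a PyTorch sketch of a feed-forward block whose hidden activations are modulated by a style code via per-channel scale and shift terms. The modulation scheme and all names are illustrative; the paper's style-aware adaptive transformer is not reproduced here.

```python
import torch
from torch import nn

class StyleAdaptiveFeedForward(nn.Module):
    """Toy feed-forward block conditioned on a style code."""
    def __init__(self, dim, hidden_dim, style_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, dim)
        self.to_scale_shift = nn.Linear(style_dim, 2 * hidden_dim)

    def forward(self, x, style_code):
        # x: (B, T, dim) content tokens; style_code: (B, style_dim)
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=-1)
        h = self.fc1(x)
        h = h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)  # style-conditioned modulation
        return self.fc2(torch.relu(h))

# Usage: 2 sequences of 5 content tokens, conditioned on a 16-d style code.
ff = StyleAdaptiveFeedForward(dim=64, hidden_dim=128, style_dim=16)
out = ff(torch.randn(2, 5, 64), torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 5, 64])
```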
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.
Deploying reliable deep learning techniques in interdisciplinary applications requires learned models to output accurate and (even more importantly) explainable predictions. Existing approaches typically explicate network outputs in a post-hoc fashion, under an implicit assumption that faithful explanations come from accurate predictions/classifications. We make the opposite claim that explanations boost (or even determine) classification. That is, end-to-end learning of explanation factors to augment discriminative representation extraction could be a more intuitive strategy to inversely assure fine-grained explainability, e.g., in those neuroimaging and neuroscience studies with high-dimensional data containing noisy, redundant, and task-irrelevant information. In this paper, we propose such an explainable geometric deep network dubbed NeuroExplainer, with applications to uncovering altered infant cortical development patterns associated with preterm birth. Given fundamental cortical attributes as network input, our NeuroExplainer adopts a hierarchical attention-decoding framework to learn fine-grained attentions and respective discriminative representations to accurately recognize preterm infants from term-born infants at term-equivalent age. NeuroExplainer learns the hierarchical attention-decoding modules under subject-level weak supervision coupled with targeted regularizers deduced from domain knowledge regarding brain development. These prior-guided constraints implicitly maximize the explainability metrics (i.e., fidelity, sparsity, and stability) during network training, driving the learned network to output detailed explanations and accurate classifications. Experimental results on the public dHCP benchmark suggest that NeuroExplainer yields quantitatively reliable explanation results that are qualitatively consistent with representative neuroimaging studies.
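A toy sketch of prior-guided regularizers on attention maps is given below, assuming simple L1 sparsity and cross-level consistency terms with illustrative names and weights; the paper derives its own constraints from domain knowledge about brain development.

```python
import torch

def attention_regularizers(attn_maps, lambda_sparse=1e-3, lambda_stable=1e-3):
    """Toy explainability regularizers.

    attn_maps: list of (B, N) attention maps from coarse to fine decoding levels.
    Sparsity is encouraged with an L1 term; stability is approximated by keeping
    consecutive decoding levels consistent.
    """
    sparsity = sum(a.abs().mean() for a in attn_maps)
    stability = sum((attn_maps[i + 1] - attn_maps[i]).pow(2).mean()
                    for i in range(len(attn_maps) - 1))
    return lambda_sparse * sparsity + lambda_stable * stability

# Usage: three attention levels over 100 cortical vertices for a batch of 4 subjects.
maps = [torch.rand(4, 100, requires_grad=True) for _ in range(3)]
reg = attention_regularizers(maps)
reg.backward()
```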