Surround-view fisheye perception in valet parking scenes is fundamental and crucial for autonomous driving. Environmental conditions in parking lots differ from those in common public datasets, e.g., imperfect lighting and opacity, which substantially degrades perception performance. Most existing networks trained on public datasets therefore generalize suboptimally to these valet parking scenes, and are further affected by fisheye distortion. In this article, we introduce a new large-scale fisheye dataset, the Fisheye Parking Dataset (FPD), to promote research on diverse real-world surround-view parking cases. Notably, FPD exhibits excellent characteristics for different surround-view perception tasks. In addition, we propose a real-time, distortion-insensitive multi-task framework, the Fisheye Perception Network (FPNet), which improves surround-view fisheye BEV perception through an enhanced fisheye distortion operation and lightweight multi-task designs. Extensive experiments validate the effectiveness of our approach and the dataset's strong generalizability.
Monocular 3D object detection is a low-cost yet challenging task, as it requires generating accurate 3D localization solely from a single image input. Recently developed depth-assisted methods show promising results by using explicit depth maps as intermediate features, which are either precomputed by monocular depth estimation networks or jointly estimated with 3D object detection. However, inevitable errors in the estimated depth priors may lead to misaligned semantic information and 3D localization, resulting in feature smearing and suboptimal predictions. To mitigate this issue, we propose ADD, an Attention-based Depth knowledge Distillation framework with 3D-aware positional encoding. Unlike previous knowledge distillation frameworks that adopt stereo- or LiDAR-based teachers, we build our teacher with the same architecture as the student but with extra ground-truth depth as input. Owing to this teacher design, our framework is seamless, domain-gap free, easily implementable, and compatible with object-wise ground-truth depth. Specifically, we leverage intermediate features and responses for knowledge distillation. Considering long-range 3D dependencies, we propose \emph{3D-aware self-attention} and \emph{target-aware cross-attention} modules for student adaptation. Extensive experiments verify the effectiveness of our framework on the challenging KITTI 3D object detection benchmark. We implement our framework on three representative monocular detectors and achieve state-of-the-art performance with no additional inference cost relative to the baseline models. Our code is available at https://github.com/rockywind/ADD.
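As a rough illustration of the feature- and response-level distillation mentioned above, the sketch below pairs a ground-truth-depth teacher with an identically shaped student. All tensor names, shapes, and loss weights are hypothetical, and the 3D-aware attention modules for student adaptation are omitted; this is not the released ADD implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(stu_feat, tea_feat, stu_resp, tea_resp,
                      w_feat=1.0, w_resp=1.0):
    """Hypothetical feature- and response-level distillation terms.

    stu_feat, tea_feat: (B, C, H, W) intermediate feature maps from the
        student and from the ground-truth-depth teacher (same architecture).
    stu_resp, tea_resp: (B, N, D) per-object regression responses
        (e.g., 3D box parameters) from both models.
    """
    # Feature imitation: L2 between intermediate feature maps.
    loss_feat = F.mse_loss(stu_feat, tea_feat.detach())
    # Response distillation: L1 between predicted 3D box parameters.
    loss_resp = F.l1_loss(stu_resp, tea_resp.detach())
    return w_feat * loss_feat + w_resp * loss_resp

# Toy usage with random tensors standing in for real network outputs.
B, C, H, W, N, D = 2, 256, 48, 160, 10, 7
loss = distillation_loss(torch.randn(B, C, H, W), torch.randn(B, C, H, W),
                         torch.randn(B, N, D), torch.randn(B, N, D))
```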
This work presents a simple Vision Transformer design as a strong baseline for object localization and instance segmentation tasks. Transformers have recently demonstrated competitive performance on image classification. To adapt ViT to object detection and dense prediction tasks, many works inherit the multi-stage design from convolutional networks and heavily customize the ViT architecture. Behind this design, the goal is a better trade-off between computational cost and effective aggregation of multi-scale global context. However, existing works adopt the multi-stage architecture as a black-box solution without a clear understanding of its real benefits. In this paper, we comprehensively study three architectural design choices for ViT, namely spatial reduction, doubled channels, and multi-scale features, and demonstrate that a vanilla ViT architecture can fulfill this goal without hand-crafted multi-scale features, preserving the original ViT design philosophy. We further derive a scaling rule to optimize the trade-off between accuracy and computational cost/model size. By keeping a constant feature resolution and hidden size throughout the encoder blocks, we propose a simple and compact ViT architecture called Universal Vision Transformer (UViT) that achieves strong performance on COCO object detection and instance segmentation tasks.
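The single-scale idea can be sketched in a few lines of PyTorch: one patch embedding followed by a stack of identical transformer blocks that keep the feature resolution and hidden size constant, with no spatial reduction or channel doubling. The hyperparameters below are placeholders chosen for illustration, not the UViT reference configuration.

```python
import torch
import torch.nn as nn

class SingleScaleViT(nn.Module):
    """Constant-resolution, constant-width ViT backbone (illustrative only)."""
    def __init__(self, img_size=640, patch=16, dim=384, depth=12, heads=6):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        num_tokens = (img_size // patch) ** 2
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        # (B, 3, H, W) -> (B, N, dim); resolution and width stay fixed
        # through every encoder block.
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)
        tokens = tokens + self.pos_embed
        return self.encoder(tokens)

feats = SingleScaleViT()(torch.randn(1, 3, 640, 640))  # (1, 1600, 384)
```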
Adversarial attacks, e.g., adversarial perturbations of the input and adversarial samples, pose significant challenges to machine learning and deep learning techniques, including interactive recommendation systems. The latent embedding space of these techniques makes adversarial attacks difficult to detect at an early stage. Recent work on causality suggests that counterfactuals can also be regarded as one way of generating adversarial samples drawn from a distribution different from that of the training samples. We propose to study adversarial examples and attack-agnostic detection for reinforcement learning-based interactive recommendation systems. We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors. Then we strengthen the recommendation system by detecting potential attacks with a deep learning-based classifier trained on the crafted data. Finally, we study the attack strength and frequency of the adversarial examples and evaluate the model on standard datasets with multiple crafting methods. Our extensive experiments show that most adversarial attacks are effective and that both attack strength and attack frequency influence attack performance. A strategically timed attack achieves comparable attack performance with only 1/3 to 1/2 of the attack frequency. Moreover, our black-box detector trained with one crafting method generalizes to several other crafting methods.
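A minimal sketch of one of the crafting strategies above, namely perturbing the input: an FGSM-style one-step gradient-sign perturbation applied to the user-state embedding consumed by the recommendation policy. The policy, shapes, and epsilon are hypothetical, and the counterfactual/intervention-based crafting and the black-box detector are not shown.

```python
import torch

def fgsm_perturb_state(policy, state, target_action, eps=0.05):
    """Craft an adversarial user-state embedding with a one-step
    gradient-sign perturbation (illustrative, not the paper's exact method)."""
    state = state.clone().detach().requires_grad_(True)
    logits = policy(state)                       # (B, num_items) action scores
    loss = torch.nn.functional.cross_entropy(logits, target_action)
    loss.backward()
    # Move the state in the direction that increases the loss.
    return (state + eps * state.grad.sign()).detach()

# Toy usage with a linear scoring policy over 100 candidate items.
policy = torch.nn.Linear(32, 100)
state = torch.randn(4, 32)                       # 4 user states
adv_state = fgsm_perturb_state(policy, state,
                               target_action=torch.randint(0, 100, (4,)))
```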
Zero-shot learning (ZSL) aims to transfer the knowledge learned from seen classes to unseen classes through semantic relatedness. A promising strategy is to learn a global-local representation that incorporates global information with extra localities (i.e., small parts or regions of the input). However, existing methods discover localities from explicit features without mining the intrinsic properties and relationships among regions. In this work, we propose a novel Entropy-guided Reinforced Partial Convolutional Network (ERPCNet), which extracts and aggregates localities progressively based on semantic relevance and visual correlations, without human-annotated regions. ERPCNet uses reinforced partial convolution and entropy guidance; it not only discovers global-cooperative localities dynamically, but also converges faster under policy gradient optimization. We demonstrate ERPCNet's performance through comparisons with state-of-the-art methods under both the ZSL and the generalized zero-shot learning (GZSL) settings on four benchmark datasets. We also show that ERPCNet is time-efficient and explainable through visualization analysis.
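The entropy-guidance ingredient can be illustrated with a generic policy-gradient snippet: the entropy of a hypothetical region-selection distribution is added as a bonus to the REINFORCE objective so that exploration over localities stays high and optimization converges more smoothly. This sketches the general technique only, not ERPCNet's reinforced partial convolution.

```python
import torch

def entropy_guided_pg_loss(region_logits, rewards, beta=0.01):
    """REINFORCE loss over region selections with an entropy bonus.

    region_logits: (B, R) policy scores over R candidate localities.
    rewards: (B,) task reward (e.g., semantic relevance of the chosen region).
    """
    dist = torch.distributions.Categorical(logits=region_logits)
    actions = dist.sample()                      # chosen region per sample
    log_prob = dist.log_prob(actions)            # (B,)
    entropy = dist.entropy()                     # (B,) exploration bonus
    # Maximize expected reward plus entropy -> minimize the negative.
    return -(log_prob * rewards + beta * entropy).mean()

loss = entropy_guided_pg_loss(torch.randn(8, 16), torch.rand(8))
```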
This paper summarizes model improvements and inference-time optimizations for popular anchor-based detectors in autonomous driving scenes. Starting from the high-performing RCNN-RS and RetinaNet-RS detection frameworks designed for generic detection scenes, we study a set of framework improvements that adapt the detectors to better detect small objects in crowded scenes. We then propose a model scaling strategy, scaling both the input resolution and the model size, to achieve a better speed-accuracy trade-off curve. We evaluate our family of models on the real-time 2D detection track of the Waymo Open Dataset (WOD). Within the 70 ms/frame latency constraint on a V100 GPU, our largest Cascade RCNN-RS model achieves 76.9% AP/L1 and 70.1% AP/L2, setting a new state of the art on the WOD real-time 2D detection track. Our fastest RetinaNet-RS model reaches 6.3 ms/frame while maintaining a reasonable detection accuracy of 50.7% AP/L1 and 42.9% AP/L2.
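One way to express such a scaling strategy in code is a small sweep that jointly grows the input resolution and backbone capacity and then picks the largest configuration that fits a latency budget. The configuration names, resolutions, and backbones below are placeholders, not the factors used in the paper.

```python
# Hypothetical scaling sweep: each entry jointly grows input resolution and
# backbone capacity; the paper's exact scaling factors are not reproduced here.
SCALING_CONFIGS = [
    {"name": "fast",  "input_res": (640, 960),  "backbone": "resnet50",  "fpn_channels": 128},
    {"name": "base",  "input_res": (768, 1152), "backbone": "resnet101", "fpn_channels": 256},
    {"name": "large", "input_res": (896, 1344), "backbone": "resnet152", "fpn_channels": 256},
]

def pick_config(latency_budget_ms, measured_latency):
    """Return the largest config (configs are ordered fast -> large) whose
    measured latency fits the budget, e.g., 70 ms/frame on a target GPU."""
    feasible = [c for c in SCALING_CONFIGS
                if measured_latency[c["name"]] <= latency_budget_ms]
    return feasible[-1] if feasible else SCALING_CONFIGS[0]
```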
In this paper, we study contrastive learning from an optimization perspective, aiming to analyze and address a fundamental issue of existing contrastive learning methods that rely on a large batch size or a large dictionary of feature vectors. We consider a global objective for contrastive learning, which contrasts each positive pair with all negative pairs for an anchor point. From the optimization perspective, we explain why existing methods such as SimCLR require a large batch size in order to achieve satisfactory results. To remove this requirement, we propose a memory-efficient stochastic optimization algorithm, named SogCLR, for solving the global objective of contrastive learning. We show that, under reasonable conditions and after a sufficient number of iterations, its optimization error is negligible, or diminishes for a slightly different global contrastive objective. Empirically, we demonstrate that SogCLR with a small batch size (e.g., 256) can achieve performance similar to SimCLR with a large batch size (e.g., 8192) on self-supervised learning tasks on ImageNet-1K. We also attempt to show that the proposed optimization technique is generic and can be applied to other contrastive losses, e.g., the two-way contrastive loss for bimodal contrastive learning. The proposed method is implemented in our open-sourced library LibAUC (www.libauc.org).
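The memory-efficient idea can be sketched as follows: rather than normalizing each anchor against all negatives of a huge batch, a persistent per-sample scalar u[i] tracks a moving average of the exponentiated negative similarities across iterations and serves (with stop-gradient) as the normalization term. The function below is a heavily simplified surrogate with made-up names and shapes; it is not the LibAUC/SogCLR implementation.

```python
import torch

def sogclr_style_loss(feats, ids, u, gamma=0.9, temperature=0.1):
    """Simplified global-contrastive surrogate with moving-average normalization.

    feats: (2B, D) L2-normalized features, B anchors followed by their B
           positives (two augmented views).
    ids:   (B,) dataset indices of the anchors, used to address u.
    u:     (N,) persistent buffer of per-sample moving averages, updated in place.
    """
    B = feats.shape[0] // 2
    anchors, positives = feats[:B], feats[B:]
    sim = anchors @ feats.t() / temperature              # (B, 2B) similarities
    mask = torch.ones_like(sim, dtype=torch.bool)
    mask[torch.arange(B), torch.arange(B)] = False       # drop self-similarity
    mask[torch.arange(B), torch.arange(B) + B] = False   # drop the positive
    neg_exp = torch.exp(sim)[mask].view(B, -1)           # (B, 2B-2) negatives

    # Moving-average estimate of the per-anchor normalization constant.
    with torch.no_grad():
        u[ids] = (1 - gamma) * u[ids] + gamma * neg_exp.mean(dim=1)

    pos = (anchors * positives).sum(dim=1) / temperature
    # Pull positives together; push negatives apart, weighted by the
    # stop-gradient moving-average normalization.
    return (-pos + neg_exp.mean(dim=1) / u[ids].clamp_min(1e-8)).mean()
```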
In this paper, we explore the possibility of building a unified foundation model that can be adapted to both vision-only and text-only tasks. Starting from BERT and ViT, we design a unified transformer consisting of modality-specific tokenizers, a shared transformer encoder, and task-specific output heads. To efficiently pre-train the proposed model jointly on unpaired images and text, we propose two novel techniques: (i) we employ the separately trained BERT and ViT models as teachers and apply knowledge distillation to provide additional, accurate supervision signals for the joint training; (ii) we propose a novel gradient masking strategy to balance the parameter updates from the image and text pre-training losses. We evaluate the jointly pre-trained transformer by fine-tuning it on image classification tasks and natural language understanding tasks, respectively. Experiments show that the resulting unified foundation transformer works surprisingly well on both vision-only and text-only tasks, and that the proposed knowledge distillation and gradient masking strategies can effectively lift performance to approach the level of separately trained models.
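One hypothetical way to realize the gradient-masking idea is sketched below: gradients of the shared encoder are computed separately for the image and text losses, a per-coordinate binary mask decides which modality updates each parameter entry, and the two masked gradients are summed before the optimizer step. The masking rule and probability here are illustrative and do not reproduce the paper's exact strategy.

```python
import torch

def masked_joint_step(shared_params, loss_img, loss_txt, p_img=0.5):
    """Combine image- and text-loss gradients with a per-coordinate mask
    (illustrative only; not the paper's exact strategy)."""
    g_img = torch.autograd.grad(loss_img, shared_params, retain_graph=True)
    g_txt = torch.autograd.grad(loss_txt, shared_params)
    for p, gi, gt in zip(shared_params, g_img, g_txt):
        mask = (torch.rand_like(p) < p_img).float()   # 1 -> image, 0 -> text
        p.grad = mask * gi + (1.0 - mask) * gt        # optimizer.step() uses p.grad
```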
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding 3D points into the multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains robust even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
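The implicit alignment can be pictured with a small sketch: 3D coordinates associated with image pixels (e.g., points sampled along camera rays) and with LiDAR tokens are mapped by a shared MLP into positional embeddings that are added to the corresponding tokens before a joint transformer consumes them. Shapes and module names below are assumptions for illustration, not the CMT codebase.

```python
import torch
import torch.nn as nn

class Coord3DPositionEncoder(nn.Module):
    """Map 3D coordinates to token embeddings (illustrative)."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, coords):          # (..., 3) -> (..., dim)
        return self.mlp(coords)

dim = 256
pe = Coord3DPositionEncoder(dim)
img_tokens = torch.randn(1, 1000, dim)      # flattened image features
pts_tokens = torch.randn(1, 2000, dim)      # voxel/pillar features
img_coords = torch.rand(1, 1000, 3)         # 3D points sampled along camera rays
pts_coords = torch.rand(1, 2000, 3)         # LiDAR token centers

# A shared 3D positional encoding aligns the two modalities implicitly.
tokens = torch.cat([img_tokens + pe(img_coords),
                    pts_tokens + pe(pts_coords)], dim=1)   # (1, 3000, 256)
```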
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG in which entities and relations are composed of free-form text. However, previous work on KG and CKG completion suffers from long-tail relations and newly added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of both graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on this task, covering a series of methods and applications. Specifically, we first introduce the FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing work in terms of the type of KG and the methods employed. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.