Data augmentation (DA) is indispensable in modern machine learning and deep neural networks. The basic idea of DA is to construct new training data that improves the model's generalization, either by adding slightly perturbed versions of existing data or by synthesizing new data. In this work, we review a small but essential subset of DA -- Mix-based Data Augmentation (MixDA), which generates novel samples by mixing multiple examples. Unlike conventional DA approaches that operate on a single sample or require domain knowledge, MixDA is more general in creating a broad spectrum of new data and has received increasing attention in the community. We begin by proposing a new taxonomy that classifies MixDA into Mixup-based, CutMix-based, and hybrid approaches according to a hierarchical view of data mixing. Various MixDA techniques are then comprehensively reviewed at a finer granularity. Owing to its generality, MixDA has penetrated a variety of applications, which are also thoroughly reviewed in this work. We further examine why MixDA works from the perspectives of model performance, generalization, and calibration, while explaining model behavior based on the properties of MixDA. Finally, we recapitulate the critical findings and fundamental challenges of current MixDA studies and outline potential directions for future work. Different from previous related surveys that summarize DA approaches in a specific domain (e.g., images or natural language processing) or cover only a part of MixDA studies, we are the first to provide a systematic survey of MixDA in terms of its taxonomy, methodology, applications, and explainability. This work can serve as a roadmap to MixDA techniques and applications while providing promising directions for researchers interested in this exciting area.
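As a minimal illustration of the Mixup-style mixing this survey covers, the sketch below blends two samples and their soft labels with a Beta-distributed coefficient. The function name and toy data are ours, not taken from the survey.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng()):
    """Blend two examples and their one-hot labels (classic Mixup-style mixing)."""
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    x_mix = lam * x1 + (1.0 - lam) * x2   # convex combination of the inputs
    y_mix = lam * y1 + (1.0 - lam) * y2   # same combination of the soft labels
    return x_mix, y_mix

# toy usage: mix two "images" with one-hot labels
x_a, x_b = np.random.rand(3, 32, 32), np.random.rand(3, 32, 32)
y_a, y_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_new, y_new = mixup(x_a, y_a, x_b, y_b)
```

CutMix-based variants replace the convex combination of pixels with a pasted rectangular region, while keeping the label mixed in proportion to the region's area.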
Designing safety-critical control for robotic manipulators is challenging, especially in cluttered environments. First, the actual trajectory of a manipulator may deviate from the planned one due to complex collision environments and non-trivial dynamics, leading to collisions; second, the feasible space of the manipulator is hard to obtain because the explicit distance functions between collision meshes are unknown. By analyzing the relationship between the safe set and the controlled invariant set, this paper proposes a data-driven control barrier function (CBF) construction method that extracts a CBF from distance samples. Specifically, the CBF guarantees the controlled-invariance property while accounting for the system dynamics. The data-driven method samples the distance function and determines the safe set; the CBF is then synthesized from the safe set by a scenario-based sum-of-squares (SOS) program. Unlike most existing linearization-based approaches, our method preserves the volume of the feasible space for planning without approximation, which helps find a solution in a cluttered environment. The control law is obtained by solving a CBF-based quadratic program in real time, which acts as a safety filter for the desired planning-based controller. Moreover, our method guarantees safety with a proven probabilistic result. Our method is validated on a 7-DOF manipulator in both real and virtual cluttered environments. The experiments show that the manipulator is able to execute tasks where the clearance between obstacles is in millimeters.
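The safety filter described above solves a small quadratic program at every control step. The sketch below is a simplified, illustrative version under the assumption of a single affine CBF constraint, for which the QP reduces to a closed-form projection; the constraint terms `a` and `b` would in practice come from the learned CBF and the system dynamics, and all names here are placeholders.

```python
import numpy as np

def cbf_safety_filter(u_des, a, b):
    """Project a desired control u_des onto the half-space {u : a^T u >= b}.

    Solves  min_u ||u - u_des||^2  s.t.  a^T u >= b.
    With a single affine constraint the minimizer is a closed-form projection,
    so no QP solver is needed in this toy setting.
    """
    slack = a @ u_des - b
    if slack >= 0.0:                       # desired control already satisfies the CBF condition
        return u_des
    return u_des + (-slack / (a @ a)) * a  # minimal correction onto the constraint boundary

# toy usage with a 7-dimensional control input (illustrative values only)
u_nominal = np.array([0.1, -0.2, 0.0, 0.3, 0.0, 0.1, -0.1])
a = np.random.rand(7)                      # constraint direction from the CBF gradient (assumed)
b = 0.5                                    # constraint offset from the CBF condition (assumed)
u_safe = cbf_safety_filter(u_nominal, a, b)
```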
Recently, the dominant DETR-based approaches apply a central-concept spatial prior to accelerate the convergence of Transformer detectors. These methods gradually refine the reference points toward the centers of target objects and imbue object queries with the updated central reference information for spatially conditional attention. However, centralizing reference points may severely deteriorate the queries' saliency and confuse detectors due to the indiscriminative spatial prior. To bridge the gap between the reference points of salient queries and Transformer detectors, we propose SAlient Point-based DETR (SAP-DETR), which treats object detection as a transformation from salient points to instance objects. In SAP-DETR, we explicitly initialize a query-specific reference point for each object query, gradually aggregate them into an instance object, and then predict the distance from each side of the bounding box to these points. By rapidly attending to the query-specific reference region and other conditional extreme regions of the image features, SAP-DETR effectively bridges the gap between salient points and the query-based Transformer detector with significantly faster convergence. Our extensive experiments demonstrate that SAP-DETR converges 1.4 times faster while achieving competitive performance. Under the standard training scheme, SAP-DETR consistently improves upon state-of-the-art approaches by 1.0 AP. Based on ResNet-DC-101, SAP-DETR achieves 46.9 AP.
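To make the box parameterization concrete, the sketch below decodes bounding boxes from query-specific reference points and four predicted point-to-side distances. This is our illustrative reading of that parameterization, not code from the paper.

```python
import torch

def decode_boxes(ref_points, side_dists):
    """Decode boxes from reference points and point-to-side distances.

    ref_points: (N, 2) tensor of (x, y) reference points.
    side_dists: (N, 4) tensor of distances to the (left, top, right, bottom) sides.
    Returns boxes as an (N, 4) tensor in (x1, y1, x2, y2) format.
    """
    x, y = ref_points[:, 0], ref_points[:, 1]
    l, t, r, b = side_dists.unbind(dim=1)
    return torch.stack([x - l, y - t, x + r, y + b], dim=1)

# toy usage: two queries with their reference points and predicted side distances
refs = torch.tensor([[50.0, 40.0], [120.0, 80.0]])
dists = torch.tensor([[10.0, 5.0, 12.0, 8.0], [20.0, 15.0, 25.0, 10.0]])
boxes = decode_boxes(refs, dists)   # (2, 4) boxes in pixel coordinates
```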
Learning powerful representations in bird's-eye view (BEV) for perception tasks is trending and has attracted extensive attention from both industry and academia. Conventional approaches in most autonomous driving algorithms perform detection, segmentation, tracking, etc., in the front or perspective view. As sensor configurations become increasingly complex, integrating multi-source information from different sensors and representing features in a unified view becomes vital. BEV perception inherits several advantages: representing the surrounding scene in BEV is intuitive and fusion-friendly, and representing objects in BEV is most desirable for subsequent modules such as planning and/or control. The core problems of BEV perception lie in (a) how to reconstruct the lost 3D information via the view transformation from perspective view to BEV; (b) how to acquire ground-truth annotations in the BEV grid; (c) how to formulate the pipeline to incorporate features from different sources and views; and (d) how to adapt and generalize algorithms as sensor configurations vary across different scenarios. In this survey, we review recent work on BEV perception and provide an in-depth analysis of different solutions. In addition, several system designs of BEV methods from industry are described. Furthermore, we introduce a full suite of practical guidelines for improving the performance of BEV perception tasks, covering camera, LiDAR, and fusion inputs. Finally, we point out future research directions in this field. We hope this report sheds light on the community and encourages more research on BEV perception. We maintain an active repository that collects the latest work and provides a toolbox with a bag of tricks at https://github.com/openperceptionx/bevperception-survey-recipe.
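As a toy example of the BEV representation discussed above, the snippet below rasterizes a LiDAR point cloud into a simple occupancy grid in bird's-eye view. It only illustrates what a BEV grid is, not any specific view-transformation method from the survey; all ranges and resolutions are arbitrary choices.

```python
import numpy as np

def points_to_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), resolution=0.5):
    """Rasterize (N, 3) LiDAR points into a 2D bird's-eye-view occupancy grid."""
    w = int((x_range[1] - x_range[0]) / resolution)
    h = int((y_range[1] - y_range[0]) / resolution)
    grid = np.zeros((h, w), dtype=np.float32)
    xs = ((points[:, 0] - x_range[0]) / resolution).astype(int)
    ys = ((points[:, 1] - y_range[0]) / resolution).astype(int)
    valid = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    grid[ys[valid], xs[valid]] = 1.0      # mark occupied BEV cells
    return grid

# toy usage with random points in a 100 m x 100 m area
cloud = np.random.uniform(-50.0, 50.0, size=(10000, 3))
bev = points_to_bev(cloud)                # (200, 200) occupancy grid
```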
Most obstacle avoidance algorithms are only effective in specific environments and adapt poorly to new ones. In this paper, we propose a trajectory learning (TL)-based obstacle avoidance algorithm that can learn the implicit avoidance mechanism from trajectories generated by general obstacle avoidance algorithms and thus achieve better adaptability. Specifically, we define a general data structure to describe the obstacle avoidance mechanism. Based on this structure, we transform the learning of an obstacle avoidance algorithm into a multi-class classification problem over direction selection. We then design an artificial neural network (ANN) to fit this multi-class classification function through supervised learning and finally obtain the obstacle avoidance mechanism that produced the observed trajectories. Our algorithm can acquire an avoidance mechanism similar to the one demonstrated in the trajectories and adapts to unseen environments. The automatic learning mechanism simplifies modifying and debugging obstacle avoidance algorithms in applications. Simulation results show that the proposed algorithm can learn the obstacle avoidance strategy from trajectories and achieve better adaptability.
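Viewed this way, the learning problem is an ordinary supervised multi-class classifier over discrete heading choices. The sketch below shows one minimal network of this kind in PyTorch; the input/output sizes and architecture are assumptions for illustration only, not the paper's design.

```python
import torch
import torch.nn as nn

# Toy classifier mapping a local observation (e.g., range readings plus goal direction)
# to one of K discrete heading choices, trained with standard cross-entropy.
K_DIRECTIONS = 8
model = nn.Sequential(
    nn.Linear(24, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, K_DIRECTIONS),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one supervised step on a toy batch of (observation, demonstrated direction) pairs
obs = torch.randn(32, 24)                       # observations extracted from demonstration trajectories
labels = torch.randint(0, K_DIRECTIONS, (32,))  # direction chosen by the demonstrating algorithm
loss = criterion(model(obs), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```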
Car-following (CF) modeling, an essential component of simulating human CF behavior, has attracted increasing research interest in the past decades. This paper advances the state of the art by proposing a novel generative hybrid CF model that achieves high accuracy in characterizing dynamic human CF behaviors and is able to generate realistic human CF behaviors for any given observed, or even unobserved, driving style. Specifically, the ability to accurately capture human CF behaviors is ensured by designing and calibrating an Intelligent Driver Model (IDM) with time-varying parameters. The rationale is that such time-varying parameters can express both inter-driver heterogeneity, i.e., the different driving styles of different drivers, and intra-driver heterogeneity, i.e., the changing driving style of the same driver. The ability to generate realistic human CF behaviors for any given observed driving style is achieved by applying a neural process (NP)-based model. The ability to infer CF behaviors of unobserved driving styles is supported by exploring the relationship between the calibrated time-varying IDM parameters and the intermediate variables of the NP. To demonstrate the effectiveness of the proposed model, we conduct extensive experiments and comparisons, including CF model parameter calibration, CF behavior prediction, and trajectory simulation for different driving styles.
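For reference, the Intelligent Driver Model that the paper equips with time-varying parameters computes acceleration from the own speed, the gap to the leader, and the speed difference. The sketch below is the standard IDM formula; the default parameter values are typical choices for illustration, not the paper's calibrated, time-varying values.

```python
import math

def idm_acceleration(v, gap, dv, v0=33.3, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    """Standard Intelligent Driver Model (IDM) acceleration.

    v    : own speed [m/s]
    gap  : bumper-to-bumper gap to the leader [m]
    dv   : approaching rate v - v_leader [m/s]
    The remaining arguments are the IDM parameters that the paper treats as
    time-varying and driver-specific; the defaults here are generic values.
    """
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))   # desired dynamic gap
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# toy usage: following a slightly slower leader at a 20 m gap
print(idm_acceleration(v=25.0, gap=20.0, dv=2.0))
```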
Recently, records on stereo matching benchmarks have been constantly broken by end-to-end disparity networks. However, the domain adaptation ability of these deep models is quite limited. To address this problem, we present a novel domain-adaptive approach called AdaStereo, which aims to align multi-level representations for deep stereo matching networks. Compared to previous methods, our AdaStereo realizes a more standard, complete, and effective domain adaptation pipeline. Firstly, we propose a non-adversarial progressive color transfer algorithm for input image-level alignment. Secondly, we design an efficient parameter-free cost normalization layer for internal feature-level alignment. Lastly, a highly related auxiliary task, self-supervised occlusion-aware reconstruction, is presented to narrow the gap in the output space. We conduct intensive ablation studies and break-down comparisons to validate the effectiveness of each proposed module. With no extra inference overhead and only a slight increase in training complexity, our AdaStereo models achieve state-of-the-art cross-domain performance on multiple benchmarks, including KITTI, Middlebury, ETH3D, and DrivingStereo, even outperforming some state-of-the-art disparity networks finetuned with ground truths from the target domains. Moreover, based on two additional evaluation metrics, the superiority of our domain-adaptive stereo matching pipeline is further revealed from more perspectives. Finally, we demonstrate that our method is robust to various domain adaptation settings and can be easily integrated into quick adaptation application scenarios and real-world deployments.
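As a rough illustration of input-level color alignment in general (not the paper's progressive algorithm), the sketch below shifts the per-channel mean and standard deviation of a source-domain image toward those of a target-domain image; the function name and data are illustrative.

```python
import numpy as np

def match_color_statistics(src, tgt, eps=1e-6):
    """Shift the per-channel mean/std of `src` (H, W, 3) toward those of `tgt`."""
    src = src.astype(np.float32)
    tgt = tgt.astype(np.float32)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    out = (src - src_mean) / (src_std + eps) * tgt_std + tgt_mean
    return np.clip(out, 0.0, 255.0).astype(np.uint8)

# toy usage with random "images"
source = np.random.randint(0, 256, (256, 512, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (256, 512, 3), dtype=np.uint8)
aligned = match_color_statistics(source, target)
```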
Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, some pioneering works have recently adapted Transformer-like architectures to the computer vision (CV) field and demonstrated their effectiveness on various CV tasks, relying on modeling capability that is competitive with modern convolutional neural networks. In this paper, we have conducted a comprehensive review of three hundred different visual Transformers for three fundamental CV tasks (classification, detection, and segmentation), and proposed a taxonomy that organizes these methods according to their motivations, structures, and usage scenarios. Due to differences in training settings and task orientations, we have also evaluated these methods under different configurations for easy and intuitive comparison, rather than relying on the various benchmarks alone. Furthermore, we have revealed a series of essential but potentially overlooked aspects that may enable Transformers to stand out from numerous architectures, e.g., slack high-level semantic embeddings to bridge the gap between visual and sequential Transformers. Finally, three promising future research directions are suggested for further investigation.
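A central step that lets Transformers consume images at all is splitting the image into patch tokens. The sketch below shows a minimal patch-embedding module in PyTorch, written by us for illustration; the sizes follow common defaults rather than any specific model in the survey.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into non-overlapping patches and project each to a token."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # a strided convolution is the usual trick: one patch per output location
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                      # x: (B, 3, H, W)
        x = self.proj(x)                       # (B, D, H/P, W/P)
        return x.flatten(2).transpose(1, 2)    # (B, N, D) token sequence

tokens = PatchEmbedding()(torch.randn(2, 3, 224, 224))   # (2, 196, 768)
```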
Video anomaly detection (VAD) has long been an important topic in video analysis. Since anomalies are often rare, VAD is usually addressed in a semi-supervised setting, which requires training with purely normal videos. To avoid exhausting manual labeling, we draw inspiration from how humans perceive anomalies and propose a principled framework that enables unsupervised and end-to-end VAD. The framework is based on two key observations: 1) Human perception is usually local, i.e., it focuses on the local foreground and its context when sensing anomalies. We therefore propose to localize the foreground with generic knowledge and design a region localization strategy to exploit the local context. 2) Frequently occurring events shape humans' definition of normality, which motivates us to design a proxy-training paradigm. It trains a deep neural network (DNN) to learn a proxy task with unlabeled videos, and frequently occurring events play a dominant role in "molding" the DNN. In this way, gaps in the training loss automatically expose rarely seen novel events as anomalies. For implementation, we explore various proxy tasks as well as both classic and emerging DNN models. Extensive evaluations on commonly used VAD benchmarks show that the framework is applicable to different proxy tasks and DNN models and demonstrate its surprising effectiveness: it not only outperforms existing unsupervised solutions by a wide margin (8% to 10% AUROC gain), but also achieves comparable or even superior performance to state-of-the-art semi-supervised counterparts.
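To make the "frequently occurring events mold the DNN" intuition concrete, the sketch below scores frames by the reconstruction error of a small autoencoder trained on unlabeled video: frequent (normal) patterns reconstruct well, rare events poorly. This is a generic proxy-task example of ours, not the paper's specific tasks or models.

```python
import torch
import torch.nn as nn

# A tiny autoencoder as a stand-in proxy-task network: trained only on unlabeled
# frames, it reconstructs frequent (normal) patterns well and rare events poorly.
autoencoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
)

def anomaly_score(frames):
    """Per-frame anomaly score = mean reconstruction error (higher = more anomalous)."""
    with torch.no_grad():
        recon = autoencoder(frames)
    return ((frames - recon) ** 2).mean(dim=(1, 2, 3))

scores = anomaly_score(torch.randn(4, 3, 64, 64))   # one score per frame
```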
3D object detection is receiving increasing attention from both industry and academia thanks to its wide applications in various fields. In this paper, we propose point-voxel region-based convolutional neural networks (PV-RCNNs) for 3D object detection from point clouds. First, we propose a novel 3D detector, PV-RCNN, which consists of two steps: voxel-to-keypoint scene encoding and keypoint-to-grid RoI feature abstraction. These two steps deeply integrate a 3D voxel CNN with a point-based set abstraction to extract discriminative features. Second, we propose an advanced framework, PV-RCNN++, for more efficient and accurate 3D object detection. It consists of two major improvements: a sectorized proposal-centric strategy for efficiently producing more representative keypoints, and VectorPool aggregation for better aggregating local point features with much less resource consumption. With these two strategies, our PV-RCNN++ is about twice as fast as PV-RCNN while also achieving better performance on the large-scale Waymo Open Dataset with a 150 m x 150 m detection range. Moreover, our proposed PV-RCNNs achieve state-of-the-art 3D detection performance on the Waymo Open Dataset and the highly competitive KITTI benchmark. The source code is available at https://github.com/open-mmlab/openpcdet.
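The voxel-to-keypoint step relies on sampling a small set of representative keypoints from the raw cloud. The sketch below implements plain farthest point sampling as a generic illustration of keypoint selection; PV-RCNN++'s sectorized proposal-centric strategy further restricts where samples are drawn, which is not shown here.

```python
import numpy as np

def farthest_point_sampling(points, n_keypoints):
    """Greedy farthest point sampling: pick points that maximize mutual coverage."""
    n = points.shape[0]
    selected = np.zeros(n_keypoints, dtype=int)
    dist = np.full(n, np.inf)
    selected[0] = np.random.randint(n)                     # arbitrary first keypoint
    for i in range(1, n_keypoints):
        diff = points - points[selected[i - 1]]
        dist = np.minimum(dist, np.einsum('ij,ij->i', diff, diff))
        selected[i] = int(np.argmax(dist))                 # farthest from all chosen so far
    return points[selected]

cloud = np.random.uniform(-75.0, 75.0, size=(20000, 3))   # toy LiDAR sweep
keypoints = farthest_point_sampling(cloud, 2048)           # (2048, 3) representative points
```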