We introduce PSP-HDRI+, a new synthetic data generator that proves to be a superior pre-training alternative to ImageNet and other large-scale synthetic data counterparts. We demonstrate that pre-training with our synthetic data yields a more general model that performs better than the alternatives even when tested on out-of-distribution (OOD) sets. Furthermore, using ablation studies guided by person keypoint estimation metrics with an off-the-shelf model architecture, we show how to manipulate our synthetic data generator to further improve model performance.
In recent years, person detection and human pose estimation have made great strides, helped by large-scale labeled datasets. However, these datasets offered no guarantees or analysis of the diversity of human activities, poses, or contexts. Additionally, privacy, legal, safety, and ethical concerns may limit the ability to collect more human data. An emerging alternative that alleviates some of these issues is synthetic data, yet creating a synthetic data generator is incredibly challenging and prevents researchers from exploring its usefulness. We therefore release PeopleSansPeople, a human-centric synthetic data generator that contains simulation-ready 3D human assets and a parameterized lighting and camera system, and that generates 2D and 3D bounding-box, instance and semantic segmentation, and COCO pose labels. Using PeopleSansPeople, we performed benchmark synthetic data training with a Detectron2 Keypoint R-CNN variant [1]. We found that pre-training the network on synthetic data and fine-tuning on target real-world data (few-shot transfer to a limited subset of COCO-person train [2]) yields a keypoint AP of 60.37 on COCO test-dev2017, outperforming the same model trained on the real data alone (keypoint AP of 35.80) or pre-trained on ImageNet (keypoint AP of 57.50). This freely available data generator should enable a wide range of research into the emerging field of simulation-to-real transfer learning for critical human-centric computer vision tasks.
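The recipe described here is standard pre-train-then-fine-tune. As a rough illustration (not the paper's released configuration), a Detectron2 Keypoint R-CNN can be fine-tuned from a synthetic-data checkpoint along the following lines; the checkpoint path, dataset names, and schedule are hypothetical placeholders.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
# Start from weights pre-trained on the synthetic data rather than ImageNet.
cfg.MODEL.WEIGHTS = "checkpoints/synthetic_pretrain.pth"  # hypothetical path
cfg.DATASETS.TRAIN = ("coco_person_fewshot_train",)  # hypothetical registered subset
cfg.DATASETS.TEST = ("coco_person_val",)             # hypothetical
cfg.SOLVER.MAX_ITER = 20000                          # illustrative schedule

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```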
We propose a new approach to pre-training object detectors with synthetic images: Synthetic Optimized Layout with Instance Detection (SOLID). Our SOLID approach consists of two main components: (1) generating synthetic images from unlabeled 3D models with an optimized scene arrangement; (2) pre-training an object detector on an "instance detection" task: given a query image depicting an object, detect all instances of the exact same object in a target image. Our approach requires no semantic labels for pre-training and allows the use of arbitrary, diverse 3D models. Experiments on COCO show that with optimized data generation and a proper pre-training task, synthetic data can be highly effective for pre-training object detectors. In particular, pre-training on rendered images achieves performance competitive with pre-training on real images while using significantly less computing resources. Code is available at https://github.com/princeton-vl/solid.
We report competitive results on object detection and instance segmentation on the COCO dataset using standard models trained from random initialization. The results are no worse than their ImageNet pre-training counterparts even when using the hyper-parameters of the baseline system (Mask R-CNN) that were optimized for fine-tuning pretrained models, with the sole exception of increasing the number of training iterations so the randomly initialized models may converge. Training from random initialization is surprisingly robust; our results hold even when: (i) using only 10% of the training data, (ii) for deeper and wider models, and (iii) for multiple tasks and metrics. Experiments show that ImageNet pre-training speeds up convergence early in training, but does not necessarily provide regularization or improve final target task accuracy. To push the envelope we demonstrate 50.9 AP on COCO object detection without using any external data, a result on par with the top COCO 2017 competition results that used ImageNet pre-training. These observations challenge the conventional wisdom of ImageNet pre-training for dependent tasks and we expect these discoveries will encourage people to rethink the current de facto paradigm of 'pretraining and fine-tuning' in computer vision.
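The setup above amounts to two changes relative to the fine-tuning baseline: drop the pre-trained weights and lengthen the schedule. A minimal Detectron2-style sketch of those two changes (the 3x factor is illustrative, not the paper's exact schedule):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = ""            # random initialization: no ImageNet checkpoint
cfg.SOLVER.MAX_ITER = 3 * 270000  # extend training so the model can converge
# All other hyper-parameters keep the fine-tuning baseline's values.
```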
Recently, the use of synthetic training data has been on the rise as it offers correctly labelled datasets at a lower cost. The downside of this technique is that the so-called domain gap between the real target images and synthetic training data leads to a decrease in performance. In this paper, we attempt to provide a holistic overview of how to use synthetic data for object detection. We analyse aspects of generating the data as well as techniques used to train the models. We do so by devising a number of experiments, training models on the Dataset of Industrial Metal Objects (DIMO). This dataset contains both real and synthetic images. The synthetic part has different subsets that are either exact synthetic copies of the real data or are copies with certain aspects randomised. This allows us to analyse what types of variation are good for synthetic training data and which aspects should be modelled to closely match the target data. Furthermore, we investigate what types of training techniques are beneficial towards generalisation to real data, and how to use them. Additionally, we analyse how real images can be leveraged when training on synthetic images. All these experiments are validated on real data and benchmarked to models trained on real data. The results offer a number of interesting takeaways that can serve as basic guidelines for using synthetic data for object detection. Code to reproduce results is available at https://github.com/EDM-Research/DIMO_ObjectDetection.
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes, from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19-task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance.
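The abstract does not spell out BiT's transfer heuristic, but the pattern it studies is ordinary full-model fine-tuning from a large-scale checkpoint. A minimal PyTorch sketch of that pattern, with torchvision's ImageNet ResNet-50 standing in for a BiT checkpoint and a 10-class target task assumed:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained at scale and swap in a fresh task head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 10)  # assumed 10-class target task

optimizer = torch.optim.SGD(model.parameters(), lr=3e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One full-model fine-tuning step on a target-task mini-batch."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```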
Infant motion analysis is a topic of critical importance in early childhood development studies. However, while the applications of human pose estimation are becoming ever broader, models trained on large-scale adult pose datasets barely succeed at estimating infant poses, owing to significant differences in body ratios and the greater variety of infant poses. Moreover, privacy and security considerations hinder the availability of the infant pose data needed to train a robust model from scratch. To address this problem, this paper presents (1) the construction and public release of a hybrid Synthetic and Real Infant Pose (SyRIP) dataset, combining a small but diverse set of real infant images with generated synthetic infant poses, and (2) a multi-stage invariant representation learning strategy that transfers knowledge from the adjacent domains of adult poses and synthetic infant images into our Fine-tuned Domain-adapted Infant Pose (FiDIP) estimation model. In our ablation study, with the same network structure, models trained on the SyRIP dataset improve significantly over those trained on the only other public infant pose dataset. Integrated with pose estimation backbone networks of varying complexity, FiDIP consistently outperforms the fine-tuned versions of those models. Our best infant pose estimator, built on the state-of-the-art DarkPose model, achieves a mean average precision (mAP) of 93.6.
The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in representation capabilities of the models and computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10× or 100×? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between 'enormous data' and visual deep learning. By exploiting the JFT-300M dataset which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that the performance on vision tasks increases logarithmically with the volume of training data. Second, we show that representation learning (or pretraining) still holds a lot of promise. One can improve performance on many vision tasks by just training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks including image classification, object detection, semantic segmentation and human pose estimation. Our sincere hope is that this inspires the vision community to not undervalue the data and develop collective efforts in building larger datasets.
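The first finding admits a compact, hedged formalization: with N the number of training images and a, b task-dependent constants fit empirically,

```latex
% Hedged formalization of the reported trend: task performance grows
% roughly linearly in the logarithm of the training-set size N.
\mathrm{performance}(N) \;\approx\; a \log N + b
```

so each further constant gain in accuracy requires multiplicatively, not additively, more data.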
Robots working in unstructured environments must be able to perceive and interpret their surroundings. One of the main obstacles to deep-learning-based models in robotics is the lack of domain-specific labeled data for different industrial applications. In this paper, we propose a sim2real transfer learning method for object detection based on domain randomization, with which labeled synthetic datasets of arbitrary size and object types can be generated automatically. Subsequently, a state-of-the-art convolutional neural network, YOLOv4, is trained to detect the different types of industrial objects. With the proposed domain randomization method, we can shrink the reality gap, reaching mAP50 scores of 86.32% and 97.38% for zero-shot and one-shot transfer, respectively, on a dataset containing 190 real images. On a GeForce RTX 2080 Ti GPU, data generation takes less than 0.5 s per image and training lasts around 12 h, which makes the approach convenient for industrial use. Our solution matches industrial needs, as it can reliably differentiate similar classes of objects using only one real image for training. To the best of our knowledge, this is the only work to date that satisfies these constraints.
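The abstract does not describe the generator's interface, but domain randomization itself reduces to a loop that samples scene parameters from broad ranges, renders, and keeps the automatically generated labels. A sketch with an entirely hypothetical render_scene stub and illustrative parameter ranges:

```python
import random

def render_scene(**params):
    """Stand-in for a hypothetical simulator call; a real renderer would
    return an image plus automatically generated bounding-box labels."""
    raise NotImplementedError("plug in your renderer here")

# Illustrative randomization ranges; the paper's actual ranges are not given.
LIGHT_INTENSITY = (100.0, 2000.0)
CAMERA_DISTANCE = (0.5, 2.0)  # meters

def generate_sample(object_models, textures, backgrounds):
    params = {
        "obj": random.choice(object_models),
        "texture": random.choice(textures),
        "background": random.choice(backgrounds),
        "light_intensity": random.uniform(*LIGHT_INTENSITY),
        "camera_distance": random.uniform(*CAMERA_DISTANCE),
        "yaw_deg": random.uniform(0.0, 360.0),
    }
    image, bboxes = render_scene(**params)  # labels come free from the simulator
    return image, bboxes
```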
Pre-training models on ImageNet or other massive datasets of real images has led to major advances in computer vision, albeit accompanied by shortcomings related to curation cost, privacy, usage rights, and ethical issues. In this paper, we investigate for the first time the transferability of models trained on synthetic data generated by graphics simulators to downstream tasks from very different domains. When using such synthetic data for pre-training, we find that downstream performance on different tasks is favored by different configurations of simulation parameters (e.g., lighting, object pose, backgrounds, etc.), and that there is no one-size-fits-all solution. It is thus better to tailor synthetic pre-training data to a specific downstream task for best performance. We introduce Task2Sim, a unified model that maps downstream task representations to optimal simulation parameters for generating synthetic pre-training data. Task2Sim learns this mapping by training to find the best parameter set on a collection of "seen" tasks. Once trained, it can predict the best simulation parameters for a novel "unseen" task without additional training. Given a budget on the number of images per class, our extensive experiments with 20 diverse downstream tasks show that Task2Sim's task-adaptive pre-training data yields significantly better downstream performance than non-adaptively chosen simulation parameters, on both seen and unseen tasks. It is even competitive with pre-training on real images.
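Concretely, the idea is a learned mapping from a task representation to one discrete setting per simulator knob. The module below is an illustrative guess at such a mapping in PyTorch; the knob list, layer sizes, and embedding dimension are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Options per simulator knob (assumed for illustration).
SIM_PARAMS = {"lighting": 3, "object_pose": 3, "backgrounds": 2}

class Task2SimSketch(nn.Module):
    def __init__(self, task_embed_dim=512):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(task_embed_dim, 256), nn.ReLU())
        # One classification head per simulation parameter.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(256, n) for name, n in SIM_PARAMS.items()})

    def forward(self, task_embedding):
        h = self.trunk(task_embedding)
        # Pick the highest-scoring option for each knob.
        return {name: head(h).argmax(-1) for name, head in self.heads.items()}

params = Task2SimSketch()(torch.randn(1, 512))  # e.g. {"lighting": tensor([2]), ...}
```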
Acquiring data to train deep-learning-based object detectors on unmanned aerial vehicles (UAVs) is expensive, time-consuming, and may even be prohibited in specific environments. Synthetic data, on the other hand, is fast and cheap to produce. In this work, we explore the use of synthetic data for object detection from UAVs across various application environments. To that end, we extend the open-source framework DeepGTAV to work in UAV scenarios. We capture various large-scale, high-resolution synthetic datasets in multiple domains to demonstrate their use in real-world object detection by analyzing multiple training strategies with several models. Furthermore, we analyze several different data generation and sampling parameters to provide actionable engineering advice for further scientific research. The DeepGTAV framework is available at https://git.io/jyf5j.
Deep neural networks have become prevalent in human analysis, boosting the performance of applications such as biometric recognition, action recognition, and person re-identification. However, the performance of such networks scales with the available training data. In human analysis, the demand for large-scale datasets poses a severe challenge, as data collection is tedious, time-consuming, costly, and must comply with data protection laws. Current research therefore investigates the generation of synthetic data as an efficient and privacy-preserving alternative to collecting real data in the field. This survey introduces the basic definitions and methodologies essential to generating and employing synthetic data for human analysis. It summarizes current state-of-the-art methods and the main benefits of using synthetic data. We also provide an overview of publicly available synthetic datasets and generation models. Finally, we discuss the limitations of the field as well as open research questions. This survey is intended for researchers and practitioners in the field of human analysis.
This paper presents a new large scale multi-person tracking dataset -- PersonPath22, which is over an order of magnitude larger than currently available high quality multi-object tracking datasets such as MOT17, HiEve, and MOT20. The lack of large scale training and test data for this task has limited the community's ability to understand the performance of their tracking systems on a wide range of scenarios and conditions such as variations in person density, actions being performed, weather, and time of day. The PersonPath22 dataset was specifically sourced to provide a wide variety of these conditions and our annotations include rich meta-data such that the performance of a tracker can be evaluated along these different dimensions. The lack of training data has also limited the ability to perform end-to-end training of tracking systems. As such, the highest performing tracking systems all rely on strong detectors trained on external image datasets. We hope that the release of this dataset will enable new lines of research that take advantage of large scale video based training data.
Combining simple architectures with large-scale pre-training has led to massive improvements in image classification. For object detection, pre-training and scaling approaches are less well established, especially in the long-tailed and open-vocabulary setting, where training data is relatively scarce. In this paper, we propose a strong recipe for transferring image-text models to open-vocabulary object detection. We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning. Our analysis of the scaling properties of this setup shows that increasing image-level pre-training and model size yields consistent improvements on the downstream detection task. We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection. Code and models are available on GitHub.
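The abstract names no concrete API, but one publicly available port of this recipe is the OWL-ViT family in Hugging Face transformers. A minimal zero-shot, text-conditioned detection sketch using that port; the image path and query strings are placeholders:

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("street.jpg")            # placeholder image path
texts = [["a bicycle", "a traffic light"]]  # free-form query classes
inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs into thresholded detections in image coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes)[0]
print(detections["boxes"], detections["labels"], detections["scores"])
```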
Estimating human pose, shape, and motion from images and videos are fundamental challenges with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.
The 1st Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicle (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Segmenting humans in 3D indoor scenes has become increasingly important with the rise of human-centered robotics and AR/VR applications. In this direction, we explore the tasks of 3D human semantic-, instance- and multi-human body-part segmentation. Few works have attempted to directly segment humans in point clouds (or depth maps), which is largely due to the lack of training data on humans interacting with 3D scenes. We address this challenge and propose a framework for synthesizing virtual humans in realistic 3D scenes. Synthetic point cloud data is attractive since the domain gap between real and synthetic depth is small compared to images. Our analysis of different training schemes using a combination of synthetic and realistic data shows that synthetic data for pre-training improves performance in a wide variety of segmentation tasks and models. We further propose the first end-to-end model for 3D multi-human body-part segmentation, called Human3D, that performs all the above segmentation tasks in a unified manner. Remarkably, Human3D even outperforms previous task-specific state-of-the-art methods. Finally, we manually annotate humans in test scenes from EgoBody to compare the proposed training schemes and segmentation models.
Building instance segmentation models that are data-efficient and can handle rare object categories is an important challenge in computer vision. Leveraging data augmentations is a promising direction towards addressing this challenge. Here, we perform a systematic study of the Copy-Paste augmentation (e.g., [13,12]) for instance segmentation where we randomly paste objects onto an image. Prior studies on Copy-Paste relied on modeling the surrounding visual context for pasting the objects. However, we find that the simple mechanism of pasting objects randomly is good enough and can provide solid gains on top of strong baselines. Furthermore, we show Copy-Paste is additive with semi-supervised methods that leverage extra data through pseudo labeling (e.g. self-training). On COCO instance segmentation, we achieve 49.1 mask AP and 57.3 box AP, an improvement of +0.6 mask AP and +1.5 box AP over the previous state-of-the-art. We further demonstrate that Copy-Paste can lead to significant improvements on the LVIS benchmark. Our baseline model outperforms the LVIS 2020 Challenge winning entry by +3.6 mask AP on rare categories.
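The core mechanism is exactly what the name suggests: cut an instance out of one image using its mask and paste it into another at a random location. A minimal NumPy sketch of that compositing step (a real pipeline would also transfer and clip the pasted instance's box/mask annotations and apply scale jitter):

```python
import numpy as np

def copy_paste(src_img, src_mask, dst_img, rng=None):
    """Paste the instance selected by boolean `src_mask` from `src_img`
    onto a copy of `dst_img` at a random offset."""
    if rng is None:
        rng = np.random.default_rng()
    ys, xs = np.nonzero(src_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = src_img[y0:y1, x0:x1]        # instance cropped to its bounding box
    patch_mask = src_mask[y0:y1, x0:x1]
    ph, pw = patch.shape[:2]
    H, W = dst_img.shape[:2]
    ty = int(rng.integers(0, H - ph + 1))  # assumes the patch fits in dst_img
    tx = int(rng.integers(0, W - pw + 1))
    out = dst_img.copy()
    region = out[ty:ty + ph, tx:tx + pw]
    region[patch_mask] = patch[patch_mask]  # writes through the view into `out`
    return out
```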
Human pose information is a critical component in many downstream image processing tasks, such as activity recognition and motion tracking. Likewise, a pose estimator for the illustrated character domain would provide a valuable prior for assistive content creation tasks, such as reference pose retrieval and automatic character animation. However, while modern data-driven techniques have substantially improved pose estimation performance on natural images, little work has been done for illustrations. In our work, we bridge this domain gap by efficiently transfer-learning from both domain-specific and task-specific source models. Additionally, we upgrade and expand an existing illustrated pose estimation dataset, and introduce two new datasets for classification and segmentation subtasks. We then apply the resulting state-of-the-art character pose estimator to the novel task of pose-guided illustration retrieval. All data, models, and code will be made publicly available.
High-quality structured data with rich annotations are critical components of intelligent vehicle systems dealing with road scenes. However, data curation and annotation require intensive investment and yield low-diversity scenarios. The recently growing interest in synthetic data raises questions about the scope of improvement in such systems and the amount of manual work still required to produce large volumes and variations of simulated data. This work proposes a synthetic data generation pipeline that utilizes existing datasets, like nuScenes, to address the difficulties and domain gaps present in simulated datasets. We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation, mimicking real scene properties with high fidelity, along with mechanisms to diversify samples in a physically meaningful way. We demonstrate improvements in mIoU metrics through qualitative and quantitative experiments with real and synthetic data for semantic segmentation on the Cityscapes and KITTI-STEP datasets. All relevant code and data are released on GitHub (https://github.com/shubham1810/trove_toolkit).