Recently, the use of synthetic training data has been on the rise, as it offers correctly labelled datasets at a lower cost. The downside of this technique is that the so-called domain gap between the real target images and the synthetic training data leads to a decrease in performance. In this paper, we attempt to provide a holistic overview of how to use synthetic data for object detection. We analyse aspects of generating the data as well as techniques used to train the models. We do so by devising a number of experiments, training models on the Dataset of Industrial Metal Objects (DIMO). This dataset contains both real and synthetic images. The synthetic part has different subsets that are either exact synthetic copies of the real data or copies with certain aspects randomised. This allows us to analyse what types of variation are good for synthetic training data and which aspects should be modelled to closely match the target data. Furthermore, we investigate which training techniques are beneficial for generalisation to real data, and how to use them. Additionally, we analyse how real images can be leveraged when training on synthetic images. All these experiments are validated on real data and benchmarked against models trained on real data. The results offer a number of interesting takeaways that can serve as basic guidelines for using synthetic data for object detection. Code to reproduce results is available at https://github.com/EDM-Research/DIMO_ObjectDetection.
Realistic synthetic image data rendered from 3D models can be used to augment image sets and train image classification and semantic segmentation models. In this work, we explore how high-quality physically-based rendering and domain randomization can efficiently create a large synthetic dataset based on production 3D CAD models of a real vehicle. We use this dataset to quantify the effectiveness of synthetic augmentation using U-net and Double-U-net models. We found that, for this domain, synthetic images were an effective technique for augmenting limited sets of real training data. We observed that models trained on purely synthetic images had a very low mean prediction IoU on real validation images. We also observed that adding even very small amounts of real images to a synthetic dataset greatly improved accuracy, and that models trained on datasets augmented with synthetic images were more accurate than those trained on real images alone. Finally, we found that in use cases that benefit from incremental training or model specialization, pretraining a base model on synthetic images provided a sizeable reduction in the training cost of transfer learning, allowing up to 90% of the model training to be front-loaded.
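To make the "small amount of real data on top of synthetic" recipe concrete, here is a minimal PyTorch sketch of the dataset mixing; the folder layout, the .pt sample format, and the SegFolder class are illustrative assumptions, not details from the paper:

```python
import glob
import random

import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset, Subset

class SegFolder(Dataset):
    """Yields (image, mask) pairs stored as dicts in per-sample .pt files."""
    def __init__(self, root):
        self.items = sorted(glob.glob(f"{root}/*.pt"))
    def __len__(self):
        return len(self.items)
    def __getitem__(self, i):
        sample = torch.load(self.items[i])  # {"image": CxHxW, "mask": HxW}
        return sample["image"], sample["mask"]

real_ds = SegFolder("data/real")    # small set of real images
synth_ds = SegFolder("data/synth")  # large physically-based rendered set

# Cap the synthetic portion so the few real images are not drowned out.
keep = random.sample(range(len(synth_ds)), k=min(5000, len(synth_ds)))
train_ds = ConcatDataset([real_ds, Subset(synth_ds, keep)])
loader = DataLoader(train_ds, batch_size=8, shuffle=True)
```

Training the U-net on `loader` then proceeds exactly as with a purely real dataset; only the composition of the training set changes.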
Robots working in unstructured environments must be able to perceive and interpret their surroundings. One of the main obstacles to deep-learning-based models in the field of robotics is the lack of domain-specific labelled data for different industrial applications. In this paper, we propose a sim2real transfer learning method based on domain randomization for object detection, with which labelled synthetic datasets of arbitrary size and object type can be generated automatically. Subsequently, the state-of-the-art convolutional neural network YOLOv4 is trained to detect different types of industrial objects. With the proposed domain randomization method, we can shrink the reality gap to a satisfactory level, reaching mAP50 scores of 86.32% and 97.38% in the zero-shot and one-shot transfer cases, respectively, on a test set containing 190 real images. On a GeForce RTX 2080 Ti GPU, the data generation process takes less than 0.5 s per image and training lasts around 12 h, which makes the method convenient for industrial use. Our solution matches industrial needs, as it can reliably distinguish similar object classes by training with only one real image. To the best of our knowledge, this is the only work to date that satisfies these constraints.
Bridging the 'reality gap' that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.
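The core mechanism is simple enough to state in code: every rendered frame gets independently sampled nuisance parameters. Below is a renderer-agnostic Python sketch; the parameter names and ranges are illustrative assumptions rather than the paper's exact configuration:

```python
import random

def sample_scene_params(max_distractors=10):
    """One randomized scene configuration: non-realistic textures, random
    lighting, and a random camera, in the spirit of domain randomization."""
    return {
        "object_texture": random.choice(["checker", "noise", "flat", "gradient"]),
        "object_hue": random.random(),
        "num_distractors": random.randint(0, max_distractors),
        "light_count": random.randint(1, 4),
        "light_intensity": random.uniform(0.3, 3.0),
        "camera_position_m": [random.uniform(-1.0, 1.0),
                              random.uniform(-1.0, 1.0),
                              random.uniform(0.5, 2.0)],
        "camera_fov_deg": random.uniform(40.0, 90.0),
    }

# Each configuration is handed to whatever renderer is in use (e.g. a Blender
# or Unity scene script) to produce one automatically labelled image.
configs = [sample_scene_params() for _ in range(10_000)]
```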
One of the biggest challenges in machine learning is data collection. Training data is an important part since it determines how the model will behave. In object classification, capturing a large number of images per object and in different conditions is not always possible and can be very time-consuming and tedious. Accordingly, this work explores the creation of artificial images using a game engine to cope with limited data in the training dataset. We combine real and synthetic data to train the object classification engine, a strategy that has been shown to be beneficial for increasing confidence in the decisions made by the classifier, which is often critical in industrial setups. To combine real and synthetic data, we first train the classifier on a massive amount of synthetic data and then fine-tune it on real images. Another important result is that the number of real images needed for fine-tuning is not very high, reaching top accuracy with just 12 or 24 images per class. This substantially reduces the requirements of capturing a great amount of real data.
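A minimal sketch of this two-stage recipe in PyTorch follows; the number of classes, the stand-in data loaders, and the choice of ResNet-18 are assumptions for illustration, not details from the paper:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def train(model, loader, lr, epochs):
    """Plain supervised loop over (image, label) batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

num_classes = 10  # assumption: number of object classes in the setup
model = models.resnet18(weights=None)  # pretrained=False on older torchvision
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Stand-in loaders with random tensors so the sketch runs end-to-end;
# substitute DataLoaders over the actual synthetic and real image sets.
synthetic_loader = DataLoader(
    TensorDataset(torch.randn(64, 3, 224, 224),
                  torch.randint(0, num_classes, (64,))), batch_size=8)
real_loader = DataLoader(
    TensorDataset(torch.randn(24, 3, 224, 224),
                  torch.randint(0, num_classes, (24,))), batch_size=8)

# Stage 1: train on the massive synthetic set.
train(model, synthetic_loader, lr=1e-3, epochs=5)
# Stage 2: fine-tune on the few real images (12-24 per class in the paper),
# at a lower learning rate so synthetic features are adapted, not overwritten.
train(model, real_loader, lr=1e-4, epochs=5)
```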
In recent years, person detection and human pose estimation have made great progress, aided by large-scale labelled datasets. However, these datasets provide no guarantees or analysis of human activity, pose, or context diversity. Moreover, privacy, legal, safety, and ethical concerns may limit the ability to collect more human data. An emerging alternative that mitigates some of these issues is synthetic data. However, creating a synthetic data generator is incredibly challenging and prevents researchers from exploring its usefulness. Therefore, we release PeopleSansPeople, a human-centric synthetic data generator that contains simulation-ready 3D human assets and a parameterized lighting and camera system, and that generates 2D and 3D bounding boxes, instance and semantic segmentation, and COCO pose labels. Using PeopleSansPeople, we performed benchmark synthetic data training with a Detectron2 Keypoint R-CNN variant [1]. We found that pretraining the network with synthetic data and fine-tuning on target real-world data (few-shot transfer to limited subsets of COCO-person train [2]) resulted in a keypoint AP of 60.37 (COCO test-dev2017), outperforming models trained on the same real data alone (keypoint AP of 55.80) and models pretrained on ImageNet (keypoint AP of 57.50). This freely available data generator should enable the emerging field of simulation-to-real transfer learning in the critical domain of human-centric computer vision.
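Since the benchmark uses Detectron2's Keypoint R-CNN, the fine-tuning stage can be expressed with Detectron2's standard configuration API. A hedged sketch follows; the checkpoint path, dataset name, and solver settings are placeholders, not the paper's published configuration:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
# Start from weights pretrained on the synthetic data (placeholder path).
cfg.MODEL.WEIGHTS = "peoplesanspeople_pretrained.pth"
# Few-shot real subset of COCO-person, registered elsewhere via
# detectron2.data.DatasetCatalog under this (placeholder) name.
cfg.DATASETS.TRAIN = ("coco_person_fewshot",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # person only
cfg.SOLVER.BASE_LR = 0.0025
cfg.SOLVER.MAX_ITER = 5000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```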
We propose a new approach, Synthetic Optimized Layouts with Instance Detection (SOLID), for pretraining object detectors with synthetic images. Our SOLID approach consists of two main components: (1) generating synthetic images using unlabelled 3D models with optimized scene arrangements; (2) pretraining the object detector on an "instance detection" task, in which, given a query image depicting an object, all instances of that exact same object are detected in a target image. Our approach requires no semantic labels for pretraining and allows the use of arbitrary, diverse 3D models. Experiments on COCO show that, with optimized data generation and a proper pretraining task, synthetic data can be highly effective for pretraining object detectors. In particular, pretraining on rendered images achieves performance competitive with pretraining on real images while using significantly fewer computing resources. Code is available at https://github.com/princeton-vl/solid.
Figure 1: The SYNTHIA Dataset. A sample frame (left) with its semantic labels (center) and a general view of the city (right).
Nowadays, real data for person re-identification (ReID) tasks faces privacy issues; for example, the dataset DukeMTMC-ReID has been banned. Collecting real data for ReID tasks is therefore becoming harder. Meanwhile, the labour cost of labelling remains high, further hindering the development of ReID research. Consequently, many methods turn to generating synthetic images for ReID algorithms as a substitute for real images. However, there is an inevitable domain gap between synthetic and real images. In previous methods, the generation process is based on virtual scenes, and the synthetic training data cannot be changed automatically according to different target real scenes. To deal with this problem, we propose a novel target-aware generation pipeline to produce synthetic person images, called TagPerson. Specifically, it involves a parameterized rendering method in which the parameters are controllable and can be adjusted according to the target scene. In TagPerson, we extract information from the target scene and use it to control our parameterized rendering process to generate target-aware synthetic images, which keep a smaller gap to the real images in the target domain. In our experiments, our target-aware synthetic images achieve much higher performance than generalized synthetic images on MSMT17, i.e., 47.5% vs. 40.9% rank-1 accuracy. We will release this toolkit (code available at https://github.com/tagperson/tagperson-blender) for the ReID community to generate synthetic images in any desired flavour.
One of the challenges of training with supervised learning is the need to procure a large amount of labelled data. A well-known approach to this problem is to use synthetic data in a copy-paste fashion, where we cut out objects and paste them onto relevant backgrounds. Pasting objects naively leads to artefacts that cause models to produce poor results on real data. We present a new method for cleanly pasting objects onto different backgrounds so that the datasets created achieve competitive performance on real data. The main focus is on handling the boundaries of pasted objects using inpainting. We show state-of-the-art results on instance detection and foreground segmentation.
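The paper's inpainting model is not specified in this abstract; as an illustration of the boundary-inpainting idea, the sketch below pastes an object with a binary mask and then inpaints a thin band along the paste boundary, using OpenCV's classical Telea inpainting as a stand-in:

```python
import cv2
import numpy as np

def paste_with_inpainted_border(bg, obj, mask, x, y, border_px=3):
    """Paste obj (HxWx3 uint8) onto bg at (x, y) using a binary mask (HxW,
    0/255), then inpaint a thin band along the paste boundary to hide seams."""
    out = bg.copy()
    h, w = mask.shape
    region = out[y:y + h, x:x + w]
    region[mask > 0] = obj[mask > 0]

    # Boundary band = dilated mask minus eroded mask, in background coords.
    kernel = np.ones((2 * border_px + 1, 2 * border_px + 1), np.uint8)
    band = cv2.dilate(mask, kernel) - cv2.erode(mask, kernel)
    band_full = np.zeros(bg.shape[:2], np.uint8)
    band_full[y:y + h, x:x + w] = band
    return cv2.inpaint(out, band_full, border_px, cv2.INPAINT_TELEA)

# Toy example: a red disc pasted onto a grey background.
bg = np.full((240, 320, 3), 127, np.uint8)
obj = np.zeros((64, 64, 3), np.uint8)
obj[:] = (0, 0, 255)
mask = np.zeros((64, 64), np.uint8)
cv2.circle(mask, (32, 32), 28, 255, -1)
result = paste_with_inpainted_border(bg, obj, mask, x=100, y=80)
```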
We introduce a new synthetic data generator, PSP-HDRI+, which is shown to be a superior pretraining alternative to ImageNet and other large-scale synthetic data counterparts. We demonstrate that pretraining with our synthetic data yields a more general model that performs better than the alternatives even when tested on out-of-distribution (OOD) sets. Furthermore, using ablation studies guided by human keypoint estimation metrics with off-the-shelf model architectures, we show how to manipulate our synthetic data generator to further improve model performance.
Estimating human pose, shape, and motion from images and videos are fundamental challenges with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time-consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.
Current deep-learning-based computer vision tasks require large amounts of annotated data for model training and testing, especially dense estimation tasks such as optical flow, segmentation, and depth estimation. In practice, manual labelling for dense estimation tasks is very difficult or even impossible, and the scenes in existing datasets are often limited to a small range, which greatly restricts the development of the community. To overcome this deficiency, we propose a synthetic dataset generation method to obtain scalable datasets without heavy manual labour. With this method we construct a dataset called MineNavi, which contains first-person video footage from an aircraft, matched with accurate ground truth, for depth estimation in aircraft navigation applications. We also provide quantitative experiments showing that pretraining on the MineNavi dataset can improve the performance of depth estimation models and speed up model convergence on real-scene data. Since synthetic datasets have an effect similar to real-world datasets during the training of deep models, we further provide experiments with monocular depth estimation methods to demonstrate the impact of various factors in our dataset, such as lighting conditions and motion patterns.
Building instance segmentation models that are data-efficient and can handle rare object categories is an important challenge in computer vision. Leveraging data augmentations is a promising direction towards addressing this challenge. Here, we perform a systematic study of the Copy-Paste augmentation (e.g., [13,12]) for instance segmentation where we randomly paste objects onto an image. Prior studies on Copy-Paste relied on modeling the surrounding visual context for pasting the objects. However, we find that the simple mechanism of pasting objects randomly is good enough and can provide solid gains on top of strong baselines. Furthermore, we show Copy-Paste is additive with semi-supervised methods that leverage extra data through pseudo labeling (e.g. self-training). On COCO instance segmentation, we achieve 49.1 mask AP and 57.3 box AP, an improvement of +0.6 mask AP and +1.5 box AP over the previous state-of-the-art. We further demonstrate that Copy-Paste can lead to significant improvements on the LVIS benchmark. Our baseline model outperforms the LVIS 2020 Challenge winning entry by +3.6 mask AP on rare categories.
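The mechanism really is as simple as described: composite a random subset of instances from one training image onto another and fix up the masks. A minimal sketch (omitting augmentations such as scale jittering, and assuming both images share the same size) might look like this:

```python
import numpy as np

def copy_paste(dst_img, dst_masks, src_img, src_masks, rng=np.random):
    """Simple Copy-Paste augmentation: composite a random subset of source
    instances onto the destination image and occlude destination masks.
    Images are HxWx3 uint8; masks are lists of HxW bool arrays (same size)."""
    img = dst_img.copy()
    keep = [m for m in src_masks if rng.random() < 0.5]  # random subset
    pasted = np.zeros(img.shape[:2], bool)
    for m in keep:
        img[m] = src_img[m]
        pasted |= m
    # Remove pasted pixels from the existing destination instance masks.
    new_masks = [m & ~pasted for m in dst_masks] + keep
    new_masks = [m for m in new_masks if m.sum() > 0]    # drop fully occluded
    return img, new_masks
```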
We introduce an effective strategy for generating synthetic datasets of microbial images of Petri dishes that can be used to train deep learning models. The developed generator employs traditional computer vision algorithms together with a neural style transfer method for data augmentation. We show that the method is able to synthesize realistic-looking datasets that can be used to train neural network models capable of localizing, segmenting, and classifying five different microbial species. Our method requires far fewer resources to obtain a useful dataset than collecting and labelling a whole large set of real images with annotations. We show that, starting with only 100 real images, we can generate data to train a detector that achieves the same performance as one trained on a real dataset several dozen times larger. We demonstrate the usefulness of the method for microbe detection and segmentation, but we expect it to be general and flexible, and applicable to other scientific and industrial fields for detecting various objects.
Infant motion analysis is a topic of significant importance in early childhood development research. However, while the applications of human pose estimation are becoming ever broader, models trained on large-scale adult pose datasets can barely estimate infant poses, owing to significant differences in body ratios and the versatility of infant poses. Moreover, privacy and security considerations hinder the availability of the infant pose data needed to train a robust model from scratch. To address this problem, this paper proposes (1) building and publicly releasing a hybrid Synthetic and Real Infant Pose (SyRIP) dataset with small but diverse real infant images as well as generated synthetic infant poses, and (2) a multi-stage invariant representation learning strategy that can transfer knowledge from the adjacent domains of adult poses and synthetic infant images into our Fine-tuned Domain-adapted Infant Pose (FiDIP) estimation model. In our ablation study, with the same network structure, models trained on the SyRIP dataset show significant improvement over those trained on the only other public infant pose dataset. Integrated with pose estimation backbone networks of varying complexity, FiDIP consistently outperforms the fine-tuned versions of those models. Our best infant pose estimator, based on the state-of-the-art DarkPose model, achieves a mean average precision (mAP) of 93.6.
The goal of this paper is to estimate the 6D pose and dimensions of unseen object instances in an RGB-D image. Contrary to "instance-level" 6D pose estimation tasks, our problem assumes that no exact object CAD models are available during either training or testing time. To handle different and unseen object instances in a given category, we introduce Normalized Object Coordinate Space (NOCS)-a shared canonical representation for all possible object instances within a category. Our region-based neural network is then trained to directly infer the correspondence from observed pixels to this shared object representation (NOCS) along with other object information such as class label and instance mask. These predictions can be combined with the depth map to jointly estimate the metric 6D pose and dimensions of multiple objects in a cluttered scene. To train our network, we present a new context-aware technique to generate large amounts of fully annotated mixed reality data. To further improve our model and evaluate its performance on real data, we also provide a fully annotated real-world dataset with large environment and instance variation. Extensive experiments demonstrate that the proposed method is able to robustly estimate the pose and size of unseen object instances in real environments while also achieving state-of-the-art performance on standard 6D pose estimation benchmarks.
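The "combine with the depth map" step has a standard closed form: with pixel-wise NOCS predictions and the corresponding back-projected depth points, pose and size follow from a least-squares similarity transform (the Umeyama alignment). A sketch, assuming the correspondences for one instance are already extracted; a full pipeline would wrap this in RANSAC for robustness to outliers:

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Least-squares similarity transform with dst ~ s * R @ src + t.
    src: Nx3 predicted NOCS coordinates; dst: Nx3 metric points obtained by
    back-projecting the same pixels through the depth map."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0  # avoid reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (xs ** 2).sum(axis=1).mean()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Self-check against a known ground-truth transform.
rng = np.random.default_rng(0)
nocs = rng.uniform(-0.5, 0.5, (100, 3))
R_gt, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_gt) < 0:
    R_gt[:, 0] *= -1
pts = 0.3 * nocs @ R_gt.T + np.array([1.0, 2.0, 3.0])
s, R, t = umeyama_similarity(nocs, pts)  # recovers 0.3, R_gt, [1, 2, 3]
```

The recovered scale, applied to the extents of the NOCS points (s * (nocs.max(0) - nocs.min(0))), gives the metric object dimensions, while R and t give the 6D pose.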
Acquiring data to train deep-learning-based object detectors for UAVs is expensive, time-consuming, and can even be prohibitive in certain environments. Synthetic data, on the other hand, is fast and cheap to generate. In this work, we explore synthetic data for object detection from UAVs in various application environments. To this end, we extend the open-source framework DeepGTAV to work for UAV scenarios. We capture various large-scale, high-resolution synthetic datasets in multiple domains to demonstrate their use for real-world object detection by analysing multiple training strategies with several models. Furthermore, we analyse several different data generation and sampling parameters to provide actionable engineering advice for further scientific research. The DeepGTAV framework is available at https://git.io/jyf5j.
We present a deep-learning-based method for propagating spatially varying visual material attributes (e.g., texture maps or image styles) to larger samples of the same or a similar material. For training, we leverage images of the material taken under multiple illuminations and a dedicated data augmentation strategy, making the transfer robust to novel illumination conditions and affine deformations. Our model relies on a supervised image-to-image translation framework and is agnostic to the transferred domain; we demonstrate semantic segmentation, normal maps, and stylization. Following the image analogies approach, the method only requires the training data to contain the same visual structure as the input guidance. Our approach works at interactive rates, making it suitable for material editing applications. We thoroughly evaluate our learning methodology in a controlled setup, providing quantitative measures of performance. Finally, we demonstrate that training the model on a single material is enough to generalize to materials of the same type, without the need for a large dataset.
The most recent 6D object pose estimation methods, including unsupervised ones, require many real training images. Unfortunately, for some applications, such as those in space or deep underwater, acquiring real images, even unannotated ones, is virtually impossible. In this paper, we propose a method that can be trained solely on synthetic images, or optionally using a few additional real ones. Given a rough pose estimate obtained from a first network, it uses a second network to predict a dense 2D correspondence field between the image rendered using the rough pose and the real image, and infers the required pose correction. This approach is much less sensitive to the domain shift between synthetic and real images than state-of-the-art methods. It performs on par with methods that require annotated real images for training, and outperforms them when only twenty real images are available.
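The abstract does not say how the correspondence field is converted into a pose correction; one standard geometric route, shown below as an assumption-laden sketch, is to lift valid rendered pixels to object space using the rough pose and the rendered depth, then solve PnP against their corresponded locations in the real image:

```python
import cv2
import numpy as np

def refine_pose(flow, rendered_depth, K, R_coarse, t_coarse):
    """flow: HxWx2 field mapping rendered pixels to real-image locations;
    rendered_depth: HxW depth of the render made with the coarse pose;
    K: 3x3 intrinsics. Returns a corrected rotation and translation."""
    v, u = np.nonzero(rendered_depth > 0)       # valid rendered pixels
    z = rendered_depth[v, u]
    # Back-project to camera space, then to object space via the coarse pose.
    pts_cam = np.stack([(u - K[0, 2]) * z / K[0, 0],
                        (v - K[1, 2]) * z / K[1, 1],
                        z], axis=1)
    pts_obj = (pts_cam - t_coarse) @ R_coarse   # row-vector form of R^T (x - t)
    # Where those pixels land in the real image, per the correspondence field.
    pts_img = np.stack([u, v], axis=1).astype(np.float64) + flow[v, u]
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_obj.astype(np.float64), pts_img, K.astype(np.float64), None)
    R_refined, _ = cv2.Rodrigues(rvec)
    return R_refined, tvec.ravel()
```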