In recent years, several unsupervised and self-supervised methods have been developed to learn visual features from large-scale unlabeled datasets. Their main drawback, however, is that these methods can hardly recognize the visual features of the same object if it is simply rotated or the camera viewpoint changes. To overcome this limitation, while exploiting a useful source of supervision, we consider video object tracks. Following the intuition that two patches in a track should have similar visual representations in the learned feature space, we adopt an unsupervised clustering-based approach and constrain such representations to be labeled as the same category, since they likely belong to the same object or object part. Experimental results on two downstream tasks over different datasets demonstrate the effectiveness of our Online Deep Clustering with Track Consistency (ODCT) approach compared to prior work that did not exploit temporal information. In addition, we show that exploiting an unsupervised class-agnostic, yet noisy, track generator yields better accuracy than relying on costly and precise track annotations.
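To make the track-consistency constraint concrete, here is a minimal sketch (our own illustration, not the authors' code): after clustering patch features, every patch belonging to the same track is collapsed to one pseudo-label, here by majority vote. The `enforce_track_consistency` helper and the majority-vote rule are assumptions made for the example.

```python
import torch

# Sketch of a track-consistency constraint (assumed mechanics, for
# illustration): patches of one video track get a single cluster label.
def enforce_track_consistency(cluster_ids, track_ids):
    cluster_ids = cluster_ids.clone()
    for t in track_ids.unique():
        mask = track_ids == t
        majority = cluster_ids[mask].mode().values  # most frequent cluster in the track
        cluster_ids[mask] = majority
    return cluster_ids

cluster_ids = torch.tensor([0, 0, 2, 1, 1, 1, 3])
track_ids   = torch.tensor([0, 0, 0, 1, 1, 1, 1])
print(enforce_track_consistency(cluster_ids, track_ids))  # tensor([0, 0, 0, 1, 1, 1, 1])
```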
Pre-training general-purpose visual features with convolutional neural networks without relying on annotations is a challenging and important task. Most recent efforts in unsupervised feature learning have focused on either small or highly curated datasets like ImageNet, whereas using non-curated raw datasets was found to decrease the feature quality when evaluated on a transfer task. Our goal is to bridge the performance gap between unsupervised methods trained on curated data, which are costly to obtain, and massive raw datasets that are easily available. To that effect, we propose a new unsupervised approach which leverages self-supervision and clustering to capture complementary statistics from large-scale data. We validate our approach on 96 million images from YFCC100M [42], achieving state-of-the-art results among unsupervised methods on standard benchmarks, which confirms the potential of unsupervised learning when only non-curated raw data are available. We also show that pre-training a supervised VGG-16 with our method achieves 74.9% top-1 classification accuracy on the validation set of ImageNet, which is an improvement of +0.8% over the same network trained from scratch. Our code is available at https://github.com/facebookresearch/DeeperCluster.
Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.
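A minimal sketch may help make the alternation concrete. The following illustrates a DeepCluster-style loop, not the released implementation; `backbone`, `classifier`, and `loader` are placeholder objects, and details such as feature whitening and classifier re-initialization are omitted.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Illustrative DeepCluster-style epoch: cluster current features with
# k-means, then use the assignments as pseudo-labels for a supervised update.
def deepcluster_epoch(backbone, classifier, loader, k, device="cpu"):
    backbone.eval()
    feats, batches = [], []
    with torch.no_grad():
        for x in loader:  # loader yields image batches (kept in memory for the sketch)
            feats.append(backbone(x.to(device)).flatten(1).cpu())
            batches.append(x)
    feats = torch.cat(feats)

    # Step 1: k-means on the current features -> pseudo-labels.
    pseudo = KMeans(n_clusters=k, n_init=10).fit_predict(feats.numpy())
    pseudo = torch.as_tensor(pseudo, dtype=torch.long)

    # Step 2: supervised update of backbone + classifier with pseudo-labels
    # as targets (DeepCluster also re-initializes the classifier between
    # epochs; omitted here for brevity).
    backbone.train()
    opt = torch.optim.SGD(
        list(backbone.parameters()) + list(classifier.parameters()), lr=0.05
    )
    loss_fn = nn.CrossEntropyLoss()
    start = 0
    for x in batches:
        y = pseudo[start:start + len(x)]
        start += len(x)
        logits = classifier(backbone(x.to(device)).flatten(1))
        loss = loss_fn(logits, y.to(device))
        opt.zero_grad()
        loss.backward()
        opt.step()
```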
We tackle the problem of novel class discovery and localization (NCDL). In this setting, we assume a source dataset with supervision for only some object classes. Instances of other classes need to be discovered, classified, and localized automatically based on visual similarity without any human supervision. To tackle NCDL, we propose a two-stage object detection network Region-based NCDL (RNCDL) that uses a region proposal network to localize regions of interest (RoIs). We then train our network to learn to classify each RoI, either as one of the known classes, seen in the source dataset, or one of the novel classes, with a long-tail distribution constraint on the class assignments, reflecting the natural frequency of classes in the real world. By training our detection network with this objective in an end-to-end manner, it learns to classify all region proposals for a large variety of classes, including those not part of the labeled object class vocabulary. Our experiments conducted using COCO and LVIS datasets reveal that our method is significantly more effective than multi-stage pipelines that rely on traditional clustering algorithms. Furthermore, we demonstrate the generality of our approach by applying our method to a large-scale Visual Genome dataset, where our network successfully learns to detect various semantic classes without direct supervision.
Large-scale labeled data are generally required to train deep neural networks in order to obtain better performance in visual feature learning from images or videos for computer vision applications. To avoid the extensive cost of collecting and annotating large-scale datasets, self-supervised learning methods, as a subset of unsupervised learning methods, have been proposed to learn general image and video features from large-scale unlabeled data without using any human-annotated labels. This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos. First, the motivation, general pipeline, and terminology of this field are described. Then, the common deep neural network architectures used for self-supervised learning are summarized. Next, the schema and evaluation metrics of self-supervised learning methods are reviewed, followed by the commonly used image and video datasets and the existing self-supervised visual feature learning methods. Finally, quantitative performance comparisons of the reviewed methods on benchmark datasets are summarized and discussed for both image and video feature learning. The paper concludes with a set of promising future directions for self-supervised visual feature learning.
We tackle the problem of learning object detectors without supervision. Unlike weakly supervised object detection, we do not assume image-level class labels. Instead, we extract a supervisory signal from audio-visual data, using the audio component to "teach" the object detector. While this problem is related to sound source localization, it is considerably harder because the detector must classify objects by type, enumerate each instance of an object, and do so even when the object is silent. We tackle this problem by first designing a self-supervised framework with a contrastive objective that jointly learns to classify and localize objects. Then, without using any supervision, we simply use these self-supervised labels and boxes to train an image-based object detector. With this, we outperform previous unsupervised and weakly supervised detectors on the tasks of object detection and sound source localization. We also show that we can align this detector to ground-truth classes with as little as one label per pseudo-class, and show how our method can learn to detect generic objects beyond instruments, such as airplanes and cats.
Deep learning has attained remarkable success in many 3D visual recognition tasks, including shape classification, object detection, and semantic segmentation. However, many of these results rely on manually collecting densely annotated real-world 3D data, which is highly time-consuming and expensive to obtain, limiting the scalability of 3D recognition tasks. Thus, we study unsupervised 3D recognition and propose a Self-supervised-Self-Labeled 3D Recognition (SL3D) framework. SL3D simultaneously solves two coupled objectives, i.e., clustering and learning feature representation to generate pseudo-labeled data for unsupervised 3D recognition. SL3D is a generic framework and can be applied to solve different 3D recognition tasks, including classification, object detection, and semantic segmentation. Extensive experiments demonstrate its effectiveness. Code is available at https://github.com/fcendra/sl3d.
Weakly supervised video object localization (WSVOL) allows locating objects in videos using only global video labels such as object classes. State-of-the-art methods rely on multiple independent stages, where initial spatio-temporal proposals are generated using visual and motion cues, and prominent objects are then identified and refined. Localization is done by solving an optimization problem over one or more videos, and video labels are typically used only for video clustering. This requires a per-video or per-class model, making inference costly. Moreover, the localized regions are not necessarily discriminative, because unsupervised motion methods such as optical flow are used, or because video labels are discarded from the optimization. In this paper, we leverage the successful class activation mapping (CAM) method, which was designed for still images. A new Temporal CAM (TCAM) method is introduced to train a discriminative deep learning (DL) model to exploit spatio-temporal information in videos, using an aggregation mechanism called CAM-Temporal Max Pooling (CAM-TMP) over consecutive CAMs. In particular, activations of regions of interest (ROIs) are collected from the CAMs produced by a pretrained CNN classifier to build pixel-wise pseudo-labels for training the DL model. In addition, a global unsupervised size constraint and local constraints such as a CRF are used to yield more accurate CAMs. Inference over single independent frames allows parallel processing of clips of frames and real-time localization. Extensive experiments on two challenging YouTube-Objects datasets of unconstrained videos show that CAM methods (trained on independent frames) can yield decent localization accuracy. Our proposed TCAM method achieves a new state of the art in WSVOL accuracy, and visual results suggest that it can be adapted for subsequent tasks such as visual object tracking and detection. The code is publicly available.
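To make the aggregation idea concrete, here is a rough sketch of a temporal max pooling step over CAMs followed by pseudo-label thresholding. This is our own simplified reading (element-wise max over a sliding window of consecutive CAMs, then normalization and a fixed threshold); the paper's exact pooling and pseudo-label construction may differ.

```python
import torch

# Sketch of CAM temporal aggregation: cams is a (T, H, W) stack of
# activation maps from a pretrained classifier for T consecutive frames.
def cam_temporal_max_pool(cams, window=5):
    T = cams.shape[0]
    pooled = torch.empty_like(cams)
    for t in range(T):
        lo, hi = max(0, t - window // 2), min(T, t + window // 2 + 1)
        pooled[t] = cams[lo:hi].max(dim=0).values  # element-wise max over the window
    return pooled

def cams_to_pseudo_masks(cams, thresh=0.5):
    # Normalize each map to [0, 1], then threshold into foreground pseudo-labels.
    flat = cams.flatten(1)
    mn = flat.min(dim=1).values.view(-1, 1, 1)
    mx = flat.max(dim=1).values.view(-1, 1, 1)
    norm = (cams - mn) / (mx - mn + 1e-8)
    return (norm > thresh).long()
```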
In this paper we introduce SiamMask, a framework to perform both visual object tracking and video object segmentation, in real time, with the same simple method. We improve the offline training procedure of popular fully-convolutional Siamese approaches by augmenting their losses with a binary segmentation task. Once the offline training is completed, SiamMask only requires a single bounding box for initialization and can simultaneously carry out visual object tracking and segmentation at high frame rates. Moreover, we show that the framework can be extended to handle multiple object tracking and segmentation by simply re-using the multi-task model in a cascaded fashion. Experimental results show that our method has high processing efficiency, at around 55 frames per second. It yields real-time state-of-the-art results on visual object tracking benchmarks, while at the same time demonstrating competitive performance at high speed on video object segmentation benchmarks.
Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill-posed learning problems with degenerate solutions. In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard cross-entropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Our method achieves state-of-the-art representation learning performance for AlexNet and ResNet-50 on SVHN, CIFAR-10, CIFAR-100 and ImageNet, and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline. Code and models are available.
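The balanced-assignment step can be sketched with a few lines of Sinkhorn-Knopp-style normalization. The snippet below is an illustrative variant, not the paper's exact algorithm: it alternately rescales rows and columns of the prediction matrix so that each of the K labels receives roughly N/K samples.

```python
import torch

# Sketch of balanced self-labelling: given per-sample class probabilities
# P (N x K), iterate row/column scalings so labels are used equally often,
# then read off pseudo-labels.
def sinkhorn_assign(P, n_iters=50, eps=1e-8):
    Q = P.clone().T          # K x N
    Q /= Q.sum()
    K, N = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True) + eps   # balance labels (rows)
        Q /= K
        Q /= Q.sum(dim=0, keepdim=True) + eps   # normalize per sample (columns)
        Q /= N
    return (Q * N).T         # back to N x K soft assignments

probs = torch.softmax(torch.randn(1000, 10), dim=1)
q = sinkhorn_assign(probs)
pseudo_labels = q.argmax(dim=1)   # roughly 100 samples per label
```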
In this paper, we show that recent advances in self-supervised feature learning enable unsupervised object discovery and semantic segmentation with performance matching that of supervised semantic segmentation from ten years ago. We propose a methodology based on unsupervised saliency masks and self-supervised feature clustering to kickstart object discovery, and then train a semantic segmentation network on the resulting pseudo-labels to bootstrap the system on images with multiple objects. We present results on Pascal VOC which go far beyond the current state of the art (47.3 mIoU), and we report for the first time results on COCO for the whole set of 81 categories: our method discovers 34 categories with more than 20% IoU, while obtaining an average IoU of 19.6 over all 81 categories.
In this paper we study the problem of image representation learning without human annotation. By following the principles of self-supervision, we build a convolutional neural network (CNN) that can be trained to solve Jigsaw puzzles as a pretext task, which requires no manual labeling, and then later repurposed to solve object classification and detection. To maintain compatibility across tasks we introduce the context-free network (CFN), a siamese-ennead CNN. The CFN takes image tiles as input and explicitly limits the receptive field (or context) of its early processing units to one tile at a time. We show that the CFN includes fewer parameters than AlexNet while preserving the same semantic learning capabilities. By training the CFN to solve Jigsaw puzzles, we learn both a feature mapping of object parts as well as their correct spatial arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. Our proposed method for learning visual representations outperforms state-of-the-art methods on several transfer learning benchmarks.
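A toy version of the pretext data preparation may clarify the setup. This sketch (our own, under simple assumptions, not the CFN pipeline itself) cuts an image into a 3x3 grid of tiles, shuffles them with a permutation drawn from a fixed set, and uses the permutation index as the classification target; the paper additionally selects a maximally diverse permutation subset and feeds tiles through the siamese CFN.

```python
import torch

# Toy Jigsaw sample construction: the model would see the shuffled tiles
# and predict which permutation was applied.
def make_jigsaw_sample(img, permutations):
    # img: (C, H, W) with H and W divisible by 3.
    C, H, W = img.shape
    th, tw = H // 3, W // 3
    tiles = [img[:, i*th:(i+1)*th, j*tw:(j+1)*tw]
             for i in range(3) for j in range(3)]
    target = torch.randint(len(permutations), (1,)).item()
    perm = permutations[target]
    shuffled = torch.stack([tiles[p] for p in perm])   # (9, C, th, tw)
    return shuffled, target                            # predict `target` from the tiles

# Tiny permutation set for illustration only.
perms = [list(range(9)), [8, 7, 6, 5, 4, 3, 2, 1, 0]]
x = torch.randn(3, 225, 225)
tiles, label = make_jigsaw_sample(x, perms)
```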
Modern self-supervised learning algorithms typically enforce persistency of instance representations across views. While this is very effective for learning holistic image and video representations, it becomes sub-optimal for learning spatio-temporally fine-grained features in videos, where scenes and instances evolve through space and time. In this paper, we present the Contextualized Spatio-Temporal Contrastive Learning (ConST-CL) framework, to effectively learn spatio-temporally fine-grained representations using self-supervision. We first design a region-based self-supervised pretext task which requires the model to learn to transform instance representations from one view into context features of another view. Furthermore, we introduce a simple network design that effectively reconciles the simultaneous learning of both holistic and local representations. We evaluate our learned representations on a variety of downstream tasks, and ConST-CL achieves state-of-the-art results on four datasets. For spatio-temporal action localization, ConST-CL achieves 39.4% mAP with ground-truth boxes and 30.5% mAP with detected boxes on the AVA-Kinetics validation set. For object tracking, ConST-CL achieves 78.1% precision and 55.2% success scores on OTB2015. Furthermore, ConST-CL achieves 94.8% and 71.9% top-1 fine-tuning accuracy on the video action recognition datasets UCF101 and HMDB51, respectively. We plan to release our code and models to the public.
Unsupervised object discovery aims to localize objects in images, while removing the dependence on annotations required by most deep learning-based methods. To address this problem, we propose a fully unsupervised, bottom-up approach for multiple object discovery. The proposed approach is a two-stage framework. First, instances of object parts are segmented by using the intra-image similarity between self-supervised local features. The second step merges and filters the object parts to form complete object instances. The latter is performed by two CNN models that capture semantic information on objects from the entire dataset. We demonstrate that the pseudo-labels generated by our method provide a better precision-recall trade-off than existing single and multiple object discovery methods. In particular, we provide state-of-the-art results for both unsupervised class-agnostic object detection and unsupervised image segmentation.
Self-supervised learning has recently shown great potential in vision tasks through contrastive learning, which aims to discriminate each image, or instance, in the dataset. However, such instance-level learning ignores the semantic relationships between instances and sometimes undesirably repels the anchor from semantically similar samples, termed "false negatives". In this work, we show that the unfavorable effect of false negatives is more significant for large-scale datasets with more semantic concepts. To address the issue, we propose a novel self-supervised contrastive learning framework that incrementally detects and explicitly removes false negative samples. Specifically, following the training process, our method dynamically detects increasing numbers of high-quality false negatives, considering that the encoder gradually improves and the embedding space becomes more semantically structured. Next, we discuss two strategies to explicitly remove the detected false negatives during contrastive learning. Extensive experiments show that our framework outperforms other self-supervised contrastive learning methods on multiple benchmarks within a limited-resources setup.
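One simple way to instantiate the explicit-removal idea is to mask out negatives that are too similar to the anchor before computing an InfoNCE-style loss. The sketch below is an assumption-laden illustration (fixed similarity threshold, single anchor), not the paper's detection schedule or either of its two strategies as published.

```python
import torch
import torch.nn.functional as F

# Sketch: drop suspected false negatives (cosine similarity to the anchor
# above `fn_thresh`) from the contrastive denominator.
def contrastive_loss_without_fn(anchor, positive, negatives, tau=0.1, fn_thresh=0.7):
    anchor = F.normalize(anchor, dim=-1)        # (D,)
    positive = F.normalize(positive, dim=-1)    # (D,)
    negatives = F.normalize(negatives, dim=-1)  # (M, D)

    pos_sim = anchor @ positive                 # scalar
    neg_sim = negatives @ anchor                # (M,)
    keep = neg_sim < fn_thresh                  # mask out likely false negatives

    logits = torch.cat([pos_sim.view(1), neg_sim[keep]]) / tau
    # The positive sits at index 0, so the target class is 0.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```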
We propose a new approach to unsupervised learning for video object segmentation (VOS). Unlike previous work, our formulation allows learning dense feature representations directly in a fully convolutional regime. We rely on uniform grid sampling to extract a set of anchors and train our model to disambiguate between them at both the inter- and intra-video level. However, a naive scheme for training such a model results in degenerate solutions. We propose to prevent this with a simple regularization scheme, accommodating the equivariance of the segmentation task to similarity transformations. Our training objective admits an efficient implementation and exhibits fast training convergence. On established VOS benchmarks, our approach exceeds the segmentation accuracy of previous work despite using significantly less training data and compute.
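The anchor-extraction step can be illustrated with a small sketch. The code below shows one plausible reading of uniform grid sampling from a dense feature map; the stride and layout are assumptions made for the example, not the paper's settings.

```python
import torch

# Sketch of uniform grid sampling of anchors from a dense feature map:
# pick features on a regular stride so a contrastive objective can
# disambiguate them within and across videos.
def sample_grid_anchors(feat, stride=4):
    # feat: (C, H, W) dense feature map from a fully convolutional encoder.
    C, H, W = feat.shape
    ys = torch.arange(0, H, stride)
    xs = torch.arange(0, W, stride)
    anchors = feat[:, ys][:, :, xs]   # (C, H/stride, W/stride)
    return anchors.flatten(1).T       # (num_anchors, C), one row per anchor

anchors = sample_grid_anchors(torch.randn(256, 32, 32))  # -> (64, 256)
```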
Jitendra Malik once said, "Supervision is the opium of the AI researcher". Most deep learning techniques heavily rely on extreme amounts of human labels to work effectively. In today's world, the rate of data creation greatly surpasses the rate of data annotation. Full reliance on human annotations is just a temporary means to solve current closed problems in AI. In reality, only a tiny fraction of data is annotated. Annotation Efficient Learning (AEL) is a study of algorithms to train models effectively with fewer annotations. To thrive in AEL environments, we need deep learning techniques that rely less on manual annotations (e.g., image, bounding-box, and per-pixel labels), but learn useful information from unlabeled data. In this thesis, we explore five different techniques for handling AEL.
Detection, segmentation, and tracking of fruits and vegetables are three fundamental tasks for precision agriculture, enabling robotic harvesting and yield estimation. However, modern algorithms are data-hungry, and it is not always possible to gather enough data to apply the best-performing supervised approaches. Since data collection is an expensive and cumbersome task, the ability to use computer vision in agriculture is often out of reach for small businesses. Following our previous work in this context, where we proposed an initial weakly supervised solution to reduce the data needed to obtain state-of-the-art detection and segmentation in precision agriculture applications, here we improve that system and explore the problem of tracking fruits in orchards. We present the case of vineyards of table grapes in southern Lazio (Italy), since grapes are difficult to segment due to occlusions, color, and general illumination conditions. We consider the case in which some initial labeled data are available that can be used as source data (e.g., wine grape data) but differ substantially from the target data (e.g., table grape data). To improve detection and segmentation on the target data, we propose to train the segmentation algorithm with weak bounding-box labels, while for tracking we exploit 3D structure from motion to generate new labels from already-labeled samples. Finally, the two systems are combined into a full semi-supervised approach. Comparisons with state-of-the-art supervised solutions show how our method is able to train new models that achieve high performance with few labeled images and very simple labeling.
Videos are a rich source for self-supervised learning (SSL) thanks to the natural temporal transformations that objects undergo. However, current methods typically sample video clips for learning at random, which results in a poor supervisory signal. In this work, we propose PreViTS, an SSL framework that uses an unsupervised tracking signal to select clips containing the same object, which helps to better exploit the temporal transformations of objects. PreViTS further uses the tracking signal to spatially constrain the frame regions to learn from, and trains the model to localize meaningful objects by providing supervision on Grad-CAM attention maps. To evaluate our approach, we train momentum contrastive (MoCo) encoders on the VGG-Sound and Kinetics-400 datasets with PreViTS. Training with PreViTS outperforms the representations learned by MoCo alone on both image recognition and video classification downstream tasks, obtaining state-of-the-art performance on action classification. PreViTS helps learn feature representations that are more robust to background and context changes, as shown by experiments on image and video datasets with background changes. Learning from large-scale uncurated videos with PreViTS could lead to more accurate and robust visual feature representations.
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which require dense labeling of images. While few-shot object detection is about training a model on novel (unseen) object classes with little data, it still requires prior training on many labeled examples of base (seen) classes. On the other hand, self-supervised methods aim at learning representations from unlabeled data which transfer to downstream tasks such as object detection. Combining few-shot and self-supervised object detection is a promising research direction. In this survey, we review and characterize the most recent approaches on few-shot and self-supervised object detection. Then, we give our main takeaways and discuss future research directions. Project page: https://gabrielhuang.github.io/fsod-survey/