We present Fast Language-Image Pre-training (FLIP), a simple and more efficient method for training CLIP. Our method randomly masks out and removes a large portion of image patches during training. Masking allows us to learn from more image-text pairs given the same wall-clock time and to contrast more samples per iteration with a similar memory footprint. It leads to a favorable trade-off between accuracy and training time. In our experiments on 400 million image-text pairs, FLIP improves both accuracy and speed over the no-masking baseline. On a large diversity of downstream tasks, FLIP dominantly outperforms its CLIP counterparts trained on the same data. Facilitated by the speedup, we explore the scaling behavior of increasing the model size, data size, or training length, and report encouraging results and comparisons. We hope that our work will foster future research on scaling vision-language learning.
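To make the masking idea concrete, below is a minimal PyTorch-style sketch of dropping a random subset of image patch tokens before they enter the image encoder; the function name, tensor shapes, and the `mask_ratio` default are illustrative assumptions, not the paper's implementation.

```python
import torch

def random_mask_patches(patch_tokens, mask_ratio=0.5):
    """Drop a random fraction of patch tokens (FLIP-style masking, sketch only).

    patch_tokens: (batch, num_patches, dim) tensor of patch embeddings.
    Returns only the kept subset, so the image encoder processes roughly
    (1 - mask_ratio) of the patches, reducing compute and memory per image.
    """
    b, n, d = patch_tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    # Independent random permutation per sample; keep the first n_keep indices.
    noise = torch.rand(b, n, device=patch_tokens.device)
    keep_idx = noise.argsort(dim=1)[:, :n_keep]              # (b, n_keep)
    kept = torch.gather(patch_tokens, 1,
                        keep_idx.unsqueeze(-1).expand(-1, -1, d))
    return kept  # encode `kept` with the image tower, then contrast against text
```

Because each image contributes fewer tokens, a larger batch fits in the same memory, which is where the "more samples per iteration" trade-off comes from.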
We explore the plain, non-hierarchical Vision Transformer (ViT) as a backbone network for object detection. This design enables the original ViT architecture to be fine-tuned for object detection without needing to redesign a hierarchical backbone for pre-training. With minimal adaptations for fine-tuning, our plain-backbone detector can achieve competitive results. Surprisingly, we observe: (i) it is sufficient to build a simple feature pyramid from a single-scale feature map (without the common FPN design), and (ii) it is sufficient to use window attention (without shifting), aided by very few cross-window propagation blocks. With plain ViT backbones pre-trained as Masked Autoencoders (MAE), our detector, named ViTDet, can compete with the previous leading methods that were based on hierarchical backbones, reaching up to 61.3 AP_box on the COCO dataset using only ImageNet-1K pre-training. We hope our study will draw attention to research on plain-backbone detectors. Code for ViTDet is available in Detectron2.
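As a rough illustration of observation (i), the sketch below builds a multi-scale feature pyramid from the single stride-16 ViT feature map using only deconvolution and pooling; the layer choices and dimensions are illustrative assumptions and do not reproduce the official Detectron2 implementation.

```python
import torch.nn as nn

class SimpleFeaturePyramid(nn.Module):
    """ViTDet-style simple feature pyramid (sketch): derive stride-4/8/16/32
    maps from the last ViT feature map alone, with no FPN-style lateral fusion."""

    def __init__(self, dim=768, out_dim=256):
        super().__init__()
        self.to_stride4 = nn.Sequential(                      # upsample 4x
            nn.ConvTranspose2d(dim, dim // 2, 2, stride=2),
            nn.GELU(),
            nn.ConvTranspose2d(dim // 2, out_dim, 2, stride=2))
        self.to_stride8 = nn.ConvTranspose2d(dim, out_dim, 2, stride=2)
        self.to_stride16 = nn.Conv2d(dim, out_dim, 1)
        self.to_stride32 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(dim, out_dim, 1))

    def forward(self, x):
        # x: (batch, dim, H/16, W/16), the single-scale output of the ViT backbone.
        return [self.to_stride4(x), self.to_stride8(x),
                self.to_stride16(x), self.to_stride32(x)]
```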
In this paper, we introduce VCSL (Video Copy Segment Localization), a new comprehensive video copy dataset with segment-level annotations. Compared with existing copy detection datasets, which are restricted either by video-level annotation or by small scale, VCSL not only has two orders of magnitude more segment-level labeled data, with 160k realistic video copy pairs containing more than 280k localized copied segment pairs, but also covers a wide variety of video categories and a wide range of video durations. All copied segments inside each collected video pair are manually extracted and accompanied by precisely annotated starting and ending timestamps. Alongside the dataset, we also propose a novel evaluation protocol that better measures the prediction accuracy of copied overlapping segments between a pair of videos and shows improved adaptability across different scenarios. By benchmarking several baseline and state-of-the-art segment-level video copy detection methods with the proposed dataset and evaluation metric, we provide a comprehensive analysis that reveals the strengths and weaknesses of current approaches. The VCSL dataset, metric, and benchmark code are all publicly available at https://github.com/alipay/vcsl.
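For intuition about what segment-level evaluation involves, here is a toy temporal-IoU precision/recall sketch in Python; it is not the evaluation protocol proposed for VCSL, only a simple baseline-style metric, and the function names and IoU threshold are illustrative.

```python
def temporal_iou(seg_a, seg_b):
    """IoU of two temporal segments given as (start, end) timestamps in seconds."""
    inter = max(0.0, min(seg_a[1], seg_b[1]) - max(seg_a[0], seg_b[0]))
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0

def segment_precision_recall(pred_segments, gt_segments, iou_thresh=0.5):
    """Toy metric: a predicted copied segment counts as a hit if it overlaps some
    ground-truth segment with IoU >= iou_thresh (and symmetrically for recall)."""
    if not pred_segments or not gt_segments:
        return 0.0, 0.0
    precision = sum(any(temporal_iou(p, g) >= iou_thresh for g in gt_segments)
                    for p in pred_segments) / len(pred_segments)
    recall = sum(any(temporal_iou(g, p) >= iou_thresh for p in pred_segments)
                 for g in gt_segments) / len(gt_segments)
    return precision, recall
```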
Object detection is a central downstream task used to test whether pre-trained network parameters confer benefits, such as improved accuracy or training speed. The complexity of object detection methods can make this benchmarking non-trivial when new architectures, such as Vision Transformer (ViT) models, arrive. These difficulties (e.g., architectural incompatibility, slow training, high memory consumption, unknown training formulae, etc.) have prevented recent studies from benchmarking detection transfer learning with standard ViT models. In this paper, we present training techniques that overcome these challenges, enabling the use of standard ViT models as the backbone of Mask R-CNN. These tools facilitate the primary goal of our study: we compare five ViT initializations, including recent state-of-the-art self-supervised learning methods, supervised initialization, and a strong random initialization baseline. Our results show that recent masking-based unsupervised learning methods may offer convincing transfer learning improvements on COCO, increasing box AP by up to 4% (absolute) over supervised and prior self-supervised pre-training methods. Moreover, masking-based initializations scale better, with the improvement growing as model size increases.
This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), and a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance on downstream tasks outperforms supervised pre-training and shows promising scaling behavior.
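The asymmetric encoder-decoder idea can be sketched in a few lines of PyTorch; the layer sizes, depths, and the omission of positional embeddings and un-shuffling are simplifications for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    """Minimal MAE-style sketch: embed and encode only the visible patches,
    then reconstruct the masked patch pixels with a lightweight decoder fed
    the latent tokens plus learned mask tokens."""

    def __init__(self, patch_pixels=768, dim=256, dec_dim=128):
        super().__init__()
        self.embed = nn.Linear(patch_pixels, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=4)
        self.enc_to_dec = nn.Linear(dim, dec_dim)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dec_dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dec_dim, nhead=4, batch_first=True), num_layers=2)
        self.pred = nn.Linear(dec_dim, patch_pixels)

    def forward(self, patches, mask_ratio=0.75):
        # patches: (batch, num_patches, patch_pixels) of flattened raw pixels.
        b, n, _ = patches.shape
        n_keep = int(n * (1 - mask_ratio))
        ids = torch.rand(b, n, device=patches.device).argsort(dim=1)
        keep, drop = ids[:, :n_keep], ids[:, n_keep:]
        gather = lambda x, idx: torch.gather(
            x, 1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        latent = self.encoder(self.embed(gather(patches, keep)))  # visible only
        dec_in = torch.cat([self.enc_to_dec(latent),
                            self.mask_token.expand(b, n - n_keep, -1)], dim=1)
        recon = self.pred(self.decoder(dec_in))
        # Reconstruction loss is computed only on the masked patches.
        loss = ((recon[:, n_keep:] - gather(patches, drop)) ** 2).mean()
        return loss
```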
This paper does not describe a novel method. Instead, it studies a straightforward, incremental, yet must-know baseline given the recent progress in computer vision: self-supervised learning for Vision Transformers (ViT). While the training recipes for standard convolutional networks have been highly mature and robust, the recipes for ViT are yet to be built, especially in the self-supervised scenarios where training becomes more challenging. In this work, we go back to basics and investigate the effects of several fundamental components for training self-supervised ViT. We observe that instability is a major issue that degrades accuracy, and it can be hidden by apparently good results. We reveal that these results are indeed partial failures, and they can be improved when training is made more stable. We benchmark ViT results in MoCo v3 and several other self-supervised frameworks, with ablations in various aspects. We discuss the currently positive evidence as well as challenges and open questions. We hope that this work will provide useful data points and experience for future research.
Siamese networks have become a common structure in various recent models for unsupervised visual representation learning. These models maximize the similarity between two augmentations of one image, subject to certain conditions for avoiding collapsing solutions. In this paper, we report surprising empirical results that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders. Our experiments show that collapsing solutions do exist for the loss and structure, but a stop-gradient operation plays an essential role in preventing collapsing. We provide a hypothesis on the implication of stop-gradient, and further show proof-of-concept experiments verifying it. Our "SimSiam" method achieves competitive results on ImageNet and downstream tasks. We hope this simple baseline will motivate people to rethink the roles of Siamese architectures for unsupervised representation learning. Code will be made available.
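The role of the stop-gradient can be made concrete with a short sketch of the symmetric negative cosine similarity loss; the variable names are illustrative, with p1/p2 denoting predictor outputs and z1/z2 projector outputs of the two augmented views.

```python
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """Symmetric negative cosine similarity (SimSiam-style sketch).
    The detach() on z implements the stop-gradient that prevents collapse."""
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)
```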
In this work, we present a new network design paradigm. Our goal is to help advance the understanding of network design and discover design principles that generalize across settings. Instead of focusing on designing individual network instances, we design network design spaces that parametrize populations of networks. The overall process is analogous to classic manual design of networks, but elevated to the design space level. Using our methodology we explore the structure aspect of network design and arrive at a low-dimensional design space consisting of simple, regular networks that we call RegNet. The core insight of the RegNet parametrization is surprisingly simple: widths and depths of good networks can be explained by a quantized linear function. We analyze the RegNet design space and arrive at interesting findings that do not match the current practice of network design. The RegNet design space provides simple and fast networks that work well across a wide range of flop regimes. Under comparable training settings and flops, the RegNet models outperform the popular EfficientNet models while being up to 5× faster on GPUs.
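Here is a short sketch of the quantized linear width rule, with default parameter values chosen only for illustration rather than taken from the official configurations.

```python
import numpy as np

def regnet_widths(depth, w0=24, wa=36.0, wm=2.5, quant=8):
    """RegNet-style quantized linear widths (sketch): start from the continuous
    rule u_j = w0 + wa * j, quantize in log-space to powers of wm, then round
    each width to a multiple of `quant`."""
    j = np.arange(depth)
    u = w0 + wa * j                               # continuous linear widths
    s = np.round(np.log(u / w0) / np.log(wm))     # quantize in log-space
    w = w0 * np.power(wm, s)                      # per-block widths
    return (np.round(w / quant) * quant).astype(int)
```

Consecutive blocks that end up with the same width form a stage, which is how the quantized rule recovers a stage-wise network structure.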
We present a new method for efficient high-quality image segmentation of objects and scenes. By analogizing classical computer graphics methods for efficient rendering with over- and undersampling challenges faced in pixel labeling tasks, we develop a unique perspective of image segmentation as a rendering problem. From this vantage, we present the PointRend (Point-based Rendering) neural network module: a module that performs point-based segmentation predictions at adaptively selected locations based on an iterative subdivision algorithm. PointRend can be flexibly applied to both instance and semantic segmentation tasks by building on top of existing state-of-the-art models. While many concrete implementations of the general idea are possible, we show that a simple design already achieves excellent results. Qualitatively, PointRend outputs crisp object boundaries in regions that are over-smoothed by previous methods. Quantitatively, PointRend yields significant gains on COCO and Cityscapes, for both instance and semantic segmentation. PointRend's efficiency enables output resolutions that are otherwise impractical in terms of memory or computation compared to existing approaches. Code has been made available at https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend.
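The adaptive selection of locations can be illustrated with a short sketch that picks the points where a coarse mask prediction is least confident; the full module also mixes in random points during training and applies an iterative subdivision schedule at inference, both omitted here, and all names are illustrative.

```python
import torch

def uncertain_points(coarse_logits, num_points):
    """Pick the `num_points` spatial locations where a coarse binary mask is least
    confident (logit closest to the 0.5 decision boundary). A point head would
    then re-predict labels only at these locations (PointRend-style sketch)."""
    b, _, h, w = coarse_logits.shape                 # (batch, 1, h, w) mask logits
    uncertainty = -coarse_logits.sigmoid().sub(0.5).abs().flatten(1)   # (b, h*w)
    idx = uncertainty.topk(num_points, dim=1).indices
    ys, xs = idx // w, idx % w
    return torch.stack([xs, ys], dim=-1)             # (b, num_points, 2) coordinates
```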
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning [29] as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.
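A compact sketch of the dictionary-as-queue contrastive (InfoNCE) loss is given below; the momentum update of the key encoder and the enqueue/dequeue bookkeeping are omitted, and all names and the temperature default are illustrative.

```python
import torch
import torch.nn.functional as F

def moco_loss(q, k, queue, temperature=0.07):
    """MoCo-style contrastive loss (sketch): queries `q` from the online encoder,
    keys `k` from the momentum encoder (treated as constants), and a `queue` of
    past keys of shape (K, dim) serving as negatives."""
    q, k, queue = F.normalize(q, dim=1), F.normalize(k, dim=1), F.normalize(queue, dim=1)
    l_pos = (q * k.detach()).sum(dim=1, keepdim=True)   # (N, 1) positive logits
    l_neg = q @ queue.t()                                # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positives at 0
    return F.cross_entropy(logits, labels)
```

In the full method, the key encoder's parameters are a momentum-averaged copy of the query encoder's, and after each step the current keys are enqueued while the oldest batch of keys is dequeued.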