Vision Transformers (ViTs) are becoming increasingly popular in image processing. In this work, we investigate the effectiveness of test-time adaptation (TTA) on ViTs, a technique that has emerged to let a model correct its own predictions at test time. We first benchmark various test-time adaptation methods on ViT-B16 and ViT-L16. The results show that TTA is effective for ViTs when an appropriate loss function is used, and that the prior convention of carefully selecting which modulation parameters to update is unnecessary. Based on these observations, we propose a new test-time adaptation method called Class-conditional Feature Alignment (CFA), which minimizes both the class-conditional distribution differences and the whole distribution differences of the hidden representations between source and target in an online manner. Experiments on image classification under common corruptions (CIFAR-10-C, CIFAR-100-C, and ImageNet-C) and domain adaptation (digits datasets and ImageNet-Sketch) show that CFA stably outperforms existing baselines across datasets. We also verify that CFA is model-agnostic by experimenting on ResNet, MLP-Mixer, and several ViT variants (ViT-AugReg, DeiT, and BeiT). With a BeiT backbone, CFA achieves a 19.8% top-1 error rate on ImageNet-C, outperforming the existing test-time adaptation baseline at 44.0%. This is a state-of-the-art result among TTA methods that do not require altering the training phase.
We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving performance of a deployed computer vision model under systematic domain shifts. We conduct a wide range of large-scale experiments and show consistent improvements irrespective of the model architecture, the pre-training technique or the type of distribution shift. At the same time, self-learning is simple to use in practice because it does not require knowledge or access to the original training data or scheme, is robust to hyperparameter choices, is straight-forward to implement and requires only a few adaptation epochs. This makes self-learning techniques highly attractive for any practitioner who applies machine learning algorithms in the real world. We present state-of-the-art adaptation results on CIFAR10-C (8.5% error), ImageNet-C (22.0% mCE), ImageNet-R (17.4% error) and ImageNet-A (14.8% error), theoretically study the dynamics of self-supervised adaptation methods and propose a new classification dataset (ImageNet-D) which is challenging even with adaptation.
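To make the entropy-minimization idea concrete, here is a minimal sketch of the objective such self-learning methods optimize on unlabeled test batches. The function names are ours and the snippet is illustrative, not the authors' implementation.

```python
import math

def softmax(logits):
    # Numerically stabilized softmax over one example's logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_loss(logits_batch):
    # Mean Shannon entropy of the model's own predictions; self-learning
    # methods in the TENT family minimize this on unlabeled test batches,
    # pushing the adapted model toward confident predictions.
    total = 0.0
    for logits in logits_batch:
        p = softmax(logits)
        total += -sum(pi * math.log(pi) for pi in p if pi > 0)
    return total / len(logits_batch)
```

Confident predictions have low entropy, so minimizing this loss sharpens the model's outputs on the shifted test distribution.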
Models should be able to adapt to unseen data during test-time to avoid performance drops caused by inevitable distribution shifts in real-world deployment scenarios. In this work, we tackle the practical yet challenging test-time adaptation (TTA) problem, where a model adapts to the target domain without accessing the source data. We propose a simple recipe called \textit{Data-efficient Prompt Tuning} (DePT) with two key ingredients. First, DePT plugs visual prompts into the vision Transformer and only tunes these source-initialized prompts during adaptation. We find such parameter-efficient finetuning can efficiently adapt the model representation to the target domain without overfitting to the noise in the learning objective. Second, DePT bootstraps the source representation to the target domain by memory bank-based online pseudo-labeling. A hierarchical self-supervised regularization specially designed for prompts is jointly optimized to alleviate error accumulation during self-training. With much fewer tunable parameters, DePT demonstrates not only state-of-the-art performance on major adaptation benchmarks VisDA-C, ImageNet-C, and DomainNet-126, but also superior data efficiency, i.e., adaptation with only 1\% or 10\% data without much performance degradation compared to 100\% data. In addition, DePT is also versatile to be extended to online or multi-source TTA settings.
Domain adaptation is crucial for adapting a learned model to new scenarios, such as domain shifts or changing data distributions. Current approaches usually require a large amount of labeled or unlabeled data from the shifted domain. This can be a hurdle in fields that require continuous dynamic adaptation or suffer from data scarcity, e.g., autonomous driving in challenging weather conditions. To address this problem of continuous adaptation to distribution shifts, we propose Dynamic Unsupervised Adaptation (DUA). We modify the model's feature representations by continuously adapting the statistics of its batch normalization layers. We show that a strong performance gain can be achieved by accessing only a small fraction of unlabeled data from the shifted domain and adapting sequentially. With even less than 1% of unlabeled data from the target domain, DUA already achieves results competitive with strong baselines. In addition, the computational overhead is minimal in contrast to previous approaches. Our approach is simple yet effective and can be applied to any architecture that uses batch normalization as one of its components. We show the utility of DUA by evaluating it on a variety of domain adaptation datasets and tasks, including object recognition, digit recognition, and object detection.
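The core of such an approach can be sketched in a few lines: a running update of batch-normalization statistics toward those of small unlabeled target batches. This is an illustrative simplification under our own naming (DUA additionally decays the momentum over adaptation steps).

```python
def adapt_bn_stats(running_mean, running_var, batch_mean, batch_var, momentum=0.1):
    # Shift the frozen source BN statistics toward the statistics of a
    # small unlabeled target batch; repeating this sequentially adapts
    # the feature representation without labels or backpropagation.
    new_mean = [(1 - momentum) * m + momentum * bm
                for m, bm in zip(running_mean, batch_mean)]
    new_var = [(1 - momentum) * v + momentum * bv
               for v, bv in zip(running_var, batch_var)]
    return new_mean, new_var
```

Because only normalization statistics change, no gradients are computed, which is why the overhead stays minimal.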
This paper presents a novel test-time adaptation strategy that adjusts a model pre-trained on the source domain using only unlabeled online data from the target domain, to alleviate the performance degradation caused by the distribution shift between source and target. Adapting all model parameters with unlabeled online data can be harmful due to erroneous signals from the unsupervised objective. To mitigate this problem, we propose a shift-agnostic weight regularization that encourages large updates to model parameters sensitive to distribution shift while only slightly updating those insensitive to the shift during test-time adaptation. This regularization enables the model to quickly adapt to the target domain without performance degradation by exploiting the benefit of a high learning rate. In addition, we present an auxiliary task based on nearest source prototypes that aligns source and target features, which helps reduce the distribution shift and leads to further performance improvements. We show that our method achieves state-of-the-art performance on various standard benchmarks and even outperforms its supervised counterpart.
A fundamental requirement of deployed ML models is to generalize to data drawn from test distributions that differ from training. A popular solution to this problem is to adapt a pre-trained model to novel domains using only unlabeled data. In this paper, we focus on a challenging variant of this problem in which access to the original source data is restricted. While fully test-time adaptation (FTTA) and unsupervised domain adaptation (UDA) are closely related, advances in UDA are not readily applicable to TTA, since most UDA methods require access to the source data. Hence, we propose a new approach, CATTAn, that bridges UDA and FTTA by relaxing the need to access the entire source data through a novel deep subspace alignment strategy. With the minimal overhead of storing a subspace basis set for the source data, CATTAn enables unsupervised alignment between source and target data during adaptation. Through extensive experimental evaluation on multiple 2D and 3D vision benchmarks (ImageNet-C, Office-31, OfficeHome, DomainNet, PointDA-10) and model architectures, we demonstrate significant gains in FTTA performance. Furthermore, we make a number of crucial findings on the utility of the alignment objective even with inherently robust models, pre-trained ViT representations, and low sample availability in the target domain.
In test-time adaptation (TTA), given a model trained on some source data, the goal is to adapt it to make better predictions on test instances from a different distribution. Crucially, TTA assumes no access to the source data, or even to any additional labeled or unlabeled samples from the target distribution, to finetune the source model. In this work, we consider TTA in a more pragmatic setting that we call SITA (Single Image Test-time Adaptation). Here, when making each prediction, the model has access only to the given single test instance rather than a batch of instances, as is typically assumed in the literature. This is motivated by realistic scenarios where inference is needed on demand and cannot be delayed to "batch-ify" incoming requests, or where inference happens on an edge device (such as a mobile phone) with no scope for batching. The entire adaptation process in SITA must be extremely fast, as it happens at inference time. To address this, we propose a novel approach, AugBN, for the SITA setting that requires only forward propagation. The approach can adapt any off-the-shelf trained model to individual test instances for both classification and segmentation tasks. AugBN estimates the normalization statistics of the unseen test distribution from the given test image using only a single forward pass with label-preserving transformations. Since AugBN involves no backpropagation, it is significantly faster than other recent methods. To the best of our knowledge, this is the first work to address this hard adaptation problem using only a single test image. Despite being very simple, our framework achieves significant performance gains over directly applying the source model to the target instances, as reflected in our extensive experiments and ablation studies.
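A rough sketch of the single-image idea, under our own naming rather than the paper's code: label-preserving augmentations of the one available test image form a surrogate batch, from which normalization statistics can be estimated in a single forward pass.

```python
def stats_from_augmentations(image, augment, n_views=8):
    # Build a surrogate "batch" from label-preserving augmentations of a
    # single test image, then estimate the mean/variance that a BN layer
    # would otherwise compute from a real test batch.
    views = [augment(image) for _ in range(n_views)]
    flat = [x for view in views for x in view]
    mean = sum(flat) / len(flat)
    var = sum((x - mean) ** 2 for x in flat) / len(flat)
    return mean, var
```

Here `image` is flattened to a list of values and `augment` is any label-preserving transform; in practice this estimation happens per BN layer inside the network.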
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
Batch normalization (BN) is a ubiquitous technique for training deep neural networks that accelerates their convergence and improves accuracy. However, we demonstrate that BN has a fundamental drawback: it incentivizes the model to rely on low-variance features that are highly specific to the training (in-domain) data, hurting generalization performance on out-of-domain examples. In this work, we first investigate this phenomenon by showing that removing BN layers across a variety of architectures leads to lower out-of-domain and corruption errors at the cost of higher in-domain errors. We then propose Counterbalancing Teacher (CT), a method that leverages a frozen copy of the same model as a teacher to enforce the student network's learning of robust representations, substantially adjusting its weights through a consistency loss function. This regularization signal helps CT perform well under unforeseen data shifts, even without information from the target domain as required in prior work. Theoretically, we show in an overparameterized linear regression setting why normalization leads to a model's reliance on such in-domain features, and empirically we demonstrate the efficacy of CT by outperforming several baselines on robustness benchmarks such as CIFAR-10-C, CIFAR-100-C, and VLCS.
Domain generalization (DG) is a difficult transfer learning problem that aims to learn a model generalizable to unseen domains. Recent massive pre-trained models such as CLIP and GPT-3, i.e., foundation models (FMs), have been shown to be robust to many distribution shifts and should therefore lead to substantial improvements in DG. In this work, we study generic ways to adopt CLIP for DG problems in image classification, where we evaluate both the naive zero-shot setting and the full DG learning setting. For the latter, we propose AP (Amortized Prompt) as a novel approach for domain inference in the form of prompt generation. On several standard domain generalization benchmark datasets, namely PACS, VLCS, OfficeHome, and TerraIncognita, CLIP provides comparable performance without fine-tuning any parameters, suggesting the applicability and importance of FMs in DG. In addition, we show that combining domain prompt inference with CLIP enables AP to outperform strong baselines by a large margin, raising accuracy from 71.3% to 79.3%. We hope the simplicity and success of our approach emphasize the importance of foundation models and lead to their wider adoption and analysis in the field of domain generalization.
Although action recognition systems can achieve top performance when evaluated on in-distribution test points, they are vulnerable to unanticipated distribution shifts in test data. However, test-time adaptation of video action recognition models against common distribution shifts has so far not been demonstrated. We propose to address this problem with an approach tailored to spatio-temporal models that is capable of adaptation on a single video sample at a step. It consists in a feature distribution alignment technique that aligns online estimates of test set statistics towards the training statistics. We further enforce prediction consistency over temporally augmented views of the same test video sample. Evaluations on three benchmark action recognition datasets show that our proposed technique is architecture-agnostic and able to significantly boost the performance on both, the state of the art convolutional architecture TANet and the Video Swin Transformer. Our proposed method demonstrates a substantial performance gain over existing test-time adaptation approaches in both evaluations of a single distribution shift and the challenging case of random distribution shifts. Code will be available at \url{https://github.com/wlin-at/ViTTA}.
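The alignment idea can be sketched as follows (an illustrative simplification under our own naming, not the released implementation): maintain exponential moving averages of test-time feature statistics and penalize their distance to the stored training statistics.

```python
def ema_update(ema, value, alpha=0.9):
    # Online estimate of test-set feature statistics across incoming samples.
    return [alpha * e + (1 - alpha) * v for e, v in zip(ema, value)]

def alignment_loss(test_mean, test_var, train_mean, train_var):
    # Penalize the gap between the online test statistics and the stored
    # training statistics; minimizing this aligns the two feature
    # distributions channel by channel.
    gap_mean = sum(abs(a - b) for a, b in zip(test_mean, train_mean))
    gap_var = sum(abs(a - b) for a, b in zip(test_var, train_var))
    return gap_mean + gap_var
```

Because the moving averages are updated one sample at a time, this style of objective supports adaptation on a single video sample per step.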
Test-time adaptation is the problem of adapting a source pre-trained model using test inputs from a target domain without access to source domain data. Most of the existing approaches address the setting in which the target domain is stationary. Moreover, these approaches are prone to making erroneous predictions with unreliable uncertainty estimates when distribution shifts occur. Hence, test-time adaptation in the face of non-stationary target domain shift becomes a problem of significant interest. To address these issues, we propose a principled approach, PETAL (Probabilistic lifElong Test-time Adaptation with seLf-training prior), which looks into this problem from a probabilistic perspective using a partly data-dependent prior. A student-teacher framework, where the teacher model is an exponential moving average of the student model naturally emerges from this probabilistic perspective. In addition, the knowledge from the posterior distribution obtained for the source task acts as a regularizer. To handle catastrophic forgetting in the long term, we also propose a data-driven model parameter resetting mechanism based on the Fisher information matrix (FIM). Moreover, improvements in experimental results suggest that FIM based data-driven parameter restoration contributes to reducing the error accumulation and maintaining the knowledge of recent domain by restoring only the irrelevant parameters. In terms of predictive error rate as well as uncertainty based metrics such as Brier score and negative log-likelihood, our method achieves better results than the current state-of-the-art for online lifelong test time adaptation across various benchmarks, such as CIFAR-10C, CIFAR-100C, ImageNetC, and ImageNet3DCC datasets.
Domain shifts at test time are inevitable in practice. Test-time adaptation addresses this problem by adapting the model during deployment. Recent work has theoretically shown that self-training can be a strong method under gradual domain shifts. In this work, we show a natural connection between gradual domain adaptation and test-time adaptation. We release a new synthetic dataset called CarlaTTA that allows exploring gradual domain shifts at test time, and we evaluate several methods from unsupervised domain adaptation and test-time adaptation on it. We propose a new method, GTTA, based on self-training and style transfer. GTTA explicitly exploits gradual domain shifts and sets a new standard in this setting. We further demonstrate the effectiveness of our method on the continual and gradual CIFAR10C, CIFAR100C, and ImageNet-C benchmarks.
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require to access the source data when learning to adapt the model, making them risky and inefficient for decentralized private data. This work tackles a practical setting where only a trained source model is available and investigates how we can effectively utilize such a model without source data to solve UDA problems. We propose a simple yet generic representation learning framework, named Source HypOthesis Transfer (SHOT). SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and selfsupervised pseudo-labeling to implicitly align representations from the target domains to the source hypothesis. To verify its versatility, we evaluate SHOT in a variety of adaptation cases including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results among multiple domain adaptation benchmarks.
In this paper, we propose Test-Time Training, a general approach for improving the performance of predictive models when training and test data come from different distributions. We turn a single unlabeled test sample into a self-supervised learning problem, on which we update the model parameters before making a prediction. This also extends naturally to data in an online stream. Our simple approach leads to improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts.
Recently, self-supervised Masked Autoencoders (MAE) have attracted unprecedented attention for their impressive representation learning ability. However, the pretext task, Masked Image Modeling (MIM), reconstructs missing local patches and lacks a global understanding of the image. This paper extends MAE to a fully supervised setting by adding a supervised classification branch, thereby enabling MAE to learn global features effectively from golden labels. The proposed Supervised MAE (SupMAE) exploits only a visible subset of image patches for classification, unlike standard supervised pre-training, which uses all image patches. Through experiments, we demonstrate that SupMAE is not only more training-efficient but also learns more robust and transferable features. Specifically, SupMAE achieves performance comparable to MAE using only 30% of the compute when evaluated on ImageNet with the ViT-B/16 model. SupMAE's robustness on ImageNet variants and its transfer learning performance outperform both MAE and standard supervised pre-training counterparts. Code will be made publicly available.
Transformers, composed of multiple self-attention layers, hold strong promise as a generic learning primitive applicable to different data modalities, including the recent breakthroughs in computer vision achieving state-of-the-art (SOTA) standard accuracy. What remains largely unexplored is their robustness evaluation and attribution. In this work, we study the robustness of the Vision Transformer (ViT) against common corruptions and perturbations, distribution shifts, and natural adversarial examples. We use six diverse ImageNet datasets for robust classification to conduct a comprehensive performance comparison of ViT models and SOTA convolutional neural networks (CNNs), including Big Transfer. Through a series of systematically designed experiments, we present analyses providing both quantitative and qualitative indications of why ViTs are indeed more robust learners. For example, with fewer parameters and similar dataset and pre-training combinations, ViT achieves a top-1 accuracy of 28.10% on ImageNet-A, which is 4.3x higher than that of a comparable BiT variant. Our analyses of image masking, Fourier spectrum sensitivity, and spread of the discrete cosine energy spectrum reveal intriguing properties of ViT that contribute to its improved robustness. Code for reproducing our experiments is available at https://git.io/j3vo0.
Test-time adaptation (TTA) refers to adapting neural networks to distribution shifts, with access only to unlabeled test samples from the new domain at test time. Prior TTA methods optimize unsupervised objectives such as the entropy of model predictions in TENT [Wang et al., 2021], but it is unclear what exactly makes a TTA loss good. In this paper, we begin with a surprising phenomenon: if we attempt to meta-learn the best possible TTA loss over a broad class of functions, we recover a function remarkably similar to (a temperature-scaled version of) the softmax-entropy employed by TENT. This holds, however, only when the classifier being adapted was trained with cross-entropy; if trained with squared loss, a different best TTA loss emerges. To explain this phenomenon, we analyze TTA through the lens of the convex conjugate of the training loss. We show that under natural conditions, this (unsupervised) conjugate function can be viewed as a good local approximation to the original supervised loss and indeed recovers the best losses found by meta-learning. This leads to a generic recipe for finding a good TTA loss for any given supervised training loss function of a general class. Empirically, our approach consistently dominates other baselines over a wide range of benchmarks. It is particularly interesting when applied to classifiers trained with novel loss functions, e.g., the recently proposed PolyLoss, where it differs substantially from entropy-based losses. Furthermore, we show that our approach can also be interpreted as a kind of self-training with a very specific soft label, which we call the conjugate pseudo-label. Overall, our method provides a broad framework for better understanding and improving test-time adaptation. Code is available at https://github.com/locuslab/tta_conjugate.
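As a toy illustration of the conjugate pseudo-labeling view (our own simplified rendering, not the released code): the soft pseudo-label is a temperature-scaled softmax of the model's own logits, and the TTA loss is the cross-entropy against it, which for a cross-entropy-trained classifier behaves like a temperature-scaled softmax-entropy.

```python
import math

def softmax(logits, temperature=1.0):
    # Numerically stabilized, optionally temperature-scaled softmax.
    m = max(logits)
    exps = [math.exp((z - m) / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def conjugate_pl_loss(logits, temperature=2.0):
    # Self-training with a temperature-scaled soft pseudo-label: treat
    # softmax(logits / T) as the (stop-gradient) target and take the
    # cross-entropy against the model's own prediction.
    soft_label = softmax(logits, temperature)
    pred = softmax(logits)
    return -sum(q * math.log(p) for q, p in zip(soft_label, pred) if p > 0)
```

At temperature 1 and uniform logits this reduces to the ordinary prediction entropy, which is where the connection to TENT's objective shows up.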
Recent studies show that Vision Transformers(ViTs) exhibit strong robustness against various corruptions. Although this property is partly attributed to the self-attention mechanism, there is still a lack of systematic understanding. In this paper, we examine the role of self-attention in learning robust representations. Our study is motivated by the intriguing properties of the emerging visual grouping in Vision Transformers, which indicates that self-attention may promote robustness through improved mid-level representations. We further propose a family of fully attentional networks (FANs) that strengthen this capability by incorporating an attentional channel processing design. We validate the design comprehensively on various hierarchical backbones. Our model achieves a state-of-the-art 87.1% accuracy and 35.8% mCE on ImageNet-1k and ImageNet-C with 76.8M parameters. We also demonstrate state-of-the-art accuracy and robustness in two downstream tasks: semantic segmentation and object detection. Code is available at: https://github.com/NVlabs/FAN.
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution. However, this assumption is violated in almost all practical applications: machine learning systems are regularly tested under distribution shift due to changing temporal correlations, atypical end users, or other factors. In this work, we consider the problem setting of domain generalization, in which the training data are structured into domains and there may be multiple test-time shifts corresponding to new domains or domain distributions. Most prior methods aim to learn a single robust model or invariant feature space that performs well across all domains. In contrast, we aim to learn models that adapt to domain shift at test time using unlabeled test points. Our primary contribution is to introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains. Compared to prior methods for robustness, invariance, and adaptation, ARM methods provide performance gains of 1-4% test accuracy on a number of image classification problems exhibiting domain shift.