Test-time adaptation is the problem of adapting a source pre-trained model using test inputs from a target domain, without access to source domain data. Most existing approaches address the setting in which the target domain is stationary; moreover, they are prone to making erroneous predictions with unreliable uncertainty estimates when distribution shifts occur. Hence, test-time adaptation in the face of non-stationary target domain shifts becomes a problem of significant interest. To address these issues, we propose a principled approach, PETAL (Probabilistic lifElong Test-time Adaptation with seLf-training prior), which looks into this problem from a probabilistic perspective using a partly data-dependent prior. A student-teacher framework, where the teacher model is an exponential moving average of the student model, naturally emerges from this probabilistic perspective. In addition, the knowledge from the posterior distribution obtained for the source task acts as a regularizer. To handle catastrophic forgetting in the long term, we also propose a data-driven model-parameter resetting mechanism based on the Fisher information matrix (FIM). Our experimental results suggest that this FIM-based data-driven parameter restoration reduces error accumulation and maintains knowledge of recent domains by restoring only the irrelevant parameters. In terms of predictive error rate as well as uncertainty-based metrics such as Brier score and negative log-likelihood, our method achieves better results than the current state of the art for online lifelong test-time adaptation across various benchmarks: CIFAR-10C, CIFAR-100C, ImageNet-C, and ImageNet-3DCC.
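To make the two mechanisms concrete, the sketch below shows an EMA teacher update and a Fisher-guided reset of low-information parameters back to their source values. This is a minimal PyTorch sketch under our own assumptions; the function names and the thresholding rule are hypothetical, not taken from the paper.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # Teacher weights track an exponential moving average of the student.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

@torch.no_grad()
def fim_reset(student, source_params, fim_diag, reset_ratio=0.01):
    # Hypothetical reset rule: restore the fraction of parameters with the
    # lowest diagonal Fisher information (least relevant to the current
    # domain) back to their source values.
    for p, src, fim in zip(student.parameters(), source_params, fim_diag):
        n_reset = int(reset_ratio * fim.numel())
        if n_reset == 0:
            continue
        thresh = fim.flatten().kthvalue(n_reset).values
        mask = fim <= thresh          # low information => safe to reset
        p[mask] = src[mask]
```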
We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving the performance of a deployed computer vision model under systematic domain shifts. We conduct a wide range of large-scale experiments and show consistent improvements irrespective of the model architecture, the pre-training technique, or the type of distribution shift. At the same time, self-learning is simple to use in practice because it does not require knowledge of or access to the original training data or scheme, is robust to hyperparameter choices, is straightforward to implement, and requires only a few adaptation epochs. This makes self-learning techniques highly attractive for any practitioner who applies machine learning algorithms in the real world. We present state-of-the-art adaptation results on CIFAR10-C (8.5% error), ImageNet-C (22.0% mCE), ImageNet-R (17.4% error) and ImageNet-A (14.8% error), theoretically study the dynamics of self-supervised adaptation methods, and propose a new classification dataset (ImageNet-D) which is challenging even with adaptation.
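For concreteness, here is a minimal sketch of the two self-learning objectives named above, entropy minimization and hard pseudo-labeling, as a single adaptation step; the function name and interface are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def self_learning_step(model, optimizer, x, mode="entropy"):
    # One adaptation step on an unlabeled test batch x.
    logits = model(x)
    probs = logits.softmax(dim=1)
    if mode == "entropy":
        # Entropy minimization: sharpen the model's own predictions.
        loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    else:
        # Hard pseudo-labeling: treat the argmax as the label.
        pseudo = probs.argmax(dim=1).detach()
        loss = F.cross_entropy(logits, pseudo)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```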
Continual Test-Time Adaptation (CTTA) aims to adapt the source model to continually changing unlabeled target domains without access to the source data. Existing methods mainly focus on model-based adaptation in a self-training manner, such as predicting pseudo labels for new domain datasets. Since pseudo labels are noisy and unreliable, these methods suffer from catastrophic forgetting and error accumulation when dealing with dynamic data distributions. Motivated by prompt learning in NLP, in this paper we propose to learn an image-level visual domain prompt for target domains while keeping the source model parameters frozen. During testing, the changing target datasets can be adapted to the source model by reformulating the input data with the learned visual prompts. Specifically, we devise two types of prompts, i.e., domain-specific prompts and domain-agnostic prompts, to extract current domain knowledge and to maintain domain-shared knowledge during continual adaptation. Furthermore, we design a homeostasis-based prompt adaptation strategy to suppress domain-sensitive parameters in domain-invariant prompts so as to learn domain-shared knowledge more effectively. This transition from the model-dependent paradigm to the model-free one enables us to bypass the catastrophic forgetting and error accumulation problems. Experiments show that our proposed method achieves significant performance gains over state-of-the-art methods on four widely used benchmarks, including CIFAR-10C, CIFAR-100C, ImageNet-C, and VLCS.
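A bare-bones sketch of the image-level visual prompt idea: a learnable perturbation is applied to the input while the source model stays frozen. The class name and additive form are illustrative assumptions; the paper's domain-specific/domain-agnostic split and homeostasis strategy are not reproduced here.

```python
import torch
import torch.nn as nn

class VisualPrompt(nn.Module):
    # An image-level prompt: a learnable additive perturbation on the input,
    # tuned while the source model's weights remain frozen.
    def __init__(self, c=3, h=224, w=224):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, c, h, w))

    def forward(self, x):
        return x + self.prompt

# Only the prompt is optimized; the source model is frozen:
# prompt = VisualPrompt()
# opt = torch.optim.Adam(prompt.parameters(), lr=1e-3)
# logits = frozen_model(prompt(x))
```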
Domain shifts of the test-time data are inevitable in practice. Test-time adaptation addresses this problem by adapting the model during deployment. Recent work theoretically showed that self-training can be a strong approach in the case of gradual domain shifts. In this work, we show the natural connection between gradual domain adaptation and test-time adaptation. We publish a new synthetic dataset called CarlaTTA that allows exploring gradual domain shifts during test time, and we evaluate several methods from unsupervised domain adaptation and test-time adaptation on it. We propose a new method, GTTA, based on self-training and style transfer. GTTA explicitly exploits gradual domain shifts and sets a new standard in this area. We further demonstrate the effectiveness of our method on the continual and gradual CIFAR10C, CIFAR100C, and ImageNet-C benchmarks.
Test-time adaptation (TTA) is an emerging paradigm that addresses distribution shifts between the training and testing phases without additional data acquisition or labeling cost; only unlabeled test data streams are used for continual model adaptation. Previous TTA schemes assume that test samples are independent and identically distributed (i.i.d.), even though they are often temporally correlated (non-i.i.d.) in application scenarios such as autonomous driving. We find that most existing TTA methods fail dramatically under such scenarios. Motivated by this, we present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams. Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN), which corrects the normalization of out-of-distribution samples, and (b) Prediction-Balanced Reservoir Sampling (PBRS), which simulates an i.i.d. data stream from a non-i.i.d. stream in a class-balanced manner. Our evaluation on various datasets, including real-world non-i.i.d. streams, demonstrates that the proposed robust TTA not only outperforms state-of-the-art TTA algorithms in the non-i.i.d. setting, but also achieves performance comparable to those algorithms under the i.i.d. assumption.
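The sketch below illustrates the flavor of prediction-balanced reservoir sampling: a class-balanced memory keyed by the model's own predictions, with reservoir sampling inside each class bucket. It is a simplified reading of PBRS, not the paper's exact eviction rule.

```python
import random
from collections import defaultdict

class PredictionBalancedReservoir:
    # Keeps a class-balanced memory of test samples, using the model's own
    # predictions as (pseudo) class labels, so that updates computed from
    # the memory approximate an i.i.d., class-balanced stream.
    def __init__(self, capacity, num_classes):
        self.capacity = capacity
        self.num_classes = num_classes
        self.memory = defaultdict(list)   # predicted class -> samples
        self.seen = defaultdict(int)

    def add(self, sample, pred_class):
        self.seen[pred_class] += 1
        per_class = self.capacity // self.num_classes
        bucket = self.memory[pred_class]
        if len(bucket) < per_class:
            bucket.append(sample)
        else:
            # Reservoir sampling within the class bucket.
            j = random.randrange(self.seen[pred_class])
            if j < per_class:
                bucket[j] = sample
```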
Recent developments in deep learning models that capture the complex temporal patterns of crop phenology from satellite image time series (SITS) have greatly advanced crop classification. However, when applied to target regions spatially different from the training region, these models perform poorly without any target labels, due to temporal shifts of crop phenology between regions. To address this unsupervised cross-region adaptation setting, existing methods learn domain-invariant features without any target supervision, rather than addressing the temporal shift itself. As a consequence, these techniques provide only limited benefits for SITS. In this paper, we propose TimeMatch, a new unsupervised domain adaptation method for SITS that directly accounts for the temporal shift. TimeMatch consists of two components: 1) temporal shift estimation, which estimates the temporal shift of the unlabeled target region with a source-trained model, and 2) TimeMatch learning, which combines temporal shift estimation with semi-supervised learning to adapt a classifier to an unlabeled target region. We also introduce an open-access dataset for cross-region adaptation with SITS from four different regions in Europe. On this dataset, we demonstrate that TimeMatch outperforms all competing methods by 11% in F1-score across five different adaptation scenarios, setting a new state of the art for cross-region adaptation.
Batch Normalization (BN) is a ubiquitous technique for training deep neural networks that accelerates their convergence to higher accuracy. However, we demonstrate that BN comes with a fundamental drawback: it incentivizes the model to rely on low-variance features that are highly specific to the training (in-domain) data, hurting generalization performance on out-of-domain examples. In this work, we first investigate this phenomenon by showing that removing BN layers across a wide range of architectures leads to lower out-of-domain and corruption errors, at the cost of higher in-domain errors. We then propose Counterbalancing Teacher (CT), a method that leverages a frozen copy of the same model as a teacher and substantially adjusts the student network's weights toward learning robust representations through a consistency loss function. This regularization signal helps CT perform well under unforeseen data shifts, even without information from the target domain as in prior works. We theoretically show, in an overparameterized linear regression setting, why normalization leads to the model's reliance on such in-domain features, and we empirically demonstrate the efficacy of CT on robustness benchmarks such as CIFAR-10-C, CIFAR-100-C, and VLCS.
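A minimal sketch of the counterbalancing idea: keep a frozen copy of the model as the teacher and regularize the student toward its outputs with a consistency (KL) loss. The interface names are ours, and the paper's exact loss may differ.

```python
import copy
import torch
import torch.nn.functional as F

def consistency_loss(student, teacher, x):
    # Student predictions are pulled toward the frozen teacher's output
    # distribution, discouraging drift onto brittle in-domain features.
    with torch.no_grad():
        t_probs = teacher(x).softmax(dim=1)
    return F.kl_div(student(x).log_softmax(dim=1), t_probs,
                    reduction="batchmean")

# The teacher is simply a frozen copy of the student's initial weights:
# teacher = copy.deepcopy(student).eval()
# for p in teacher.parameters():
#     p.requires_grad_(False)
```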
In this paper, we aim to adapt a pre-trained convolutional neural network to domain shifts at test time. We do so continually, with the incoming stream of test batches, without labels. The existing literature is mostly based on artificial shifts obtained via adversarial perturbations of test images. Motivated by this, we evaluate the state of the art on two realistic and challenging sources of domain shift, namely contextual and semantic shifts. Contextual shifts correspond to the environment type; for example, a model pre-trained on indoor contexts has to adapt to outdoor contexts on CORe-50 [7]. Semantic shifts correspond to the capture type; for example, a model pre-trained on natural images has to adapt to cliparts, sketches, and paintings on DomainNet [10]. We include in our analysis recent techniques such as Prediction-Time Batch Normalization (BN) [8], Test Entropy Minimization (TENT) [16], and Continual Test-Time Adaptation (CoTTA) [17]. Our findings are three-fold: i) test-time adaptation methods perform better and forget less on contextual shifts compared to semantic shifts, ii) TENT outperforms other methods on short-term adaptation, whereas CoTTA outperforms other methods on long-term adaptation, and iii) BN is the most reliable and robust.
Models should be able to adapt to unseen data during test time to avoid performance drops caused by inevitable distribution shifts in real-world deployment scenarios. In this work, we tackle the practical yet challenging test-time adaptation (TTA) problem, where a model adapts to the target domain without accessing the source data. We propose a simple recipe called \textit{Data-efficient Prompt Tuning} (DePT) with two key ingredients. First, DePT plugs visual prompts into the vision Transformer and only tunes these source-initialized prompts during adaptation. We find that such parameter-efficient finetuning can efficiently adapt the model representation to the target domain without overfitting to the noise in the learning objective. Second, DePT bootstraps the source representation to the target domain via memory-bank-based online pseudo-labeling. A hierarchical self-supervised regularization specially designed for prompts is jointly optimized to alleviate error accumulation during self-training. With much fewer tunable parameters, DePT demonstrates not only state-of-the-art performance on the major adaptation benchmarks VisDA-C, ImageNet-C, and DomainNet-126, but also superior data efficiency, i.e., adaptation with only 1\% or 10\% of the data without much performance degradation compared to 100\%. In addition, DePT extends readily to online and multi-source TTA settings.
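The first ingredient can be pictured as follows: a handful of learnable prompt tokens are prepended to the Transformer's token sequence and are the only parameters tuned during adaptation. This is a generic visual-prompt-tuning sketch, not DePT's exact architecture.

```python
import torch
import torch.nn as nn

class PromptedViTInput(nn.Module):
    # Prepends a small set of learnable prompt tokens to the patch-token
    # sequence of a vision Transformer; only these prompts are tuned.
    def __init__(self, num_prompts=8, dim=768):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)

    def forward(self, tokens):             # tokens: (B, N, dim)
        b = tokens.size(0)
        return torch.cat([self.prompts.expand(b, -1, -1), tokens], dim=1)
```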
When facing changing environments in the real world, the lightweight model on client devices suffers from severe performance drops under distribution shifts. The main limitations of existing device models are (1) the inability to update due to the computation limits of the device, and (2) the limited generalization ability of the lightweight model. Meanwhile, recent large models have shown strong generalization capability on the cloud, yet they cannot be deployed on client devices due to their computation cost. To enable the device model to deal with changing environments, we propose a new learning paradigm of Cloud-Device Collaborative Continual Adaptation, which encourages collaboration between cloud and device and improves the generalization of the device model. Based on this paradigm, we further propose an Uncertainty-based Visual Prompt Adapted (U-VPA) teacher-student model to transfer the generalization capability of the large model on the cloud to the device model. Specifically, we first design Uncertainty Guided Sampling (UGS) to continuously screen out challenging data and transmit the most out-of-distribution samples from the device to the cloud. Then we propose a Visual Prompt Learning Strategy with Uncertainty-guided updating (VPLU) to specifically deal with the selected samples that exhibit larger distribution shifts. We transmit the visual prompts to the device and concatenate them with the incoming data to pull the device testing distribution closer to the cloud training distribution. We conduct extensive experiments on two object detection datasets with continually changing environments. Our proposed U-VPA teacher-student framework outperforms previous state-of-the-art test-time adaptation and device-cloud collaboration methods. The code and datasets will be released.
Vision Transformers (ViT) have become increasingly popular in image processing. Specifically, we study the effectiveness of test-time adaptation (TTA) on ViT, a technique that has emerged to correct a model's predictions on its own during test time. First, we benchmark various test-time adaptation methods on ViT-B16 and ViT-L16. The results show that TTA is effective on ViT when an appropriate loss function is used, and that prior conventions (carefully choosing which parameters to modulate) are unnecessary. Based on this observation, we propose a new test-time adaptation method called Class-Conditional Feature Alignment (CFA), which minimizes both the class-conditional distribution differences and the whole distribution differences of the hidden representations between the source and target in an online manner. Experiments on image classification tasks (CIFAR-10-C, CIFAR-100-C, and ImageNet-C) and domain adaptation (digits datasets and ImageNet-Sketch) show that CFA stably outperforms existing baselines across various datasets. We also verify that CFA is model-agnostic by experimenting on ResNet, MLP-Mixer, and several ViT variants (ViT-AugReg, DeiT, and BEiT). Using a BEiT backbone, CFA achieves a 19.8% top-1 error rate on ImageNet-C, outperforming the existing test-time adaptation baseline of 44.0%. This is a state-of-the-art result among TTA methods that do not require altering the training phase.
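A simplified first-moment version of class-conditional feature alignment: match the per-class means (grouped by the model's predicted labels) and the overall mean of hidden features to stored source statistics. CFA itself aligns richer distribution statistics; this sketch only conveys the structure.

```python
import torch

def cfa_style_loss(features, pseudo_labels, src_class_means, src_mean):
    # Align the overall feature mean and each class-conditional mean
    # (grouped by the model's own predictions) with source statistics.
    loss = ((features.mean(dim=0) - src_mean) ** 2).sum()
    for c in pseudo_labels.unique():
        f_c = features[pseudo_labels == c].mean(dim=0)
        loss = loss + ((f_c - src_class_means[c]) ** 2).sum()
    return loss
```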
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo-labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment, so that the student generalizes better than the teacher.
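Schematically, the iterative loop reads as below; the injected callables (`predict`, `train_with_noise`, `make_student`) stand in for full training pipelines and are hypothetical placeholders, not the paper's API.

```python
from typing import Callable, List, Tuple

def noisy_student(teacher,
                  labeled: List[Tuple[object, int]],
                  unlabeled: List[object],
                  make_student: Callable[[], object],
                  predict: Callable[[object, object], int],
                  train_with_noise: Callable[[object, List[Tuple[object, int]]], None],
                  rounds: int = 3):
    # Schematic Noisy Student loop: the teacher pseudo-labels the unlabeled
    # set; an equal-or-larger student is trained with noise (dropout,
    # stochastic depth, RandAugment) on labeled + pseudo-labeled data,
    # then replaces the teacher for the next round.
    for _ in range(rounds):
        pseudo = [(x, predict(teacher, x)) for x in unlabeled]
        student = make_student()
        train_with_noise(student, labeled + pseudo)
        teacher = student
    return teacher
```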
In this paper, we propose Test-Time Training, a general approach for improving the performance of predictive models when training and test data come from different distributions. We turn a single unlabeled test sample into a self-supervised learning problem, on which we update the model parameters before making a prediction. This also extends naturally to data in an online stream. Our simple approach leads to improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts.
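A minimal sketch of this recipe with rotation prediction as the self-supervised task (one common choice for Test-Time Training): update the shared feature extractor on the auxiliary task, then predict.

```python
import torch
import torch.nn.functional as F

def test_time_train_step(feature_extractor, ssl_head, main_head, x, optimizer):
    # Turn the unlabeled test input into a self-supervised problem
    # (4-way rotation prediction) and update the shared feature
    # extractor before making the main-task prediction.
    rots = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4, device=x.device).repeat_interleave(x.size(0))
    loss = F.cross_entropy(ssl_head(feature_extractor(rots)), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        return main_head(feature_extractor(x))   # prediction after update
```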
Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems caused by distribution discrepancies across different domains. Existing UDA approaches depend heavily on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission costs, and computation burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which perform knowledge transfer from a pre-trained source model to the unlabeled target domain with source data inaccessible. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the different learning strategies they use. We also investigate the challenges of methods in each subcategory, discuss the advantages/disadvantages of white-box and black-box SFUDA methods, summarize the commonly used benchmark datasets, and review the popular techniques for improving the generalizability of models learned without using source data. We finally discuss several promising future directions in this field.
A fundamental requirement for deployed ML models is to generalize to data drawn from testing distributions that differ from training. A popular solution to this problem is to adapt a pre-trained model to novel domains using only unlabeled data. In this paper, we focus on a challenging variant of this problem, where access to the original source data is restricted. While fully test-time adaptation (FTTA) and unsupervised domain adaptation (UDA) are closely related, advances in UDA are not readily applicable to TTA, since most UDA methods require access to the source data. Hence, we propose a new approach, CATTAn, that bridges UDA and FTTA by relaxing the need to access the entire source data, through a novel deep subspace alignment strategy. With the minimal overhead of storing the subspace basis set for the source data, CATTAn enables unsupervised alignment between source and target data during adaptation. Through extensive experimental evaluation on multiple 2D and 3D vision benchmarks (ImageNet-C, Office-31, OfficeHome, DomainNet, PointDA-10) and model architectures, we demonstrate significant gains in FTTA performance. Furthermore, we make a number of crucial findings on the utility of the alignment objective even with inherently robust models, pre-trained ViT representations, and low sample availability in the target domain.
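The alignment can be pictured as a projection onto a stored source subspace basis; the penalty below on feature energy outside the span is our simplified stand-in for the paper's deep subspace alignment, under the stated assumption that only a basis matrix is kept from the source side.

```python
import torch

def subspace_align_loss(target_feats, src_basis):
    # src_basis: (d, k) orthonormal basis of the source feature subspace,
    # the only source-side information that needs to be stored. Penalize
    # the energy of target features that falls outside this subspace.
    proj = target_feats @ src_basis @ src_basis.T     # reconstruction in span
    return ((target_feats - proj) ** 2).sum(dim=1).mean()

# The basis can be computed offline from source features, e.g. via the
# top-k right singular vectors of the source feature matrix:
# _, _, vh = torch.linalg.svd(src_feats, full_matrices=False)
# src_basis = vh[:k].T
```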
Test-time adaptation (TTA) refers to adapting neural networks to distribution shifts, with access to only the unlabeled test samples from the new domain at test time. Prior TTA methods optimize over unsupervised objectives such as the entropy of model predictions, as in TENT [Wang et al., 2021], but it is unclear what exactly makes a good TTA loss. In this paper, we start by presenting a surprising phenomenon: if we attempt to meta-learn the best possible TTA loss over a wide class of functions, we recover a function that is remarkably similar to (a temperature-scaled version of) the softmax-entropy employed by TENT. This holds, however, only when the classifier being adapted was trained via cross-entropy; if it was trained via squared loss, a different best TTA loss emerges. To explain this phenomenon, we analyze TTA through the lens of the convex conjugate of the training loss. We show that under natural conditions, this (unsupervised) conjugate function can be viewed as a good local approximation to the original supervised loss, and indeed it recovers the best losses found by meta-learning. This leads to a generic recipe that can be used to find a good TTA loss for any given supervised training loss function of a general class. Empirically, our approach consistently dominates other baselines over a wide range of benchmarks. Our approach is particularly of interest when applied to classifiers trained with novel loss functions, e.g., the recently proposed PolyLoss, where it differs substantially from (and outperforms) an entropy-based loss. Further, we show that our approach can also be interpreted as a kind of self-training with very specific soft labels, which we refer to as conjugate pseudo-labels. Overall, our method provides a broad framework for better understanding and improving test-time adaptation. Code is available at https://github.com/locuslab/tta_conjugate.
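For the cross-entropy case, the recipe is compact enough to write down: the conjugate pseudo-labels are temperature-scaled softmax outputs used as fixed soft targets, which recovers a (temperature-scaled) softmax-entropy-style objective. The function name and interface below are our own sketch.

```python
import torch

def conjugate_pseudo_label_loss(logits, temperature=1.0):
    # Conjugate pseudo-labels for a cross-entropy-trained classifier:
    # soft targets p = softmax(logits / T), treated as fixed labels.
    p = (logits / temperature).softmax(dim=1).detach()
    return -(p * logits.log_softmax(dim=1)).sum(dim=1).mean()
```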
In test-time adaptation (TTA), given a model trained on some source data, the goal is to adapt it to make better predictions for test instances from a different distribution. Crucially, TTA assumes no access to the source data, or even to any additional labeled/unlabeled samples from the target distribution, to finetune the source model. In this work, we consider TTA in a more pragmatic setting, which we refer to as SITA (Single Image Test-time Adaptation). Here, when making each prediction, the model has access only to the given single test instance, rather than a batch of instances, as has typically been considered in the literature. This is motivated by realistic scenarios where inference is needed on demand and may not be delayed to "batch-ify" incoming requests, or where inference happens on an edge device (such as a mobile phone) with no scope for batching. The entire adaptation process in SITA should be extremely fast, as it happens at inference time. To address this, we propose a novel approach, AugBN, for the SITA setting that requires only forward propagation. The approach can adapt any off-the-shelf trained model to individual test instances for both classification and segmentation tasks. AugBN estimates the normalization statistics of the unseen test distribution from the given test image using only one forward pass with label-preserving transformations. Since AugBN does not involve any back-propagation, it is significantly faster than other recent methods. To the best of our knowledge, this is the first work to address this hard adaptation problem using only a single test image. Despite being very simple, our framework achieves significant performance gains compared to directly applying the source model on the target instances, as reflected in our extensive experiments and ablation studies.
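A forward-only sketch in the spirit of AugBN: build label-preserving augmented views of the single test image and let the BatchNorm layers blend the views' batch statistics into the stored source statistics before predicting. The mixing-by-momentum trick here is our assumption, not necessarily the paper's exact estimator.

```python
import torch

@torch.no_grad()
def augbn_predict(model, x, augment, n_views=16, mix=0.1):
    # Forward-only adaptation for a single test image x of shape (1, C, H, W):
    # a batch of label-preserving augmented views drives the BatchNorm
    # running statistics toward the test distribution.
    views = torch.cat([augment(x) for _ in range(n_views)])
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.momentum = mix          # weight given to the views' statistics
    model.train()                     # BN updates running stats in forward
    model(views)
    model.eval()                      # predict with adapted running stats
    return model(x)
```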
We propose a framework for learning calibrated uncertainties under domain shift. We consider the case where the source (training) distribution differs from the target (test) distribution. We detect such domain shifts through the use of a binary domain classifier, which is integrated with the task network and trained jointly end-to-end. The binary domain classifier yields a density ratio that reflects the closeness of a target (test) sample to the source (training) distribution. We employ it to adjust the uncertainty of the predictions made by the task network. This idea of using the density ratio is based on the distributionally robust learning (DRL) framework, which accounts for domain shift through adversarial risk minimization. We show that our method generates calibrated uncertainties that benefit many downstream tasks, such as unsupervised domain adaptation (UDA) and semi-supervised learning (SSL). In these tasks, methods like self-training and FixMatch use uncertainties to select confident pseudo-labels for retraining. Our experiments show that the introduction of DRL leads to significant improvements in cross-domain performance. We also demonstrate that the estimated density ratios agree with human selection frequencies, suggesting a positive correlation with a proxy of human perceived uncertainty.
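The density-ratio mechanics can be made concrete as follows. Under balanced domain priors, a binary domain classifier's output converts to a source/target density ratio, which can then temper the task prediction toward uniform for samples far from the source distribution; the blending rule below is illustrative, not the paper's exact formula.

```python
import torch

def density_ratio(domain_logit):
    # With balanced priors, P(source | z) from a binary domain classifier
    # gives the ratio p_source(z) / p_target(z) = P(src|z) / (1 - P(src|z)).
    p_src = torch.sigmoid(domain_logit)
    return p_src / (1.0 - p_src).clamp_min(1e-6)

def calibrated_probs(task_logits, domain_logit):
    # Illustrative use: shrink predictions toward uniform when the density
    # ratio says the sample is far from the source distribution.
    r = density_ratio(domain_logit).clamp(max=1.0).unsqueeze(-1)  # (B, 1)
    probs = task_logits.softmax(dim=-1)                           # (B, K)
    return r * probs + (1.0 - r) / probs.size(-1)
```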
Despite their recent success, deep neural networks continue to perform poorly when they encounter distribution shifts at test time. Many recently proposed approaches try to counter this by aligning the model to the new distribution prior to inference. With no labels available, this requires unsupervised objectives to adapt the model on the observed test data. In this paper, we propose Test-Time Self-Training (TeST): a technique that takes as input a model trained on some source data and a novel data distribution at test time, and learns invariant and robust representations using a student-teacher framework. We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms. TeST achieves competitive performance relative to modern domain adaptation algorithms, while having access to 5-10x less data at the time of adaptation. We thoroughly evaluate a variety of baselines on two tasks, object detection and image segmentation, and find that TeST sets a new state of the art for test-time domain adaptation algorithms.
The problem with fully supervised classification is that it requires a tremendous amount of annotated data; however, in many datasets a large portion of the data is unlabeled. To alleviate this problem, semi-supervised learning (SSL) leverages the knowledge of a classifier on a labeled domain and extrapolates it to an unlabeled domain that has a distribution similar to the annotated data. The recent success of SSL methods critically hinges on thresholded pseudo-labeling and, thereby, consistency regularization on the unlabeled domain. However, existing methods do not incorporate into training the uncertainty of the pseudo-labels or of the unlabeled samples, which arises from noisy labels or from out-of-distribution samples produced by strong augmentations. Inspired by recent developments in SSL, our goal in this paper is to propose a novel unsupervised uncertainty-aware objective that relies on aleatoric and epistemic uncertainty quantification. With the proposed uncertainty-aware loss function, our approach outperforms standard SSL benchmarks and matches, or comes close to, the state of the art while being computationally lightweight. Our results outperform the state of the art on complex datasets such as CIFAR-100 and Mini-ImageNet.
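One way to realize such an objective (our illustrative construction, not the paper's exact loss): estimate epistemic uncertainty from the variance across several stochastic forward passes and down-weight uncertain pseudo-labels in the consistency loss.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_ssl_loss(weak_logits_mc, strong_logits, threshold=0.95):
    # weak_logits_mc: (T, B, K) logits from T stochastic forward passes
    # (e.g., MC-dropout) on weakly augmented inputs. Epistemic uncertainty
    # is approximated by the variance across passes; confident, low-variance
    # pseudo-labels receive higher weight.
    probs = weak_logits_mc.softmax(dim=-1)            # (T, B, K)
    mean_p = probs.mean(dim=0)                        # predictive mean
    epistemic = probs.var(dim=0).sum(dim=-1)          # (B,) variance across passes
    conf, pseudo = mean_p.max(dim=-1)
    w = (conf >= threshold).float() * torch.exp(-epistemic)
    loss = F.cross_entropy(strong_logits, pseudo, reduction="none")
    return (w * loss).mean()
```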