Domain adversarial training is ubiquitous for achieving invariant representations and is widely used for various domain adaptation tasks. Recently, methods that converge to smooth optima have been shown to improve generalization for supervised learning tasks such as classification. In this work, we analyze the effect of smoothness-enhancing formulations on domain adversarial training, whose objective is a combination of a task loss (e.g., classification, regression) and an adversarial term. We find that converging to smooth minima with respect to (w.r.t.) the task loss stabilizes adversarial training, leading to better performance on the target domain. In contrast to the task loss, our analysis shows that converging to smooth minima w.r.t. the adversarial loss leads to sub-optimal generalization on the target domain. Based on this analysis, we introduce the Smooth Domain Adversarial Training (SDAT) procedure, which effectively enhances the performance of existing domain adversarial methods for both classification and object-detection tasks. Our analysis also provides insight into the community's extensive use of Adam for domain adversarial training.
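The asymmetry is easy to express in code. Below is a minimal PyTorch-style sketch under our own simplifying assumptions: a feature extractor `g`, classifier `h`, and discriminator `d` (assumed to wrap a gradient-reversal adversarial loss), with the SAM-style perturbation applied only while computing the task-loss gradient. These names and the update structure are illustrative, not the authors' released code.

```python
import torch

def sdat_generator_step(g, h, d, opt, x_s, y_s, x_t,
                        task_loss, adv_loss, rho=0.02):
    opt.zero_grad()
    params = list(g.parameters()) + list(h.parameters())

    # (1) SAM ascent: perturb weights toward the task loss's local worst case.
    task_loss(h(g(x_s)), y_s).backward()
    with torch.no_grad():
        norm = torch.norm(torch.stack(
            [p.grad.norm() for p in params if p.grad is not None]))
        eps = {p: rho * p.grad / (norm + 1e-12)
               for p in params if p.grad is not None}
        for p, e in eps.items():
            p.add_(e)
    opt.zero_grad()

    # (2) Task gradient taken at the perturbed point: smooth w.r.t. task loss only.
    task_loss(h(g(x_s)), y_s).backward()
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)  # restore original weights; gradients are kept

    # (3) Adversarial term at the ORIGINAL weights: deliberately un-smoothed.
    adv_loss(d, g(x_s), g(x_t)).backward()
    opt.step()
    opt.zero_grad()
```

The discriminator's own update is omitted; the sketch only illustrates the generator-side asymmetry the analysis argues for.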
Real-world datasets exhibit imbalances of varying types and degrees. Several techniques based on re-weighting and margin adjustment of loss are often used to enhance the performance of neural networks, particularly on minority classes. In this work, we analyze the class-imbalanced learning problem by examining the loss landscape of neural networks trained with re-weighting and margin-based techniques. Specifically, we examine the spectral density of the Hessian of the class-wise loss, through which we observe that the network weights converge to a saddle point in the loss landscapes of minority classes. Following this observation, we also find that optimization methods designed to escape from saddle points can be effectively used to improve generalization on minority classes. We further theoretically and empirically demonstrate that Sharpness-Aware Minimization (SAM), a recent technique that encourages convergence to flat minima, can be effectively used to escape saddle points for minority classes. Using SAM results in a 6.2% increase in accuracy on the minority classes over the state-of-the-art Vector Scaling Loss, leading to an overall average increase of 4% across imbalanced datasets. The code is available at: https://github.com/val-iisc/Saddle-LongTail.
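As a concrete illustration of the recipe (not the paper's exact code), one might pair a standard class-balanced re-weighting with a SAM-style optimizer. The class counts, `beta`, and helper below are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def class_balanced_weights(counts, beta=0.9999):
    # "Effective number" re-weighting, one common choice for long-tailed data.
    effective_num = 1.0 - torch.pow(torch.tensor(beta), counts.float())
    weights = (1.0 - beta) / effective_num
    return weights / weights.sum() * len(counts)

counts = torch.tensor([5000, 500, 50])   # hypothetical long-tailed class counts
weights = class_balanced_weights(counts)

def reweighted_loss(logits, targets):
    return F.cross_entropy(logits, targets, weight=weights)

# The point of the paper: minimize such a loss with a SAM-style two-step
# optimizer (see the SAM sketch later in this section) rather than plain
# SGD, so saddle points in the minority-class loss landscape are escaped.
```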
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations while preserving the task-discriminative knowledge gathered from the labeled source data. However, the requirement of simultaneous access to labeled source and unlabeled target data renders them unsuitable for the source-free DA setting. The trivial solution of realizing an effective original-to-generic domain mapping improves transferability but degrades task discriminability. Upon analyzing the hurdles from both theoretical and empirical standpoints, we derive novel insights showing that a mixup between original samples and their corresponding translated generic samples enhances the discriminability-transferability trade-off while duly respecting the privacy-oriented source-free setting. A simple but effective realization of the proposed insights on top of existing source-free DA approaches yields state-of-the-art performance with faster convergence. Beyond single-source adaptation, we also outperform multi-source prior art on both classification and semantic-segmentation benchmarks.
The prime challenge in unsupervised domain adaptation (DA) is to mitigate the domain shift between the source and target domains. Prior DA works have shown that pretext tasks can be used to mitigate this domain shift by learning domain-invariant representations. However, in practice we find that most existing pretext tasks are ineffective against other established techniques. We therefore theoretically analyze how and when a subsidiary pretext task can be leveraged to assist the goal task of a given DA problem, and formulate objective subsidiary-task suitability criteria. Based on these criteria, we devise a novel sticker-intervention process and cast sticker classification as a supervised subsidiary DA problem that runs concurrently with the unsupervised DA of the goal task. Our approach not only improves goal-task adaptation performance but also facilitates privacy-oriented source-free DA, i.e., without concurrent source-target access. Experiments on the standard Office-31, Office-Home, DomainNet, and VisDA benchmarks demonstrate our superiority for both single-source and multi-source source-free DA. Our approach also complements existing source-free works, achieving leading performance.
Domain adaptation (DA) attempts to transfer knowledge from a labeled source domain to an unlabeled target domain whose distribution differs from the source. To this end, DA methods include a source classification objective, to extract source knowledge, and a domain-alignment objective, to reduce the domain shift and ensure knowledge transfer. Typically, prior DA methods adopt weight hyper-parameters to combine the training objectives linearly into an overall objective. However, due to domain shift, the gradient directions of these objectives may conflict with one another. In such cases, the linear optimization scheme might decrease the overall objective value at the expense of damaging one of the training objectives, leading to restricted solutions. In this paper, we rethink the optimization scheme for DA from a gradient-based perspective. We propose a Pareto Domain Adaptation (ParetoDA) approach to control the overall optimization direction, aiming to optimize all training objectives cooperatively. Specifically, to reach a desirable solution on the target domain, we design a surrogate loss that mimics target classification. To improve target-prediction accuracy and support this mimicking, we propose a target-prediction refinement mechanism that exploits domain labels via Bayes' theorem. On the other hand, since prior knowledge of a weighting scheme for the objectives is generally unavailable to guide optimization toward the optimal solution on the target domain, we propose a dynamic preference mechanism that dynamically guides our cooperative optimization via the gradient of the surrogate loss on held-out unlabeled target data. Extensive experiments on image-classification and semantic-segmentation benchmarks demonstrate the effectiveness of ParetoDA.
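To make "controlling the overall optimization direction" concrete, here is the textbook min-norm combination for the special case of two conflicting gradients (the MGDA-style building block; ParetoDA's actual dynamic preference mechanism is more elaborate than this sketch, and the variable names are placeholders):

```python
import torch

def min_norm_two(g1, g2):
    """Weight gamma in [0, 1] minimizing ||gamma*g1 + (1-gamma)*g2||^2."""
    diff = g2 - g1
    denom = diff.dot(diff).clamp(min=1e-12)
    return torch.clamp(diff.dot(g2) / denom, 0.0, 1.0)

# Usage: flatten per-objective gradients, combine, then update.
g_cls = torch.randn(1000)    # stand-in for the classification gradient
g_align = torch.randn(1000)  # stand-in for the alignment gradient
gamma = min_norm_two(g_cls, g_align)
direction = gamma * g_cls + (1 - gamma) * g_align  # common descent direction
```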
Adversarial learning strategies have demonstrated remarkable performance in dealing with single-source domain adaptation (DA) problems, and have recently been applied to multi-source DA (MDA) problems. Although most existing MDA strategies rely on a multiple-domain-discriminator setting, the effect of this setting on the latent-space representation has been poorly understood. Here, we adopt an information-theoretic approach to identify and resolve the potential adverse effects of multiple domain discriminators in MDA: disintegration of domain-discriminative information, limited computational scalability, and a large variance in the gradient of the loss during training. We examine the above issues by situating adversarial DA in the context of information regularization. This also provides a theoretical justification for using a single, unified domain discriminator. Based on this idea, we implement a novel neural architecture called a Multi-source Information-regularized Adaptation Network (MIAN). Large-scale experiments show that MIAN, despite its structural simplicity, reliably and significantly outperforms other state-of-the-art methods.
This paper addresses the problem of unsupervised domain adaptation from theoretical and algorithmic perspectives. Existing domain adaptation theories naturally imply minimax optimization algorithms, which connect well with the domain adaptation methods based on adversarial learning. However, several disconnections still exist and form the gap between theory and algorithm. We extend previous theories (Mansour et al., 2009c; Ben-David et al., 2010) to multiclass classification in domain adaptation, where classifiers based on scoring functions and margin loss are standard choices in algorithm design. We introduce Margin Disparity Discrepancy, a novel measurement with rigorous generalization bounds, tailored to the distribution comparison with the asymmetric margin loss, and to the minimax optimization for easier training. Our theory can be seamlessly transformed into an adversarial learning algorithm for domain adaptation, successfully bridging the gap between theory and algorithm. A series of empirical studies show that our algorithm achieves state-of-the-art accuracies on challenging domain adaptation tasks.
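As a pointer to the central quantity, the discrepancy can be written roughly as follows (reproduced from memory in the paper's notation, so consult the original for the precise definitions of the margin loss and the margin):

\[
  d^{(\rho)}_{f,\mathcal{F}}(S, T)
  \;=\; \sup_{f' \in \mathcal{F}}
  \Big( \mathrm{disp}^{(\rho)}_{T}(f', f) - \mathrm{disp}^{(\rho)}_{S}(f', f) \Big),
\]

where $\mathrm{disp}^{(\rho)}_{D}(f', f)$ denotes the expected margin loss of $f'$ measured against the labels predicted by $f$ on distribution $D$; the supremum is what the adversarial classifier approximates during training.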
Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains. However, there is still enormous potential left to reach fully supervised performance. In this paper, we present a novel active-learning strategy to assist knowledge transfer in the target domain, dubbed active domain adaptation. We start from the observation that energy-based models exhibit a free-energy bias when the training (source) and test (target) data come from different distributions. Inspired by this inherent mechanism, we empirically show that a simple yet efficient energy-based sampling strategy selects more valuable target samples than existing approaches that require specific architectures or distance computations. Our algorithm, Energy-based Active Domain Adaptation (EADA), queries groups of target data that incorporate both domain characteristics and instance uncertainty in each selection round. Meanwhile, by compactly aligning the free energy of target data around the source domain via a regularization term, the domain gap can be implicitly reduced. Through extensive experiments, we show that EADA surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world. Code is available at https://github.com/bit-da/eada.
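The free-energy score itself is one line. Below is a hedged sketch of how such a score could drive selection; the temperature and budget are illustrative, and EADA's full criterion is group-based and also folds in instance uncertainty:

```python
import torch

def free_energy(logits, temperature=1.0):
    # F(x) = -T * logsumexp(f(x) / T); higher values suggest the sample
    # lies further from the source distribution under an energy-based
    # reading of the classifier.
    return -temperature * torch.logsumexp(logits / temperature, dim=1)

def select_for_annotation(target_logits, budget):
    # Query the target samples with the highest free energy first.
    energy = free_energy(target_logits)
    return torch.topk(energy, k=budget).indices
```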
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a well-labeled source domain to a different but related unlabeled target domain with identical label space. Currently, the main workhorse for solving UDA is domain alignment, which has proven successful. However, it is often difficult to find an appropriate source domain with identical label space. A more practical scenario is so-called partial domain adaptation (PDA) in which the source label set or space subsumes the target one. Unfortunately, in PDA, due to the existence of the irrelevant categories in the source domain, it is quite hard to obtain a perfect alignment, thus resulting in mode collapse and negative transfer. Although several efforts have been made by down-weighting the irrelevant source categories, the strategies used tend to be burdensome and risky since it is unknown exactly which categories are irrelevant. These challenges motivate us to find a relatively simpler alternative to solve PDA. To achieve this, we first provide a thorough theoretical analysis, which illustrates that the target risk is bounded by both model smoothness and between-domain discrepancy. Considering the difficulty of perfect alignment in solving PDA, we turn to focus on the model smoothness while discarding the riskier domain alignment to enhance the adaptability of the model. Specifically, we instantiate the model smoothness as a quite simple intra-domain structure preserving (IDSP). To the best of our knowledge, this is the first naive attempt to address the PDA without domain alignment. Finally, our empirical results on multiple benchmark datasets demonstrate that IDSP is not only superior to the PDA SOTAs by a significant margin on some benchmarks (e.g., +10% on Cl->Rw and +8% on Ar->Rw), but also complementary to domain alignment in standard UDA.
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different, well-labeled source domain to a new unlabeled target domain. Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and cannot be shared due to privacy concerns. This paper aims to tackle a realistic setting where only a trained classification model is available, rather than access to the source data. To effectively utilize the source model for adaptation, we propose a novel approach called Source HypOthesis Transfer (SHOT), which learns the target domain's feature-extraction module by fitting the target data features to the frozen source classification module (which represents the classification hypothesis). Specifically, SHOT exploits information maximization and self-supervised learning for the feature-extraction module, to ensure that the target features are implicitly aligned with the features of unseen source data via the same hypothesis. Furthermore, we propose a new labeling-transfer strategy, which separates the target data into two splits based on the confidence of predictions (labeling information), and then employs semi-supervised learning to improve the accuracy of the less confident predictions in the target domain. We denote labeling transfer as SHOT++ when the predictions are obtained by SHOT. Extensive experiments on both digit classification and object recognition tasks show that SHOT and SHOT++ achieve results surpassing or comparable to the state of the art, demonstrating the effectiveness of our approach for a variety of visual domain adaptation problems. Code is available at https://github.com/tim-learn/shot-plus.
Sharpness-Aware Minimization (SAM) is a recent training method that relies on worst-case weight perturbations and significantly improves generalization in various settings. We argue that the existing justifications for the success of SAM, which are based on a PAC-Bayes generalization bound and the idea of convergence to flat minima, are incomplete. Moreover, there is no explanation for the success of using $m$-sharpness in SAM, which has been shown to be critical for generalization. To better understand this aspect of SAM, we theoretically analyze its implicit bias for diagonal linear networks. We prove that SAM always chooses a solution that enjoys better generalization properties than standard gradient descent for a certain class of problems, and that this effect is amplified by using $m$-sharpness. We further study the properties of this implicit bias on non-linear networks empirically, where we show that fine-tuning a standard model with SAM can lead to significant generalization improvements. Finally, we provide convergence results for SAM on non-convex objectives when used with stochastic gradients. We illustrate these results empirically for deep networks and discuss their relation to the generalization behavior of SAM. The code of our experiments is available at https://github.com/tml-epfl/understanding-sam.
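For intuition about $m$-sharpness: the worst-case perturbation is computed per micro-batch of size $m$, and the resulting gradients are averaged, rather than sharing one perturbation across the full batch. A rough sketch, assuming every parameter receives a gradient on each micro-batch (names and structure are our own, not the paper's code):

```python
import torch

def m_sharpness_grad(model, loss_fn, x, y, m, rho=0.05):
    params = list(model.parameters())
    avg = [torch.zeros_like(p) for p in params]
    chunks = list(zip(x.split(m), y.split(m)))
    for xb, yb in chunks:
        model.zero_grad()
        loss_fn(model(xb), yb).backward()
        with torch.no_grad():
            norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
            eps = [rho * p.grad / (norm + 1e-12) for p in params]
            for p, e in zip(params, eps):
                p.add_(e)          # per-micro-batch ascent step
        model.zero_grad()
        loss_fn(model(xb), yb).backward()
        with torch.no_grad():
            for p, e in zip(params, eps):
                p.sub_(e)          # restore the original weights
            for a, p in zip(avg, params):
                a += p.grad / len(chunks)
    return avg  # average of per-micro-batch SAM gradients
```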
Batch normalization (BN), widely used in modern neural networks, has been shown to encode domain-related knowledge and is therefore ineffective for cross-domain tasks such as unsupervised domain adaptation (UDA). Existing BN-variant methods aggregate source- and target-domain knowledge in the same channel of the normalization module. However, the misalignment between the features of corresponding channels across domains often leads to sub-optimal transferability. In this paper, we exploit the cross-domain relation and propose a novel normalization method, Reciprocal Normalization (RN). Specifically, RN first presents a Reciprocal Compensation (RC) module to acquire a compensatory component for each channel in both domains, based on cross-domain channel-wise correlations. RN then develops a Reciprocal Aggregation (RA) module to adaptively aggregate each feature with its cross-domain compensatory components. As an alternative to BN, RN is better suited to UDA problems and can be easily integrated into popular domain adaptation methods. Experiments show that the proposed RN outperforms existing normalization counterparts by a large margin and helps state-of-the-art adaptation methods achieve better results. The source code is available at https://github.com/openning07/reciprocal-normalization-for-da.
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require access to the source data when learning to adapt the model, making them risky and inefficient for decentralized private data. This work tackles a practical setting where only a trained source model is available and investigates how we can effectively utilize such a model without source data to solve UDA problems. We propose a simple yet generic representation learning framework, named Source HypOthesis Transfer (SHOT). SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domains to the source hypothesis. To verify its versatility, we evaluate SHOT in a variety of adaptation cases including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results among multiple domain adaptation benchmarks.
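The information-maximization part has a compact form: per-sample prediction entropy is minimized while the entropy of the batch-mean prediction is maximized. A minimal sketch (the smoothing constant is an illustrative choice; the authors' released code, linked in the companion abstract above, gives the exact recipe):

```python
import torch
import torch.nn.functional as F

def information_maximization_loss(target_logits):
    p = F.softmax(target_logits, dim=1)
    # Entropy term: push each target prediction to be confident.
    ent = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()
    # Diversity term: adding the negative entropy of the batch-mean
    # prediction maximizes its entropy, keeping predictions diverse.
    p_mean = p.mean(dim=0)
    div = (p_mean * torch.log(p_mean + 1e-8)).sum()
    return ent + div
```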
This work introduces the novel task of Source-free Multi-target Domain Adaptation and proposes an adaptation framework comprising \textbf{Co}nsistency with \textbf{N}uclear-Norm Maximization and \textbf{Mix}Up knowledge distillation (\textit{CoNMix}) as a solution to this problem. The main motive of this work is to solve Single and Multi-target Domain Adaptation (SMTDA) for the source-free paradigm, which enforces a constraint where the labeled source data is not available during target adaptation due to various privacy-related restrictions on data sharing. The source-free approach leverages target pseudo labels, which can be noisy, to improve the target adaptation. We introduce consistency between label-preserving augmentations and utilize pseudo label refinement methods to reduce noisy pseudo labels. Further, we propose novel MixUp Knowledge Distillation (MKD) for better generalization on multiple target domains using various source-free STDA models. We also show that the Vision Transformer (VT) backbone gives better feature representation with improved domain transferability and class discriminability. Our proposed framework achieves the state-of-the-art (SOTA) results in various paradigms of source-free STDA and MTDA settings on popular domain adaptation datasets like Office-Home, Office-Caltech, and DomainNet. Project Page: https://sites.google.com/view/conmix-vcl
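A hedged sketch of what a MixUp-style distillation term can look like: mixed target inputs are matched against the correspondingly mixed soft teacher predictions. The teacher ensemble, `alpha`, and temperature here are illustrative assumptions, not CoNMix's exact recipe:

```python
import torch
import torch.nn.functional as F

def mixup_kd_loss(student, teachers, x, alpha=0.3, tau=2.0):
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    with torch.no_grad():
        # Average the soft predictions of the source-free STDA teachers.
        t = torch.stack([F.softmax(T(x) / tau, dim=1) for T in teachers]).mean(0)
        t_mix = lam * t + (1.0 - lam) * t[perm]
    s = F.log_softmax(student(x_mix) / tau, dim=1)
    return F.kl_div(s, t_mix, reduction="batchmean") * tau * tau
```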
Domain adaptation (DA) aims to transfer knowledge learned from a labeled source domain to an unlabeled, or sparsely labeled but related, target domain. Ideally, the source and target distributions should be aligned to each other equally to achieve unbiased knowledge transfer. However, due to the significant imbalance between the amounts of annotated data in the source and target domains, usually only the target distribution is aligned to the source domain, adapting unnecessary source-specific knowledge to the target domain, i.e., biased domain adaptation. To address this issue, in this work we model the uncertainty of the discriminator in adversarial-based DA methods in order to optimize for unbiased transfer. We theoretically analyze the effectiveness of the proposed unbiased transferability learning method in DA. Furthermore, to alleviate the impact of imbalanced annotated data, we utilize the estimated uncertainty for pseudo-label selection on the unlabeled samples in the target domain, which helps achieve better alignment of the marginal and conditional distributions between the domains. Extensive experimental results on various DA benchmark datasets show that the proposed method can be easily incorporated into diverse adversarial-based DA methods, achieving state-of-the-art performance.
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Traditionally, subspace-based methods formed an important class of solutions to this problem. Despite their mathematical elegance and tractability, these methods are often found to be ineffective at producing domain-invariant features with complex, real-world datasets. Motivated by recent advances in representation learning with deep networks, this paper revisits subspace alignment for UDA and proposes a novel adaptation algorithm that consistently leads to improved generalization. In contrast to existing adversarial-training-based DA methods, our approach isolates the feature-learning and distribution-alignment steps and uses a primary-auxiliary optimization strategy to effectively balance the objectives of domain invariance and model fidelity. While offering a significant reduction in target-data and computational requirements, our subspace-based DA performs competitively with, and sometimes even outperforms, state-of-the-art approaches on several standard UDA benchmarks. Furthermore, subspace alignment leads to intrinsically well-regularized models that demonstrate strong generalization even in the challenging partial DA setting. Finally, the design of our UDA framework inherently supports progressive adaptation to new target domains at test time, without requiring the model to be retrained from scratch. In summary, powered by a strong feature learner and an effective optimization strategy, we establish subspace-based DA as an efficient approach for visual recognition.
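For reference, the classical linear subspace-alignment step looks like the sketch below (the paper operates on deep features and adds its own primary-auxiliary optimization, so take this only as the geometric core; `d` is an illustrative subspace dimension):

```python
import numpy as np

def subspace_align(Xs, Xt, d=50):
    """Classical subspace alignment: rows are samples, columns are features."""
    Xs_c, Xt_c = Xs - Xs.mean(0), Xt - Xt.mean(0)
    _, _, Vs = np.linalg.svd(Xs_c, full_matrices=False)
    _, _, Vt = np.linalg.svd(Xt_c, full_matrices=False)
    Ps, Pt = Vs[:d].T, Vt[:d].T       # top-d PCA bases (features x d)
    M = Ps.T @ Pt                     # alignment matrix between the two bases
    return Xs_c @ Ps @ M, Xt_c @ Pt   # aligned source and target coordinates
```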
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain in the presence of dataset shift. Most existing methods cannot handle domain alignment and class discrimination well together, which may distort the intrinsic data structure for downstream tasks (e.g., classification). To this end, we propose a novel geometry-aware model that learns transferability and discriminability simultaneously via nuclear-norm optimization. From the perspective of subspace geometry, we introduce domain coherence and class orthogonality for UDA. Domain coherence ensures the model has a greater capacity to learn separable representations, and class orthogonality minimizes the correlation between clusters to alleviate misalignment. The two are thus consistent and can benefit each other. In addition, we provide theoretical insight into the norm-based learning literature for UDA, which ensures the interpretability of our model. We show that the norms of the domains and of the clusters are expected to be larger and smaller, respectively, to enhance transferability and discriminability. Extensive experimental results on standard UDA datasets demonstrate the effectiveness of our theory and model.
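The central quantity is easy to compute: the nuclear norm (sum of singular values) of a batch prediction matrix, which jointly reflects prediction confidence and diversity. A minimal sketch of this core (the paper's specific coherence and orthogonality terms are more refined than this):

```python
import torch
import torch.nn.functional as F

def batch_nuclear_norm(logits):
    p = F.softmax(logits, dim=1)           # (batch x classes) prediction matrix
    return torch.linalg.svdvals(p).sum()   # sum of singular values

# Per the abstract's reading: the domain-level norm should be kept large
# (transferability) while cluster-level norms stay small (discriminability,
# i.e., low correlation between clusters).
```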
Model-agnostic meta-learning (MAML) is currently one of the dominant approaches to few-shot meta-learning. Despite its effectiveness, the optimization of MAML can be challenging due to the innate bilevel problem structure. Specifically, the loss landscape of MAML is much more complex, with possibly more saddle points and local minima, than its empirical-risk-minimization counterpart. To address this challenge, we leverage the recently invented sharpness-aware minimization and develop a sharpness-aware MAML approach that we call Sharp-MAML. We empirically demonstrate that Sharp-MAML and its computationally efficient variant can outperform popular existing MAML baselines (e.g., +12% accuracy on Mini-Imagenet). We complement the empirical study with a convergence-rate analysis and a generalization bound for Sharp-MAML. To the best of our knowledge, this is the first empirical and theoretical study of sharpness-aware minimization in the context of bilevel learning. The code is available at https://github.com/mominabbass/sharp-maml.
In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by prior work connecting the geometry of the loss landscape and generalization, we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. We open source our code at https://github.com/google-research/sam.
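The resulting procedure is a two-step update: an ascent step to the (approximate) worst-case point inside a $\rho$-ball, then a descent step using the gradient taken there. A minimal PyTorch-style sketch of one step under our own simplifications (no per-layer scaling; names are illustrative, not the released code):

```python
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    # (1) Gradient at the current weights w.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        norm = torch.norm(torch.stack([g.norm() for g in grads]))
        eps = {}
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (norm + 1e-12)  # approx. worst-case perturbation
            p.add_(e)
            eps[p] = e
    model.zero_grad()
    # (2) Gradient at the perturbed point w + eps drives the actual update.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)                          # return to w before stepping
    base_optimizer.step()
    model.zero_grad()
```

The cost is roughly two forward-backward passes per update, which is the price SAM pays for the sharpness estimate.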
By utilizing data from a fully labeled source domain, unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain through explicit discrepancy minimization of the data distributions or through adversarial learning. As an enhancement, category alignment is involved during adaptation to reinforce target feature discrimination by exploiting model predictions. However, there remain unexplored problems: pseudo-label inaccuracy incurred by incorrect category predictions on the target domain, and distribution deviation caused by overfitting on the source domain. In this paper, we propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions with a soft pseudo-label strategy and avoids overfitting on the source domain with a curriculum learning strategy. Theoretically, it successfully decreases the combined risk in the upper bound of the expected error on the target domain. In the first stage, we train a model with a distribution-alignment-based UDA method to obtain soft semantic labels on the target domain with rather high confidence. To avoid overfitting on the source domain, in the second stage we propose a curriculum learning strategy to adaptively control the weighting between the losses from the two domains, so that the focus of training gradually shifts from the source distribution to the target distribution as prediction confidence on the target domain rises. Extensive experiments on two well-known benchmark datasets validate the universal effectiveness of our framework in promoting the performance of top-ranked UDA algorithms and demonstrate its consistently superior performance.
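A toy sketch of the second stage's weighting follows; the linear schedule is an illustrative choice of our own, whereas the paper's control is adaptive and tied to target prediction confidence:

```python
def curriculum_weight(step, total_steps):
    # Source weight decays from 1 to 0 as training proceeds, shifting the
    # focus from the source distribution to the target distribution.
    return 1.0 - step / max(total_steps, 1)

def combined_loss(loss_source, loss_target_pseudo, step, total_steps):
    w = curriculum_weight(step, total_steps)
    return w * loss_source + (1.0 - w) * loss_target_pseudo
```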