Recent advances in self-supervised learning have reduced the gap between supervised and unsupervised representation learning. However, most self-supervised and deep clustering techniques rely heavily on data augmentation, rendering them ineffective for many learning tasks where sufficient domain knowledge for performing augmentation is not available. We propose a new self-distillation based algorithm for domain-agnostic clustering. Our method builds upon existing deep clustering frameworks and requires no separate student model. The proposed method outperforms existing domain-agnostic (augmentation-free) algorithms on CIFAR-10. We empirically demonstrate that knowledge distillation can improve unsupervised representation learning by extracting richer "dark knowledge" from the model than is carried by the predicted labels alone. Preliminary experiments also suggest that self-distillation improves the convergence of DeepCluster-V2.
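The "dark knowledge" point can be made concrete with a soft-target distillation loss. The PyTorch sketch below is only an illustration of the general idea, not the authors' implementation; the temperature value and the use of detached teacher logits from the same network are assumptions.

    import torch
    import torch.nn.functional as F

    def self_distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """KL divergence between softened cluster assignments.

        Soft targets retain the relative similarities between clusters
        ("dark knowledge") that hard pseudo-labels discard.
        """
        teacher_probs = F.softmax(teacher_logits.detach() / temperature, dim=1)
        student_log_probs = F.log_softmax(student_logits / temperature, dim=1)
        # Scale by T^2 so gradient magnitudes are comparable across temperatures.
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * temperature ** 2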
Convolutional neural networks (CNNs) are increasingly being used to automate the segmentation of brain structures in magnetic resonance (MR) images for research studies. In other applications, CNN models have been shown to exhibit bias against certain demographic groups when those groups are under-represented in the training set. In this work, we investigate whether CNN models for brain MR segmentation have the potential to contain sex or race bias when trained with imbalanced training sets. We train multiple instances of the FastSurfer model using different levels of sex imbalance among white subjects. We evaluate these models separately on white male and white female test sets to assess sex bias, and on black male and black female test sets to assess potential race bias. We find significant sex and race bias effects in segmentation model performance. The biases have a strong spatial component, with some brain regions exhibiting much stronger bias than others. Overall, our results indicate that race bias is more significant than sex bias. Our study demonstrates the importance of considering race and sex balance when assembling training sets for CNN-based brain MR segmentation, to avoid maintaining or even exacerbating existing health inequalities through biased research findings.
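A minimal sketch of how such group-wise performance differences can be quantified is given below; the Dice metric and the grouping scheme are generic illustrations, not the study's actual evaluation protocol.

    import numpy as np

    def dice_score(pred, truth):
        """Dice overlap between two binary segmentation masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

    def per_group_dice(scores, groups):
        """Mean Dice per demographic group; the spread across groups indicates bias."""
        return {g: float(np.mean([s for s, gg in zip(scores, groups) if gg == g]))
                for g in set(groups)}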
Estimating the generalization error (GE) of deep neural networks (DNNs) is an important task that often relies on the availability of held-out data. The ability to better predict GE from a single training set could yield overarching DNN design principles, reducing the reliance on trial and error along with other performance-assessment advantages. In search of a quantity relevant to GE, we investigate the mutual information (MI) between the input and the final-layer representation, using the infinite-width DNN limit to bound the MI. An existing input-compression-based GE bound is used to link MI and GE. To the best of our knowledge, this represents the first empirical study of that bound. In our attempt to empirically falsify the theoretical bound, we find that it is often tight for the best-performing models. Furthermore, it detects randomization of the training labels in many cases, reflects robustness to test-time perturbations, and works well with only few training samples. These results are promising given that input compression is broadly applicable wherever MI can be estimated with confidence.
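For orientation, the quantity at the heart of the bound, the MI between inputs and a learned representation, can be estimated crudely by discretization. The sketch below is an illustrative plug-in estimator for one-dimensional projections; it is not the paper's estimator, which relies on the infinite-width limit.

    import numpy as np

    def binned_mutual_information(x, t, n_bins=30):
        """Plug-in estimate of I(X; T) in bits for 1-D projections x and t.

        Histogram-based estimators underestimate MI badly in high dimensions;
        this only illustrates what is being measured.
        """
        joint, _, _ = np.histogram2d(x, t, bins=n_bins)
        pxt = joint / joint.sum()
        px = pxt.sum(axis=1, keepdims=True)
        pt = pxt.sum(axis=0, keepdims=True)
        nz = pxt > 0
        return float(np.sum(pxt[nz] * np.log2(pxt[nz] / (px @ pt)[nz])))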
The task of learning a probability distribution from samples is ubiquitous across the natural sciences. The output distributions of local quantum circuits form a particularly interesting class of distributions, of key importance both for quantum advantage proposals and for a variety of quantum machine learning algorithms. In this work, we provide an extensive characterization of the learnability of the output distributions of local quantum circuits. Our first result gives insight into the relationship between the efficient learnability and the efficient simulatability of these distributions. Specifically, we prove that the density modelling problem associated with Clifford circuits can be efficiently solved, while for depth $d = n^{\Omega(1)}$ circuits the injection of a single $T$ gate into the circuit renders this problem hard. This result shows that efficient simulatability does not imply efficient learnability. Our second set of results provides insight into the potential and limitations of quantum generative modelling algorithms. We first show that the generative modelling problem associated with depth $d = n^{\Omega(1)}$ local quantum circuits is hard for any learning algorithm, classical or quantum. As a consequence, one cannot use a quantum algorithm to obtain a practical advantage for this task. We then show that, for a wide variety of the most practically relevant learning algorithms (including hybrid quantum-classical algorithms), even the generative modelling problem associated with depth $d = \omega(\log(n))$ Clifford circuits is hard. This result places limitations on the applicability of near-term hybrid quantum-classical generative modelling algorithms.
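For orientation, the density modelling problem referred to above can be stated informally as follows (a paraphrase in standard notation, not the paper's exact definition): given samples drawn from the output distribution of an unknown circuit $U$ from a given class, the learner must, with high probability, output a description of a distribution $Q$ that is close in total-variation distance,
\[
P_U(x) \;=\; \bigl|\langle x \,|\, U \,|\, 0^n \rangle\bigr|^2, \qquad
d_{\mathrm{TV}}(P_U, Q) \;=\; \tfrac{1}{2} \sum_{x \in \{0,1\}^n} \bigl| P_U(x) - Q(x) \bigr| \;\le\; \epsilon .
\]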
We consider ramp metering at the microscopic level, subject to vehicle safety constraints. The traffic network is abstracted as a ring road with multiple on-ramps and off-ramps. The arrival times of vehicles at the on-ramps and their destination off-ramps are modeled by exogenous stochastic processes. Once a vehicle is released from an on-ramp, it accelerates towards the free-flow speed if it is not obstructed by another vehicle; once it gets close to another vehicle, it adopts a safe behavior. A vehicle exits the traffic network upon reaching its destination off-ramp. We design traffic-responsive ramp metering policies that maximize the saturation region of the network. The saturation region of a policy is defined as the set of demands, i.e., arrival rates and routing matrices, for which the queue lengths at all the on-ramps remain bounded in expectation. The proposed ramp metering policies operate under synchronized cycles, during which an on-ramp does not release more vehicles than its queue size at the beginning of the cycle. We provide three policies under which each on-ramp either (i) pauses release for a time interval at the end of the cycle, or (ii) modulates its release rate during the cycle, or (iii) adopts a conservative safety criterion for releasing vehicles during the cycle. None of these policies, however, requires information about the demand. The saturation regions of the policies are characterized by studying the stochastic stability of an induced Markov chain, and are shown to be maximal when the merging speed of every on-ramp equals the free-flow speed. Simulations are provided to illustrate the performance of the policies.
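The cycle-based release cap can be illustrated with a toy queueing simulation. The sketch below only models on-ramp queues under Poisson arrivals and the rule that a ramp never releases more vehicles in a cycle than its queue size at the start of that cycle; the arrival rates, cycle length, and per-step release cap are placeholder values, and the safety-constrained microscopic vehicle dynamics are not modeled.

    import numpy as np

    rng = np.random.default_rng(0)
    n_ramps, cycle_len, n_cycles = 3, 60, 500      # placeholder values
    arrival_rates = np.array([0.10, 0.15, 0.05])   # expected arrivals per time step (assumed)
    release_per_step = 0.5                         # max vehicles released per step (assumed)

    queues = np.zeros(n_ramps)
    history = []
    for _ in range(n_cycles):
        quota = queues.copy()  # never release more than the queue size at the cycle start
        for _ in range(cycle_len):
            queues = queues + rng.poisson(arrival_rates)
            released = np.minimum(np.minimum(release_per_step, queues), quota)
            queues -= released
            quota -= released
        history.append(queues.copy())

    print("time-averaged queue lengths:", np.mean(history, axis=0))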
The failure of deep neural networks to generalize to out-of-distribution data is a well-known problem and raises concerns about the deployment of trained networks in safety-critical domains such as healthcare, finance, and autonomous vehicles. We study a particular kind of distribution shift: shortcuts, or spurious correlations, in the training data. Shortcut learning is often only exposed when models are evaluated on real-world data that does not contain the same spurious correlations, posing a serious dilemma for AI practitioners trying to properly assess the effectiveness of a trained model for real-world applications. In this work, we propose to use the mutual information (MI) between the learned representation and the input as a metric to find the point in training at which the network latches onto shortcuts. Experiments demonstrate that MI can be used as a domain-agnostic metric for monitoring shortcut learning.
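To make the notion of a shortcut concrete, the sketch below stamps a label-correlated corner patch onto training images; this colored-patch construction is a generic illustration of a spurious correlation, not a dataset used in the paper, and the patch size and correlation strength are arbitrary.

    import numpy as np

    def add_shortcut(images, labels, correlation=0.95, rng=None):
        """Stamp a 3x3 corner patch whose intensity encodes the label for most samples.

        images: (N, H, W) float array in [0, 1]; labels: (N,) integer array.
        With high `correlation`, a model can fit the training labels from the
        patch alone and then fail on data where the correlation is broken.
        """
        if rng is None:
            rng = np.random.default_rng(0)
        images = images.copy()
        n_classes = int(labels.max()) + 1
        for i, y in enumerate(labels):
            value = int(y) if rng.random() < correlation else int(rng.integers(n_classes))
            images[i, :3, :3] = value / max(n_classes - 1, 1)
        return images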
Variational inference uses optimization, rather than integration, to approximate the marginal likelihood, and thereby the posterior, in a Bayesian model. Thanks to advances in computational scalability made in the last decade, variational inference is now the preferred choice for many high-dimensional models and large datasets. This tutorial introduces variational inference from the parametric perspective that dominates these recent developments, in contrast to the mean-field perspective commonly found in other introductory texts.
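For readers new to the topic, the central object in variational inference is the evidence lower bound (ELBO); the standard form below is included only for orientation and follows common notation rather than the tutorial's own.
\[
\log p(x) \;=\; \log \int p(x, z)\, dz \;\ge\; \mathbb{E}_{q_\lambda(z)}\!\bigl[\log p(x, z) - \log q_\lambda(z)\bigr] \;=\; \mathrm{ELBO}(\lambda),
\]
and maximizing the ELBO over the variational parameters $\lambda$ is equivalent to minimizing $\mathrm{KL}\bigl(q_\lambda(z)\,\|\,p(z \mid x)\bigr)$, since the gap between $\log p(x)$ and the ELBO is exactly this KL divergence.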
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which draws on the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: firstly, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Secondly, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
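One way the first insight could be realized is masked average pooling of support features followed by similarity-based re-weighting of query features. The PyTorch sketch below is a generic illustration under that assumption and is not the paper's actual module.

    import torch
    import torch.nn.functional as F

    def masked_class_center(support_feats, support_masks):
        """Masked average pooling: one class center from support features and masks.

        support_feats: (S, C, H, W) features; support_masks: (S, 1, H', W') binary masks.
        """
        masks = F.interpolate(support_masks.float(), size=support_feats.shape[-2:], mode="nearest")
        centers = (support_feats * masks).sum(dim=(2, 3)) / masks.sum(dim=(2, 3)).clamp(min=1e-6)
        return centers.mean(dim=0)  # (C,) averaged over the support set

    def reweight_query(query_feats, class_center):
        """Scale query features by their cosine similarity to the class center."""
        center = class_center.view(1, -1, 1, 1).expand_as(query_feats)
        sim = F.cosine_similarity(query_feats, center, dim=1)
        return query_feats * sim.unsqueeze(1)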
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
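As a rough illustration of what image-level photometric alignment can mean, the sketch below matches per-channel statistics of a source image to a target image; this is a deliberately simple stand-in, not the paper's global photometric alignment module.

    import numpy as np

    def match_channel_statistics(source, target):
        """Shift and scale each channel of `source` to match `target`'s mean and std.

        Both inputs are float arrays of shape (H, W, 3) with values in [0, 1].
        """
        aligned = np.empty_like(source)
        for c in range(source.shape[-1]):
            s_mu, s_std = source[..., c].mean(), source[..., c].std() + 1e-6
            t_mu, t_std = target[..., c].mean(), target[..., c].std()
            aligned[..., c] = (source[..., c] - s_mu) / s_std * t_std + t_mu
        return np.clip(aligned, 0.0, 1.0)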