Active learning is a popular approach to reducing the amount of data needed to train deep neural network models. Its success hinges on the choice of an effective acquisition function, which ranks not-yet-labeled points according to their expected informativeness. In uncertainty sampling, the uncertainty of the current model about the class labels of points is the main criterion for this ranking. This paper proposes a new approach to uncertainty sampling for training convolutional neural networks (CNNs). The main idea is to use the feature representations extracted by the CNN as data for training a sum-product network (SPN). Since SPNs are typically used to estimate the distribution of a dataset, they are well suited to the task of estimating class probabilities, which can be used directly by standard acquisition functions such as max entropy and variation ratios. Moreover, we enhance these acquisition functions with weights computed with the help of the SPN model. These weights make the acquisition functions more sensitive to the diversity of the conjectured class labels of the data points. The effectiveness of our method is demonstrated in an experimental study on the MNIST, Fashion-MNIST, and CIFAR-10 datasets, where we compare it with the state-of-the-art methods MC Dropout and Bayesian Batch.
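As a point of reference, below is a minimal sketch of the two standard acquisition functions named above, assuming the SPN has already produced per-point class probabilities; the Dirichlet draw is only a stand-in for those SPN outputs, and the paper's SPN-derived diversity weights are not reproduced here.

```python
import numpy as np

def max_entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of predictive class probabilities, shape (N, C)."""
    eps = 1e-12  # numerical safety for log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def variation_ratios(probs: np.ndarray) -> np.ndarray:
    """One minus the probability of the modal class."""
    return 1.0 - probs.max(axis=1)

# Rank the unlabeled pool: highest score is queried first.
probs = np.random.dirichlet(np.ones(10), size=1000)  # stand-in for SPN class probabilities
query_order = np.argsort(-max_entropy(probs))
```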
Even though active learning forms an important pillar of machine learning, deep learning tools are not prevalent within it. Deep learning poses several difficulties when used in an active learning setting. First, active learning (AL) methods generally rely on being able to learn and update models from small amounts of data. Recent advances in deep learning, on the other hand, are notorious for their dependence on large amounts of data. Second, many AL acquisition functions rely on model uncertainty, yet deep learning methods rarely represent such model uncertainty. In this paper we combine recent advances in Bayesian deep learning into the active learning framework in a practical way. We develop an active learning framework for high dimensional data, a task which has been extremely challenging so far, with very sparse existing literature. Taking advantage of specialised models such as Bayesian convolutional neural networks, we demonstrate our active learning techniques with image data, obtaining a significant improvement on existing active learning approaches. We demonstrate this on both the MNIST dataset, as well as for skin cancer diagnosis from lesion images (ISIC2016 task).
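To make the uncertainty machinery concrete, here is a hedged PyTorch sketch of MC-dropout scoring with BALD, one of the acquisition functions used in this line of work; it assumes `model` contains dropout layers and is a simplification, not the paper's exact implementation.

```python
import torch

@torch.no_grad()
def mc_dropout_probs(model, x, T=20):
    """Stack softmax outputs over T stochastic forward passes with dropout active.

    Caveat: model.train() also puts batch-norm layers into training mode;
    in practice only the dropout layers should be switched to train mode.
    """
    model.train()
    return torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])  # (T, N, C)

def bald_scores(probs):
    """BALD: mutual information between the label and the model parameters."""
    mean = probs.mean(0)                                                # (N, C)
    predictive_entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)
    expected_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
    return predictive_entropy - expected_entropy                        # (N,)
```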
Active learning has demonstrated data efficiency in many fields. Existing active learning algorithms, especially in the context of deep Bayesian active learning, rely heavily on the quality of the model's uncertainty estimates. However, such uncertainty estimates can be heavily biased, especially with limited and imbalanced training data. In this paper, we propose BALanCe, a Bayesian deep active learning framework that mitigates the effect of such biases. Specifically, BALanCe employs a novel acquisition function that leverages the structure captured by equivalence hypothesis classes and facilitates differentiation among different equivalence classes. Intuitively, each equivalence class consists of instantiations of deep models with similar predictions, and BALanCe adaptively adjusts the size of the equivalence classes as learning progresses. Beyond the fully sequential setting, we further propose Batch-BALanCe, a generalization of the sequential algorithm to the batched setting, which efficiently selects batches of training examples that are jointly effective for model improvement. We show that Batch-BALanCe achieves state-of-the-art performance on several benchmark datasets for active learning, and that both algorithms can effectively handle realistic challenges that often involve multi-class and imbalanced data.
We develop BatchBALD, a tractable approximation to the mutual information between a batch of points and model parameters, which we use as an acquisition function to select multiple informative points jointly for the task of deep Bayesian active learning. BatchBALD is a greedy, linear-time 1 − 1/e-approximate algorithm amenable to dynamic programming and efficient caching. We compare BatchBALD to the commonly used approach for batch data acquisition and find that the current approach acquires similar and redundant points, sometimes performing worse than randomly acquiring data. We finish by showing that, using BatchBALD to consider dependencies within an acquisition batch, we achieve new state-of-the-art performance on standard benchmarks, providing substantial data efficiency improvements in batch acquisition.
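Below is a compact sketch of the greedy selection, assuming MC samples of softmax probabilities of shape (T, N, C). It keeps the exact joint over the growing batch, so it costs O(C^b) per candidate and only works for small batches and class counts; the paper's algorithm instead samples joint configurations and adds the caching mentioned above.

```python
import numpy as np

def batchbald_greedy(probs, batch_size):
    """Greedily build a batch maximizing H(y_1..y_b) - E_theta[sum_i H(y_i | theta)]."""
    T, N, C = probs.shape
    cond_H = -(probs * np.log(probs + 1e-12)).sum(-1).mean(0)  # (N,) expected per-point entropy
    joint = np.ones((T, 1))  # joint probability table over the chosen batch, per theta sample
    chosen = []
    for _ in range(batch_size):
        best, best_score, best_joint = None, -np.inf, None
        for i in range(N):
            if i in chosen:
                continue
            j = (joint[:, :, None] * probs[:, i, None, :]).reshape(T, -1)  # extend joint by y_i
            p = j.mean(0)  # MC estimate of the joint predictive distribution
            score = -(p * np.log(p + 1e-12)).sum() - cond_H[chosen + [i]].sum()
            if score > best_score:
                best, best_score, best_joint = i, score, j
        chosen.append(best)
        joint = best_joint
    return chosen
```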
The generalisation performance of a convolutional neural network (CNN) is strongly influenced by the quantity, quality, and diversity of the training images. All the training data need to be annotated in advance, yet in many real-world applications data is easy to acquire but expensive and time-consuming to label. The goal of active learning is to draw the most informative samples from the unlabeled pool, which can then be used for training after annotation. With a different objective altogether, self-supervised learning (SSL) has been gaining meteoric popularity by closing the performance gap with supervised methods on large computer vision benchmarks. SSL methods have been shown to produce low-level representations that are invariant to distortions of the input sample and can encode invariance to artificially created distortions, e.g. rotation, solarization, cropping etc. SSL approaches also rely on simpler and more scalable frameworks for learning. In this paper, we unify these two families of approaches from the angle of active learning using the self-supervised-learning manifold and propose Deep Active Learning using BarlowTwins (DALBT), an active learning method that combines a classifier trained jointly with the self-supervised loss framework of Barlow Twins, so that the model can encode the invariance of artificially created distortions, e.g. rotation, solarization, cropping etc.
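For reference, this is a sketch of the Barlow Twins redundancy-reduction loss that such a joint objective would add to the classification loss; `z1` and `z2` are embeddings of two distorted views of the same batch, and the combination weight `alpha` is an assumed hyperparameter rather than a value from the paper.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Push the cross-correlation matrix of two views toward the identity:
    on-diagonal terms enforce invariance, off-diagonal terms decorrelate features."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-9)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-9)
    c = (z1.T @ z2) / z1.shape[0]                  # (D, D) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

# Assumed joint objective for this sketch:
# total_loss = cross_entropy_loss + alpha * barlow_twins_loss(z1, z2)
```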
Acquiring labeled data is challenging in many machine learning applications with limited budgets. Active learning gives a procedure to select the most informative data points and improve data efficiency by reducing the cost of labeling. The info-max learning principle of maximizing mutual information, as in BALD, has been successful and widely adopted in various active learning applications. However, this pool-based specific objective inherently introduces redundant selection and further requires a high computational cost for batch selection. In this paper, we design and propose a new uncertainty measure, Balanced Entropy Acquisition (BalEntAcq), which captures the information balance between the uncertainty of the underlying softmax probability and the label variable. To do this, we approximate each marginal distribution by a Beta distribution. The Beta approximation enables us to formulate BalEntAcq as a ratio between an augmented entropy and the marginalized joint entropy. The closed-form expression of BalEntAcq facilitates parallelization by estimating two parameters in each marginal Beta distribution. BalEntAcq is a purely standalone measure without requiring any relational computations with other data points. Nevertheless, BalEntAcq captures a well-diversified selection near the decision boundary with a margin, unlike other existing uncertainty measures such as BALD, Entropy, or Mean Standard Deviation (MeanSD). Finally, we demonstrate that our balanced entropy learning principle with BalEntAcq consistently outperforms well-known linearly scalable active learning methods, including the recently proposed PowerBALD, a simple but diversified version of BALD, by showing experimental results obtained from the MNIST, CIFAR-100, SVHN, and TinyImageNet datasets.
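The sketch below shows only the parallelizable moment-matching step described above, fitting a Beta distribution to each marginal softmax probability from MC samples; combining the fitted parameters into the ratio of augmented entropy to marginalized joint entropy follows the paper's closed form and is not reproduced here.

```python
import numpy as np

def beta_moment_match(samples):
    """Fit Beta(alpha, beta) per marginal by moment matching.

    samples: MC softmax draws of shape (T, N, C); returns (alpha, beta), each (N, C).
    Valid whenever the sample variance stays below m * (1 - m), which holds
    for nondegenerate draws in (0, 1).
    """
    m = samples.mean(0)
    v = samples.var(0) + 1e-12
    k = m * (1.0 - m) / v - 1.0  # common moment-matching factor
    return m * k, (1.0 - m) * k
```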
Labeling data can be an expensive task, as it is usually performed manually by domain experts. This is cumbersome for deep learning, which depends on large labeled datasets. Active learning (AL) is a paradigm that aims to reduce labeling effort by using only the data that the model in use deems most informative. Little research has been done on AL in a text classification setting, and next to none has involved the more recent, state-of-the-art natural language processing (NLP) models. Here, we present an empirical study that compares different uncertainty-based algorithms with BERT$_{base}$ as the classifier. We evaluate the algorithms on two NLP classification datasets: Stanford Sentiment Treebank and KvK-Frontpages. Additionally, we explore heuristics that aim to solve presupposed problems of uncertainty-based AL, namely that it is unscalable and that it is prone to selecting outliers. Furthermore, we explore the influence of the query-pool size on the performance of AL. While the proposed heuristics were not found to improve the performance of AL, our results show that uncertainty-based AL with BERT$_{base}$ outperforms random sampling of data. This difference in performance can diminish as the query-pool size grows.
We study different aspects of active learning with deep neural networks in a consistent and unified way. i) We investigate incremental and cumulative training modes which specify how the newly labeled data are used for training. ii) We study active learning w.r.t. the model configurations such as the number of epochs and neurons as well as the choice of batch size. iii) We consider in detail the behavior of query strategies and their corresponding informativeness measures and accordingly propose more efficient querying procedures. iv) We perform statistical analyses, e.g., on actively learned classes and test error estimation, that reveal several insights about active learning. v) We investigate how active learning with neural networks can benefit from pseudo-labels as proxies for actual labels.
In active learning, the size and complexity of the training dataset change over time. Simple models that are well specified by the amount of data available at the start of active learning might suffer from bias as more points are actively sampled. Flexible models that might be well suited to the full dataset can suffer from overfitting at the start of active learning. We tackle this problem using Depth Uncertainty Networks (DUNs), a BNN variant in which the depth of the network, and thus its complexity, is inferred. We find that DUNs outperform other BNN variants on several active learning tasks. Importantly, we show that on the tasks where DUNs perform best, they exhibit notably less overfitting than the baselines.
Active learning enables efficient model training by leveraging interactions between machine learning agents and human annotators. We study and propose a novel framework that formulates batch active learning from the sparse approximation's perspective. Our active learning method aims to find an informative subset from the unlabeled data pool such that the corresponding training loss function approximates its full data pool counterpart. We realize the framework as sparsity-constrained discontinuous optimization problems, which explicitly balance uncertainty and representation for large-scale applications and could be solved by greedy or proximal iterative hard thresholding algorithms. The proposed method can adapt to various settings, including both Bayesian and non-Bayesian neural networks. Numerical experiments show that our work achieves competitive performance across different settings with lower computational complexity.
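As a rough illustration of the optimization machinery, here is a generic iterative-hard-thresholding skeleton over per-sample subset weights; `grad_fn`, the gradient of the loss-approximation objective with respect to those weights, is an assumed callable, and the paper's greedy variant and its explicit uncertainty/representation balancing are not shown.

```python
import numpy as np

def iterative_hard_thresholding(grad_fn, n_points, k, lr=0.1, steps=100):
    """Gradient step on the subset-weight vector, then project onto
    the set of nonnegative k-sparse vectors."""
    w = np.zeros(n_points)
    for _ in range(steps):
        w = np.maximum(w - lr * grad_fn(w), 0.0)  # descent step, keep weights nonnegative
        keep = np.argsort(w)[-k:]                  # indices of the k largest weights
        mask = np.zeros_like(w)
        mask[keep] = 1.0
        w *= mask                                  # hard-threshold to k-sparse
    return np.flatnonzero(w)                       # selected pool indices
```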
While deep learning succeeds in a wide range of tasks, it highly depends on the massive collection of annotated data, which is expensive and time-consuming. To lower the cost of data annotation, active learning has been proposed to interactively query an oracle to annotate a small proportion of informative samples in an unlabeled dataset. Inspired by the fact that samples with higher loss are usually more informative to the model than samples with lower loss, in this paper we present a novel deep active learning approach that queries the oracle for data annotation when an unlabeled sample is believed to incorporate high loss. The core of our approach is a measurement, Temporal Output Discrepancy (TOD), which estimates the sample loss by evaluating the discrepancy between the outputs given by models at different optimization steps. Our theoretical investigation shows that TOD lower-bounds the accumulated sample loss, and thus it can be used to select informative unlabeled samples. On the basis of TOD, we further develop an effective unlabeled data sampling strategy as well as an unsupervised learning criterion for active learning. Due to the simplicity of TOD, our methods are efficient, flexible, and task-agnostic. Extensive experimental results demonstrate that our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks. In addition, we show that TOD can be utilized to select the best model, with potentially the highest testing accuracy, from a pool of candidate models.
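A minimal sketch of the TOD measurement itself, assuming snapshots of the same network saved at two different optimization steps; the sampling strategy and the unsupervised criterion built on top of TOD are omitted.

```python
import torch

@torch.no_grad()
def temporal_output_discrepancy(model_now, model_earlier, x):
    """L2 distance between outputs of the same network at two training steps.

    A larger discrepancy serves as a proxy for higher sample loss, so pool
    points would be queried in decreasing TOD order.
    """
    return (model_now(x) - model_earlier(x)).norm(p=2, dim=-1)
```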
Expanding on MacKay (1992), we argue that conventional model-based methods for active learning, such as BALD, have a fundamental shortcoming: they fail to directly account for the test-time distribution of the input variables. This can lead to pathologies in the acquisition strategy, as what is maximally informative about the model parameters may not be maximally informative for prediction: for example, when the data in the pool set is more dispersed than that of the final prediction task, or when the distributions of the pool and test samples differ. To correct this, we revisit an acquisition strategy that is based on maximizing the expected information gained about possible future predictions, referring to it as the expected predictive information gain (EPIG). As EPIG does not scale well to batch acquisition, we further examine an alternative strategy, a hybrid between BALD and EPIG, which we call the joint expected predictive information gain (JEPIG). We consider using both with Bayesian neural networks for active learning across a variety of datasets, examining their behavior under distribution shift in the pool set.
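For concreteness, a plain Monte Carlo estimator of EPIG under the discrete-label setup: given softmax samples for pool points and for inputs drawn from the target distribution, it averages the mutual information between each pool label and each target label. The shapes and the O(N·M·C²) memory footprint are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

def epig_scores(probs_pool, probs_target):
    """probs_pool: (T, N, C) and probs_target: (T, M, C) MC softmax samples."""
    T = probs_pool.shape[0]
    # joint p(y, y*) for every (pool, target) pair, averaged over parameter samples
    joint = np.einsum('tnc,tmk->nmck', probs_pool, probs_target) / T
    marg_pool = probs_pool.mean(0)      # (N, C)
    marg_target = probs_target.mean(0)  # (M, C)
    indep = marg_pool[:, None, :, None] * marg_target[None, :, None, :]
    mi = (joint * np.log(joint / (indep + 1e-12) + 1e-12)).sum(axis=(2, 3))  # (N, M)
    return mi.mean(axis=1)  # average over sampled target inputs
```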
While deep learning (DL) is data-hungry and usually relies on extensive labeled data to deliver good performance, active learning (AL) reduces labeling costs by selecting a small proportion of informative samples from unlabeled data for labeling and training. Therefore, deep active learning (DAL) has risen in recent years as a feasible solution for maximizing model performance under a limited labeling cost/budget. Although abundant DAL methods have been developed and various literature reviews conducted, a performance evaluation of DAL methods under fair comparison settings is not yet available. Our work intends to fill this gap. In this work, we construct a DAL toolkit, DeepAL+, by re-implementing 19 highly cited DAL methods. We survey and categorize DAL-related works and construct comparative experiments over frequently used datasets and DAL algorithms. Additionally, we explore some factors (e.g., batch size, number of epochs in the training process) that influence the efficacy of DAL, which provide better references for researchers to design their DAL experiments or carry out DAL-related applications.
Acquiring labels for supervised learning can be expensive. To improve the sample efficiency of neural network regression, we study active learning methods that adaptively select batches of unlabeled data for labeling. We present a framework for constructing such methods out of (network-dependent) base kernels, kernel transformations, and selection methods. Our framework encompasses many existing Bayesian methods based on Gaussian process approximations of neural networks, as well as non-Bayesian methods. Additionally, we propose to replace the commonly used last-layer features with sketched finite-width neural tangent kernels and to combine them with a novel clustering method. To evaluate different methods, we introduce an open-source benchmark consisting of 15 large tabular regression datasets. Our proposed method outperforms the state of the art on our benchmark, scales to large datasets, and works out of the box without adjusting the network architecture or training code. We provide open-source code that includes efficient implementations of all kernels, kernel transformations, and selection methods, and can be used to reproduce our results.
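To make the framework concrete, here is a sketch of one simple instance: the last-layer-feature (linear) base kernel paired with a greedy maximum-posterior-variance selection method. The paper's best configuration (sketched finite-width NTK plus the novel clustering method) is more involved; the identity weight prior and `sigma2` are assumptions of this sketch.

```python
import numpy as np

def greedy_variance_selection(Phi, n_select, sigma2=1e-2):
    """Greedily pick pool points with maximal predictive variance under
    Bayesian linear regression on last-layer features Phi of shape (N, d),
    updating the weight posterior after each pick."""
    cov = np.eye(Phi.shape[1])  # prior covariance over the last-layer weights
    picked = []
    for _ in range(n_select):
        var = np.einsum('nd,de,ne->n', Phi, cov, Phi)  # predictive variances
        var[picked] = -np.inf                           # never re-pick a point
        i = int(np.argmax(var))
        picked.append(i)
        cov_phi = cov @ Phi[i]
        cov -= np.outer(cov_phi, cov_phi) / (sigma2 + Phi[i] @ cov_phi)  # rank-1 update
    return picked
```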
Modern deep learning methods constitute incredibly powerful tools for tackling a myriad of challenging problems. However, since deep learning methods operate as black boxes, the uncertainty associated with their predictions is often difficult to quantify. Bayesian statistics offers a formalism for understanding and quantifying the uncertainty associated with deep neural network predictions. This tutorial provides an overview of the relevant literature and a complete toolset for designing, implementing, training, using, and evaluating Bayesian neural networks, i.e., stochastic artificial neural networks trained using Bayesian methods.
We introduce supervised contrastive active learning (SCAL) and propose efficient active learning strategies relying on feature similarity (featuresim) and principal-component-analysis-based feature reconstruction error (FRE) to select informative data samples with diverse feature representations. We demonstrate that our proposed methods achieve state-of-the-art accuracy and model calibration and reduce sampling bias in an active learning setup for balanced and imbalanced datasets on image classification tasks. We also evaluate the robustness of models to distributional shift derived from different query strategies in an active learning setup. Using extensive experiments, we show that our proposed approach outperforms high-performing compute-intensive methods, achieving 9.9% lower mean corruption error, 7.2% lower expected calibration error under dataset shift, and 8.9% higher AUROC for out-of-distribution detection.
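A small sketch of the PCA-based FRE scoring described above, assuming penultimate-layer features for the labeled set and the unlabeled pool; the number of principal components is an assumed hyperparameter.

```python
import numpy as np
from sklearn.decomposition import PCA

def feature_reconstruction_error(labeled_feats, pool_feats, n_components=32):
    """Project pool features onto the principal subspace of the labeled set
    and score each point by its reconstruction error; a high error suggests
    a feature direction the labeled data does not yet cover."""
    pca = PCA(n_components=n_components).fit(labeled_feats)
    recon = pca.inverse_transform(pca.transform(pool_feats))
    return np.linalg.norm(pool_feats - recon, axis=1)
```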
Active learning (AL) is a promising ML paradigm that has the potential to parse through large unlabeled data and help reduce annotation cost in domains where labeling data can be prohibitive. Recently proposed neural-network-based AL methods use different heuristics to accomplish this goal. In this study, we demonstrate that under identical experimental settings, different types of AL algorithms (uncertainty-based, diversity-based, and committee-based) produce inconsistent gains over a random sampling baseline. Through a variety of experiments, controlling for sources of stochasticity, we show that the variance in performance metrics achieved by AL algorithms can lead to results that are not consistent with previously reported findings. We also found that under strong regularization, AL methods show marginal or no advantage over the random sampling baseline under a variety of experimental conditions. Finally, we conclude with a set of recommendations on how to assess the results obtained from a new AL algorithm, to ensure results are reproducible and robust under changes in experimental conditions. We share our code to facilitate AL evaluations and believe our findings and recommendations will help advance reproducible research on AL with neural networks. We open-source our code at https://github.com/prateekmunjal/torchal
Active learning effectively builds a labeled dataset by annotating informative samples from an unlabeled pool. In real-world active learning scenarios, considering the diversity of the selected samples is crucial because many redundant or highly similar samples exist. The core-set approach is a promising diversity-based method that selects diverse samples based on the distances between them. However, it performs poorly compared to uncertainty-based approaches, which select the most difficult samples on which neural models exhibit low confidence. In this work, we analyze the feature space through the lens of density and, interestingly, observe that locally sparse regions tend to contain more informative samples than dense regions. Motivated by this analysis, we endow the core-set approach with density awareness and propose a density-aware core-set (DACS). The strategy is to estimate the density of the unlabeled samples and select diverse samples mainly from sparse regions. To reduce the computational bottleneck of estimating density, we also introduce a new density approximation based on locality-sensitive hashing. Experimental results clearly demonstrate the efficacy of DACS in both classification and regression tasks, and specifically show that DACS can produce state-of-the-art performance in a practical scenario. Since DACS is only weakly dependent on the neural architecture, we present a simple yet effective combination method to show that existing methods can be beneficially combined with DACS.
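Here is a sketch of a locality-sensitive-hashing density proxy of the kind described above, assuming random-hyperplane hashing over pool embeddings; the subsequent diverse (core-set style) selection from the low-density buckets is not shown.

```python
import numpy as np

def lsh_density(feats, n_planes=16, seed=0):
    """Hash features with random hyperplanes and use bucket occupancy
    as a cheap per-point density estimate."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((feats.shape[1], n_planes))
    bits = (feats @ planes > 0).astype(np.int64)
    codes = bits @ (1 << np.arange(n_planes))  # integer bucket ids
    _, inverse, counts = np.unique(codes, return_inverse=True, return_counts=True)
    return counts[inverse]  # low counts indicate sparse (more informative) regions
```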
Active learning (AL) attempts to maximize a model's performance gain while annotating the fewest samples possible. Deep learning (DL) is greedy for data and requires a large amount of data to optimize its massive number of parameters so that the model learns how to extract high-quality features. In recent years, due to the rapid development of internet technology, we are in an era of information abundance and have access to massive amounts of data. In this way, DL has aroused strong interest among researchers and developed rapidly. Compared with DL, researchers' interest in AL has been relatively low, mainly because before the rise of DL, traditional machine learning required relatively few labeled samples, so early AL struggled to demonstrate its full value. Although DL has made breakthroughs in various fields, most of this success is due to a large number of existing annotated datasets. However, acquiring a large number of high-quality annotated datasets consumes a great deal of manpower, which is not affordable in fields that require high expertise, especially in areas such as speech recognition, information extraction, and medical imaging. Therefore, AL has gradually come to receive the attention it deserves. A natural idea is whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL. As a result, deep active learning (DAL) has emerged. Although related research is quite abundant, a comprehensive survey of DAL has been lacking. This article aims to fill this gap: we provide a formal classification of the existing work, together with a comprehensive and systematic overview. In addition, we analyze and summarize the development of DAL from the perspective of applications. Finally, we discuss open questions and problems in DAL and propose some possible directions for its development.
The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks a human to annotate the data that it perceives as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks, but most of them are either designed specifically for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic and works efficiently with deep networks. We attach a small parametric module, named the "loss prediction module," to a target network, and learn it to predict the target losses of unlabeled inputs. This module can then suggest data for which the target model is likely to produce a wrong prediction. The method is task-agnostic, as networks are learned from a single loss regardless of target tasks. We rigorously validate our method on image classification, object detection, and human pose estimation with recent network architectures. The results demonstrate that our method consistently outperforms the previous methods across the tasks.
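A hedged PyTorch sketch of such a loss prediction module, following the description above: each intermediate feature map is pooled and projected, and the projections are fused into a scalar loss estimate per input. The channel sizes and hidden width are assumptions, and the objective that fits the module to the target losses is omitted.

```python
import torch
import torch.nn as nn

class LossPredictionModule(nn.Module):
    """Predicts the target network's loss from its intermediate feature maps."""

    def __init__(self, feature_channels, hidden=128):
        super().__init__()
        self.fcs = nn.ModuleList(nn.Linear(c, hidden) for c in feature_channels)
        self.out = nn.Linear(hidden * len(feature_channels), 1)

    def forward(self, features):  # list of (N, C_i, H_i, W_i) feature maps
        pooled = [f.mean(dim=(2, 3)) for f in features]  # global average pooling
        h = torch.cat([torch.relu(fc(p)) for fc, p in zip(self.fcs, pooled)], dim=1)
        return self.out(h).squeeze(1)  # one predicted loss per sample
```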