We consider the general problem of learning from labeled and unlabeled data, which is often called semi-supervised learning or transductive inference. A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points. We present a simple algorithm to obtain such a smooth solution. Our method yields encouraging experimental results on a number of classification problems and demonstrates effective use of unlabeled data.
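A minimal sketch of the kind of smoothing algorithm this abstract describes: iterate F ← αSF + (1−α)Y over a symmetrically normalized affinity matrix until convergence. This is our NumPy illustration, not the authors' code; the RBF affinity, parameter defaults, and function names are assumptions.

```python
import numpy as np

def lgc_smooth(X, Y, sigma=1.0, alpha=0.99, n_iter=100):
    """Sketch of local-and-global-consistency smoothing.

    X: (n, d) data matrix; Y: (n, c) one-hot labels with zero rows for
    unlabeled points. Names and defaults are illustrative.
    """
    # RBF affinity W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), zero diagonal
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d))
    # Iterate F <- alpha * S F + (1 - alpha) * Y
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(1)  # predicted class per point
```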
An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is characterized in terms of harmonic functions, and is efficiently obtained using matrix methods or belief propagation. The resulting learning algorithms have intimate connections with random walks, electric networks, and spectral graph theory. We discuss methods to incorporate class priors and the predictions of classifiers obtained by supervised learning. We also propose a method of parameter learning by entropy minimization, and show the algorithm's ability to perform feature selection. Promising experimental results are presented for synthetic data, digit classification, and text classification tasks.
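The harmonic-function mean mentioned here has a short closed form: f_u = L_uu^{-1} W_ul f_l, the linear system the abstract refers to as "matrix methods". Below is a hedged NumPy sketch; all names are ours.

```python
import numpy as np

def harmonic_solution(W, f_l, labeled):
    """Sketch of the harmonic mean of a Gaussian random field on a graph.

    W: (n, n) symmetric non-negative weight matrix; f_l: (l, c) label
    indicators for the labeled points; `labeled`: boolean mask of length n.
    """
    L = np.diag(W.sum(1)) - W          # combinatorial graph Laplacian
    u = ~labeled                       # unlabeled vertices
    # Harmonic solution: f_u = L_uu^{-1} W_ul f_l
    f_u = np.linalg.solve(L[np.ix_(u, u)], W[np.ix_(u, labeled)] @ f_l)
    return f_u
```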
We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods, including support vector machines and regularized least squares, can be obtained as special cases. We use properties of reproducing kernel Hilbert spaces to prove new Representer theorems that provide a theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally, we briefly discuss unsupervised and fully supervised learning within our general framework.
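As a flavor of these algorithms, the Laplacian-regularized least-squares special case has a closed-form solution in the kernel expansion coefficients. The sketch below follows the common write-up of manifold regularization; constants, names, and the binary-target setup are our assumptions.

```python
import numpy as np

def lap_rls(K, L, y, labeled, lam_a=1e-2, lam_i=1e-2):
    """Hedged sketch of Laplacian regularized least squares (LapRLS).

    K: (n, n) kernel Gram matrix over all points; L: (n, n) graph
    Laplacian built from labeled and unlabeled points; y: (n,) targets,
    arbitrary on unlabeled entries; `labeled`: boolean mask.
    """
    n = K.shape[0]
    l = int(labeled.sum())
    J = np.diag(labeled.astype(float))     # selects the labeled rows
    y_masked = np.where(labeled, y, 0.0)
    # alpha = (J K + lam_a * l * I + lam_i * l / n^2 * L K)^{-1} y
    A = J @ K + lam_a * l * np.eye(n) + (lam_i * l / n ** 2) * (L @ K)
    alpha = np.linalg.solve(A, y_masked)
    return alpha                            # f(x) = sum_i alpha_i k(x_i, x)
```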
Semi-supervised learning has attracted the attention of researchers because it allows the structure of unlabeled data to be exploited, achieving competitive classification results with far fewer labels than supervised methods require. The Local and Global Consistency (LGC) algorithm is one of the best-known graph-based semi-supervised (GSSL) classifiers. Notably, its solution can be written as a linear combination of the known labels. The coefficients of this linear combination depend on a parameter $\alpha$, which determines the decay over time in a random walk when reaching labeled vertices. In this work, we discuss how removing the self-influence of labeled instances may be beneficial, and how it relates to the leave-one-out error. Moreover, we propose minimizing this leave-one-out loss with automatic differentiation. Within this framework, we propose methods to estimate label reliability and the diffusion rate. Optimizing the diffusion rate is accomplished more efficiently in a spectral representation. Results show that the label-reliability approach competes with robust L1-norm methods, that removing diagonal entries reduces the risk of overfitting, and that it leads to suitable criteria for parameter selection.
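A hedged sketch of the self-influence removal discussed above: the LGC solution is, up to scaling, F = (I − αS)^{-1}Y, so zeroing the diagonal of the propagation matrix makes each labeled point's score depend only on the *other* labels, which is what ties it to leave-one-out error. This is our reading of the abstract, not the authors' code.

```python
import numpy as np

def lgc_no_self_influence(S, Y, alpha=0.9):
    """S: (n, n) symmetrically normalized affinity; Y: (n, c) one-hot
    labels. Returns scores with each point's self-contribution removed."""
    n = S.shape[0]
    P = np.linalg.inv(np.eye(n) - alpha * S)   # propagation matrix
    P_noself = P - np.diag(np.diag(P))          # remove self-influence
    return P_noself @ Y                         # leave-one-out-style scores
```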
We present a semi-supervised learning framework based on graph embeddings. Given a graph between instances, we train an embedding for each instance to jointly predict the class label and the neighborhood context in the graph. We develop both transductive and inductive variants of our method. In the transductive variant of our method, the class labels are determined by both the learned embeddings and input feature vectors, while in the inductive variant, the embeddings are defined as a parametric function of the feature vectors, so predictions can be made on instances not seen during training. On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, we show improved performance over many of the existing models.
The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.
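To make the margin criterion concrete, here is a hedged sketch that only evaluates the LMNN objective for a fixed Mahalanobis matrix M; a real solver would minimize it over the positive semidefinite cone (the semidefinite program mentioned above). The pair format and all names are our own.

```python
import numpy as np

def lmnn_loss(M, X, y, targets, mu=0.5):
    """X: (n, d) data; y: (n,) integer labels; targets: list of (i, j)
    pairs where j is a same-class "target neighbor" of i."""
    def d(a, b):                       # squared Mahalanobis distance
        diff = X[a] - X[b]
        return diff @ M @ diff

    pull, push = 0.0, 0.0
    for i, j in targets:
        pull += d(i, j)                               # pull target neighbors close
        for k in np.flatnonzero(y != y[i]):           # differently labeled "impostors"
            push += max(0.0, 1.0 + d(i, j) - d(i, k)) # large-margin hinge
    return (1 - mu) * pull + mu * push
```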
Diffusion is the movement of molecules from a region of higher concentration to one of lower concentration, and it can be used to describe interactions among data points. In many machine learning problems, including transductive semi-supervised learning and few-shot learning, the relationship between labeled and unlabeled data points is a key component of high classification accuracy. In this paper, inspired by convection-diffusion ODEs, we propose a novel diffusion residual network (Diff-ResNet) that introduces a diffusion mechanism into the internals of a neural network. Under a structured-data assumption, we prove that the diffusion mechanism improves the distance-diameter ratio, increasing the separability of points from different classes and reducing the distances among nearby points of the same class. This property can readily be adopted by residual networks to construct separating hyperplanes. Extensive experiments on semi-supervised graph node classification and few-shot image classification across various datasets validate the effectiveness of the proposed diffusion mechanism.
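A minimal sketch of the diffusion mechanism as we read it: one explicit Euler step that moves each point toward the weighted average of its neighbors, shrinking intra-class distances. Step size, normalization, and names are assumptions, not the paper's implementation.

```python
import numpy as np

def diffusion_step(X, W, step=0.1):
    """X: (n, d) point features; W: (n, n) non-negative similarity weights.
    Implements x_i += step * sum_j w_ij (x_j - x_i)."""
    deg = W.sum(1, keepdims=True)          # weighted degree per point
    return X + step * (W @ X - deg * X)    # move toward neighborhood average
```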
Semi-supervised learning is becoming increasingly important because it can combine data carefully labeled by humans with abundant unlabeled data to train deep neural networks. Classic semi-supervised methods that focused on transductive learning have not been fully exploited in the inductive framework followed by modern deep learning. The same holds for the manifold assumption: that similar examples should get the same prediction. In this work, we employ a transductive label propagation method based on the manifold assumption to make predictions on the entire dataset, and we use these predictions to generate pseudo-labels for the unlabeled data and train a deep neural network. At the core of the transductive method lies a nearest-neighbor graph of the dataset that we create based on the embeddings of the same network; our learning process therefore iterates between these two steps. We improve performance on several datasets, especially in the few-label regime, and show that our work is complementary to the current state of the art.
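A hedged sketch of the transductive step as described: build a nearest-neighbor graph from the network's own embeddings, diffuse the known labels over it, and read off pseudo-labels. The alternation with network training is omitted, and all names and defaults are our assumptions.

```python
import numpy as np

def pseudo_labels_from_embeddings(E, Y, alpha=0.99, k=10):
    """E: (n, d) network embeddings; Y: (n, c) one-hot labels with zero
    rows for unlabeled points. Returns one pseudo-label per point."""
    n = E.shape[0]
    E = E / np.linalg.norm(E, axis=1, keepdims=True)   # cosine similarity
    sim = E @ E.T
    idx = np.argsort(-sim, axis=1)[:, 1:k + 1]         # k nearest neighbors (skip self)
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    W[rows, idx.ravel()] = np.clip(sim[rows, idx.ravel()], 0, None)
    W = np.maximum(W, W.T)                             # symmetrize
    d = W.sum(1)
    d[d == 0] = 1.0
    S = W / np.sqrt(np.outer(d, d))                    # normalized affinity
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)      # diffuse the labels
    return F.argmax(1)
```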
Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semi-supervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires a considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of oversmoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals.
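The Laplacian-smoothing view is easy to demonstrate: repeatedly applying the renormalized GCN operator (with weights and nonlinearities stripped out) averages each vertex's features with its neighbors', and stacking many such layers oversmooths. A minimal sketch of that operator, with names assumed:

```python
import numpy as np

def gcn_smoothing(A, H, n_layers=2):
    """A: (n, n) adjacency matrix; H: (n, d) vertex features.
    Applies the renormalized propagation D^{-1/2}(A + I)D^{-1/2} repeatedly."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(1)
    A_hat = A_tilde / np.sqrt(np.outer(d, d))   # symmetric renormalization
    for _ in range(n_layers):
        H = A_hat @ H                            # one Laplacian-smoothing step
    return H
```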
Recent decades have been revolutionized by a dramatic growth in the amount of data available in science and engineering. However, although collecting and storing data is now unprecedentedly easy, labeling the data by attaching a label to each feature remains challenging. The labeling process requires expert knowledge or tedious, time-consuming work; examples include labeling X-rays with a diagnosis, protein sequences with a protein type, texts by their topic, tweets by their sentiment, or videos by their genre. In these and many other examples, only a few features can be labeled manually because of cost and time constraints. How can we best propagate label information from a small number of expensively labeled features to a large number of unlabeled ones? This is the question posed by semi-supervised learning (SSL). This article overviews recent foundational developments in graph-based Bayesian SSL, a probabilistic framework for label propagation that uses similarities between features. SSL is an active research area, and a thorough review of the existing literature is beyond the scope of this article. Our focus will be on topics drawn from our own research that illustrate the wide range of mathematical tools and ideas underlying the rigorous study of the statistical accuracy and computational efficiency of graph-based Bayesian SSL.
Subspace clustering is the classical problem of clustering a collection of data samples that lie approximately in a union of several low-dimensional subspaces. The current state-of-the-art approaches to this problem are based on the self-expressive model, which represents each sample as a linear combination of the other samples. However, these approaches require a sufficiently spread-out set of samples for an accurate representation, which may not be accessible in many applications. In this paper, we shed light on this commonly overlooked issue and argue that the distribution of the data within each subspace plays a critical role in the success of self-expressive models. Our proposed solution is motivated by the central role of data augmentation in the generalization power of deep neural networks. We propose two subspace clustering frameworks, for the unsupervised and the semi-supervised settings, that use augmented samples as an enlarged dictionary to improve the quality of the self-expressive representation. We present an automatic augmentation strategy for the semi-supervised problem, using a few labeled samples, that relies on the fact that the data samples lie in a union of multiple linear subspaces. Experimental results confirm the effectiveness of data augmentation, which significantly improves the performance of general self-expressive models.
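For concreteness, a hedged sketch of the self-expressive model these frameworks build on: each sample is regressed on all the others with a sparsity penalty and a zero self-coefficient; augmentation would simply add columns to the dictionary. Solver choice, parameters, and names are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def self_expressive_coeffs(X, lam=0.01):
    """X: (n, d) samples as rows. Solves, per sample,
    min ||x_i - X c_i||^2 + lam ||c_i||_1  with  c_ii = 0."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)       # enforce c_ii = 0
        reg = Lasso(alpha=lam, fit_intercept=False)
        reg.fit(X[others].T, X[i])                # columns = the other samples
        C[i, others] = reg.coef_
    return C                                      # affinity ~ |C| + |C|.T
```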
The purpose of this review is to introduce the reader to graph kernels and their application to classification problems in chemoinformatics. Graph kernels are functions that allow us to infer the chemical properties of molecules; they can help with tasks such as finding compounds suitable for drug design. The use of kernel methods is just one particular way of quantifying similarity between graphs. We limit our discussion to this approach, although popular alternatives have emerged in recent years, most notably graph neural networks.
Similarity-based clustering methods separate data into clusters according to the pairwise similarities between the data points, and those pairwise similarities are crucial to their performance. In this paper, we propose Clustering by Discriminative Similarity (CDS), a novel method that learns a discriminative similarity for data clustering. CDS learns an unsupervised similarity-based classifier from each data partition and searches for the optimal partition of the data by minimizing the generalization error of the learned classifiers associated with the data partitions. Through generalization analysis via Rademacher complexity, the generalization error of the unsupervised similarity-based classifier is expressed as the sum of discriminative similarities between data points from different classes. We prove that the derived discriminative similarity can also be induced by the integrated squared error of kernel density classification. To evaluate the performance of the proposed discriminative similarity, we propose a new clustering method that uses a kernel as the similarity function, CDS via unsupervised kernel classification (CDSK), whose effectiveness is demonstrated by experimental results.
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
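A minimal sketch of the normalized variant of the recipe the tutorial derives, assuming NumPy and scikit-learn: embed the points with the bottom eigenvectors of a graph Laplacian, then run k-means in that embedding. The row normalization follows one common variant; other versions differ only in how the Laplacian is normalized.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, n_clusters):
    """W: (n, n) symmetric non-negative affinity matrix."""
    d = W.sum(1)
    L_sym = np.eye(len(d)) - W / np.sqrt(np.outer(d, d))  # I - D^{-1/2} W D^{-1/2}
    vals, vecs = np.linalg.eigh(L_sym)                    # ascending eigenvalues
    U = vecs[:, :n_clusters]                              # smallest eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)      # row-normalize
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)
```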
We present a new technique called "t-SNE" that visualizes high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
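For readers who want to try the technique, a hedged usage sketch with scikit-learn's independent implementation (not the authors' original code); the data here is a random placeholder.

```python
import numpy as np
from sklearn.manifold import TSNE

X = np.random.default_rng(0).normal(size=(500, 64))    # placeholder data
X_2d = TSNE(n_components=2, perplexity=30.0).fit_transform(X)
print(X_2d.shape)   # (500, 2) -> coordinates to scatter-plot
```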
We consider the semi-supervised learning problem, where a decision rule is to be learned from labeled and unlabeled data. In this framework, we motivate minimum entropy regularization, which enables unlabeled data to be incorporated into standard supervised learning. Our approach includes other approaches to the semi-supervised problem as particular or limiting cases. A series of experiments illustrates that the proposed solution benefits from unlabeled data. The method challenges mixture models when the data are sampled from the distribution class spanned by the generative model. Performance clearly favors minimum entropy regularization when the generative models are misspecified, and the weighting of unlabeled data provides robustness to violation of the "cluster assumption". Finally, we illustrate that the method can be far superior to manifold learning in high-dimensional spaces.
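The regularizer itself is one line: the supervised log-loss plus a penalty on the entropy of the model's predictions on unlabeled points, which pushes the decision boundary away from dense unlabeled regions. A hedged NumPy sketch, with array shapes and the weighting constant assumed:

```python
import numpy as np

def entropy_regularized_loss(p_labeled, y, p_unlabeled, lam=0.1):
    """p_labeled: (l, c) predicted class probabilities on labeled points;
    y: (l,) integer labels; p_unlabeled: (u, c) probabilities on
    unlabeled points."""
    eps = 1e-12
    ce = -np.mean(np.log(p_labeled[np.arange(len(y)), y] + eps))        # supervised log-loss
    ent = -np.mean(np.sum(p_unlabeled * np.log(p_unlabeled + eps), 1))  # prediction entropy
    return ce + lam * ent   # minimizing also minimizes unlabeled entropy
```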
Domain adaptation is a popular paradigm in modern machine learning that aims to tackle the problem of divergence between a training or validation dataset, which possesses labels for learning and testing a classifier (the source domain), and a potentially large unlabeled dataset on which the model is exploited (the target domain). The task is to find a common representation of the source and target datasets in which the source dataset is informative for training, while the divergence between source and target is minimized. Currently, the most popular domain adaptation solutions are based on training neural networks that combine classification and adversarial learning modules, which are data-hungry and often difficult to train. We present a method called Domain Adaptation Principal Component Analysis (DAPCA) that finds a linear, reduced data representation useful for solving the domain adaptation task. DAPCA is based on introducing positive weights between pairs of data points, and it generalizes the supervised extension of principal component analysis. DAPCA is an iterative algorithm that solves a simple quadratic optimization problem at each iteration. The convergence of the algorithm is guaranteed, and in practice the number of iterations is small. We validate the proposed algorithm on previously suggested benchmarks for the domain adaptation task, and also show the benefit of using DAPCA in the analysis of single-cell omics datasets in biomedical applications. Overall, DAPCA can serve as a useful preprocessing step in many machine learning applications, accounting for possible divergence between source and target domains.
Most existing semi-supervised graph-based clustering methods exploit supervisory information by refining the affinity matrix or by directly constraining the low-dimensional representations of the data points. The affinity matrix encodes the graph structure and is vital to the performance of semi-supervised graph-based clustering. However, existing methods adopt a static affinity matrix when learning the low-dimensional representations of the data points and do not optimize the affinity matrix during the learning process. In this paper, we propose a novel dynamic graph structure learning method for semi-supervised clustering. In this method, we simultaneously optimize the affinity matrix and the low-dimensional representations of the data points by leveraging the given pairwise constraints. Moreover, we propose an alternating minimization scheme with proven convergence to solve the proposed nonconvex model. During the iterations, our method cyclically updates the low-dimensional representations and refines the affinity matrix, leading to a dynamic affinity matrix (graph structure). Specifically, to update the affinity matrix, we enforce an affinity of 0 between pairs of data points whose low-dimensional representations differ markedly. Experimental results on eight benchmark datasets under different settings show the advantages of the proposed method.
Social scientists often classify text documents in order to use the resulting labels as an outcome or a predictor in empirical research. Automated text classification has become a standard tool, since it requires less human coding. However, scholars still need many human-labeled documents to train automated classifiers. To reduce labeling costs, we propose a new text classification algorithm that combines a probabilistic model with active learning. The probabilistic model uses both labeled and unlabeled data, while active learning concentrates labeling effort on documents that are difficult to classify. Our validation study shows that the classification performance of our algorithm is comparable to state-of-the-art methods at a fraction of the computational cost. Moreover, we replicate two recently published articles and reach the same substantive conclusions with only a small fraction of the original labeled data used in those studies. We provide ActiveText, open-source software that implements our method.
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given the input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only "virtually" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of the virtual adversarial loss can be computed with no more than two pairs of forward and back propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
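A hedged sketch of the power-iteration idea behind VAT: find the perturbation direction that most changes the model's output distribution, using no labels. For self-containedness we estimate the required gradient by finite differences around a black-box `predict` function, which is illustrative only and far less efficient than the forward/backward pairs the paper uses; all names are our own.

```python
import numpy as np

def vat_direction(predict, x, xi=1e-6, n_power=1, rng=None):
    """predict: maps an input array to class probabilities; x: one input.
    Returns a unit-norm virtual adversarial direction for x."""
    rng = rng or np.random.default_rng()
    def kl(p, q):
        return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)))
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    p = predict(x)                                  # current output distribution
    for _ in range(n_power):
        # numerically estimate grad_d KL(p || predict(x + xi * d))
        grad = np.zeros_like(d)
        for i in range(d.size):
            e = np.zeros_like(d)
            e.flat[i] = xi
            grad.flat[i] = (kl(p, predict(x + xi * d + e))
                            - kl(p, predict(x + xi * d - e))) / (2 * xi)
        d = grad / (np.linalg.norm(grad) + 1e-12)   # power-iteration update
    return d   # perturb x along d and penalize KL(p || p_perturbed)
```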