This work aims to address the long-standing problem of learning diversified representations. To this end, we combine information-theoretic arguments with stochastic competition-based activations, namely stochastic local winner-takes-all (LWTA) units. In this context, we depart from the conventional deep architectures used in representation learning, which rely on non-linear activations; instead, we replace them with sets of locally and stochastically competing linear units. In this setting, each network layer yields sparse outputs, determined by the outcome of the competition among units organized into blocks of competitors. We adopt stochastic arguments for the competition mechanism, performing posterior sampling to determine the winner of each block. We further endow the considered networks with the ability to infer the sub-part of the network that is essential for modeling the data at hand; we impose appropriate stick-breaking priors to this end. To further enrich the information content of the emerging representations, we resort to information-theoretic principles, namely the Information Competing Process (ICP). All components are then tied together under a stochastic variational Bayesian framework for inference. We perform a thorough experimental investigation of our approach on benchmark image classification datasets. As we experimentally show, the resulting networks yield significant discriminative representation learning ability. In addition, the introduced paradigm allows for a principled mechanism for investigating the emerging intermediate network representations.
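To make the stochastic LWTA mechanism concrete, below is a minimal PyTorch-style sketch of such a block: linear units are grouped into blocks, and within each block a single winner is sampled from a softmax over the unit activations (relaxed here with Gumbel-softmax so the layer stays differentiable). The layer sizes, temperature, and relaxation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticLWTA(nn.Module):
    """Linear layer whose outputs compete in blocks; one stochastic winner per block."""
    def __init__(self, in_features: int, num_blocks: int, units_per_block: int, tau: float = 0.67):
        super().__init__()
        self.num_blocks = num_blocks
        self.units_per_block = units_per_block
        self.tau = tau  # Gumbel-softmax temperature (assumed hyperparameter)
        self.linear = nn.Linear(in_features, num_blocks * units_per_block)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.linear(x)                                        # (B, K*U)
        h = h.view(-1, self.num_blocks, self.units_per_block)     # (B, K, U)
        if self.training:
            # sample a one-hot "winner" per block from a relaxed categorical
            # whose logits are the unit activations
            winners = F.gumbel_softmax(h, tau=self.tau, hard=True, dim=-1)
        else:
            winners = F.one_hot(h.argmax(dim=-1), self.units_per_block).to(h.dtype)
        out = h * winners                                         # zero out losers -> sparse output
        return out.view(-1, self.num_blocks * self.units_per_block)

# usage: at most one non-zero unit per block in the output
layer = StochasticLWTA(in_features=784, num_blocks=64, units_per_block=2)
y = layer(torch.randn(32, 784))
```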
This work explores the efficacy of stochastic competition-based activations, namely stochastic local winner-takes-all (LWTA) units, against powerful (gradient-based) white-box and black-box adversarial attacks; we especially focus on adversarial training settings. In our work, we replace the conventional ReLU-based nonlinearities with blocks comprising locally and stochastically competing linear units. The output of each network layer is now sparse, depending on the outcome of winner sampling in each block. We rely on the variational Bayesian framework for training and inference, and we incorporate conventional PGD-based adversarial training arguments to increase the overall adversarial robustness. As we experimentally show, the resulting networks yield state-of-the-art robustness against powerful adversarial attacks while retaining very high classification rates in the benign case.
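For reference, here is a generic sketch of one PGD-based adversarial training step of the kind referred to above (a Madry-style L-infinity attack). The epsilon, step size, and number of steps are placeholder values, and the code assumes inputs scaled to [0, 1]; it is not tied to the paper's implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L_inf-bounded adversarial examples with projected gradient descent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                 # keep valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    model.train()
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)               # train on adversarial examples
    loss.backward()
    optimizer.step()
    return loss.item()
```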
This work addresses meta-learning (ML) by considering deep networks with stochastic local winner-takes-all (LWTA) activations. This type of network unit yields a sparse representation at each model layer, since the units are organized into blocks in which only one unit generates a non-zero output. The main operating principle of the introduced units is stochastic, as the network performs posterior sampling over the competing units to select a winner. The proposed networks are therefore explicitly designed to extract input data representations of a sparse, stochastic nature, in contrast to the currently standard deterministic representation paradigm. Our approach yields state-of-the-art predictive accuracy on few-shot image classification and regression experiments, as well as reduced predictive error in an active learning setting; these improvements come at a substantially reduced computational cost.
This work investigates unsupervised learning of representations by maximizing mutual information between an input and the output of a deep neural network encoder. Importantly, we show that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation's suitability for downstream tasks. We further control characteristics of the representation by matching to a prior distribution adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a number of popular unsupervised learning methods and compares favorably with fully-supervised learning on several classification tasks with some standard architectures. DIM opens new avenues for unsupervised learning of representations and is an important step towards flexible formulations of representation learning objectives for specific end-goals.
While machine learning is traditionally a resource intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensure a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature that can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques as well as potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs, and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
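As a toy illustration of one of the surveyed techniques (post-training weight quantization), the snippet below performs symmetric per-tensor int8 quantization of a weight matrix. It is a didactic sketch only, not the API of any particular framework, and the shapes are arbitrary.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: map the largest magnitude to 127."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# toy usage: quantize a random weight matrix and inspect the reconstruction error
w = np.random.randn(256, 128).astype(np.float32)
q, scale = quantize_int8(w)
reconstruction_error = np.abs(w - dequantize(q, scale)).mean()  # small but non-zero
```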
A new bimodal generative model is proposed for generating conditional samples and joint samples, together with a training method that learns a succinct bottleneck representation. The proposed model, dubbed the variational Wyner model, is designed based on two classical problems in network information theory, distributed simulation and channel synthesis, in which Wyner's common information arises as the fundamental limit on the succinctness of the common representation. The model is trained by minimizing the symmetric Kullback-Leibler divergence between variational and model distributions with regularization terms for the common information, reconstruction consistency, and latent-space matching, which is carried out via an adversarial density-ratio estimation technique. The utility of the proposed approach is demonstrated through experiments on joint and conditional generation with synthetic and real-world datasets, as well as a challenging zero-shot image retrieval task.
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method "Deep Variational Information Bottleneck", or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.
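Below is a compact sketch of the VIB objective described above, assuming a PyTorch-style stochastic encoder q(z|x) = N(mu(x), sigma(x)^2) and a classifier q(y|z); the layer sizes and the value of beta are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepVIB(nn.Module):
    def __init__(self, in_dim=784, z_dim=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, 2 * z_dim))
        self.classifier = nn.Linear(z_dim, n_classes)

    def forward(self, x):
        mu, log_var = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)   # reparameterization trick
        return self.classifier(z), mu, log_var

def vib_loss(logits, y, mu, log_var, beta=1e-3):
    """Cross-entropy plus beta * KL(q(z|x) || N(0, I)), averaged over the batch."""
    ce = F.cross_entropy(logits, y)
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1).mean()
    return ce + beta * kl
```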
Estimating and optimizing Mutual Information (MI) is core to many problems in machine learning; however, bounding MI in high dimensions is challenging. To establish tractable and scalable objectives, recent work has turned to variational bounds parameterized by neural networks, but the relationships and tradeoffs between these bounds remain unclear. In this work, we unify these recent developments in a single framework. We find that the existing variational lower bounds degrade when the MI is large, exhibiting either high bias or high variance. To address this problem, we introduce a continuum of lower bounds that encompasses previous bounds and flexibly trades off bias and variance. On high-dimensional, controlled problems, we empirically characterize the bias and variance of the bounds and their gradients and demonstrate the effectiveness of our new bounds for estimation and representation learning.
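One concrete member of this family of variational bounds is the InfoNCE lower bound, sketched below with a bilinear critic over paired features; the critic form and feature shapes are illustrative assumptions, not a specific bound from the paper.

```python
import torch
import torch.nn.functional as F

def infonce_lower_bound(scores: torch.Tensor) -> torch.Tensor:
    """InfoNCE: I(X;Y) >= E[log softmax_i f(x_i, y_i)] + log N.

    scores[i, j] = critic value f(x_i, y_j); the diagonal holds positive pairs.
    """
    n = scores.shape[0]
    log_probs = F.log_softmax(scores, dim=1)   # positives vs. in-batch negatives
    return log_probs.diagonal().mean() + torch.log(torch.tensor(float(n)))

# toy usage with a bilinear critic; maximize the estimate w.r.t. W (and the encoders)
x_feat = torch.randn(128, 64)
y_feat = torch.randn(128, 64)
W = torch.randn(64, 64, requires_grad=True)
scores = x_feat @ W @ y_feat.t()
mi_estimate = infonce_lower_bound(scores)
```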
Bottleneck problems are an important family of optimization problems that have recently attracted attention in the fields of machine learning and information theory. They are widely used in generative models and fair machine learning algorithms, in the design of privacy-preserving mechanisms, and they arise as information-theoretic performance bounds in various multi-user communication problems. In this work, we propose a general family of optimization problems, termed the complexity-leakage-utility bottleneck (CLUB) model, which (i) provides a unified theoretical framework that generalizes most of the state-of-the-art literature on information-theoretic privacy models, (ii) establishes new interpretations of popular generative and discriminative models, (iii) yields new insights into generative compression models, and (iv) can be used in fair generative models. We first formulate the CLUB model as a complexity-constrained privacy-utility optimization problem. We then connect it with the closely related bottleneck problems, namely the information bottleneck (IB), the privacy funnel (PF), the deterministic IB (DIB), the conditional entropy bottleneck (CEB), and the conditional PF (CPF). We show that the CLUB model generalizes all of these problems as well as most other information-theoretic privacy models. We then construct the deep variational CLUB (DVCLUB) model by parameterizing variational approximations of the associated information quantities with neural networks. Building on these information quantities, we present unified objectives for the supervised and unsupervised DVCLUB models. Leveraging the DVCLUB model in the unsupervised setting, we then connect it with state-of-the-art generative models, such as variational autoencoders (VAEs) and generative adversarial networks (GANs), as well as the Wasserstein GAN (WGAN), Wasserstein autoencoder (WAE), and adversarial autoencoder (AAE) models, through the optimal transport (OT) problem. We then show that the DVCLUB model can also be used in fair representation learning problems, where the goal is to mitigate undesired biases during the training phase of a machine learning model. We conduct extensive quantitative experiments on colored-MNIST and CelebA datasets and provide a public implementation to evaluate and analyze the CLUB model.
In this work, we present mutual information maximization knowledge distillation (MIMKD). Our method uses a contrastive objective to simultaneously estimate and maximize a lower bound on the mutual information between local and global feature representations of a teacher and a student network. We demonstrate through extensive experiments that this improves the performance of low-capacity models by transferring knowledge from more performant but computationally expensive models, and can thus be used to produce better models that run on devices with limited computational resources. Our method is flexible: we can distill teachers with arbitrary network architectures into arbitrary student networks. Our empirical results show that MIMKD outperforms competing approaches across a wide range of student-teacher pairs with different architectures, including student networks with extremely low capacity. We are able to obtain 74.55% accuracy with a ShuffleNetV2, improving over its baseline accuracy, by distilling knowledge from a ResNet-50. On ImageNet, we improve a ResNet-18 network from 68.88% to 70.32% accuracy (+1.44%) using a ResNet-34 teacher network.
Nested dropout is a variant of the dropout operation that is able to order network parameters or features according to a predefined importance during training. It has been explored for: I. Constructing nested nets: nested nets are neural networks whose architectures can be adjusted instantly at test time (e.g., based on computational constraints). Nested dropout implicitly ranks the network parameters, generating a set of sub-networks such that any smaller sub-network forms the basis of a larger one. II. Learning ordered representations: nested dropout applied to the latent representation of a generative model (e.g., an auto-encoder) ranks the features, enforcing an explicit order over the dimensions of the dense representation. However, the dropout rate is fixed as a hyperparameter throughout training. For nested nets, when network parameters are removed, the performance decays along a human-specified trajectory rather than one learned from data. For generative models, the importance of the features is specified as a constant vector, restricting the flexibility of representation learning. To address these issues, we focus on the probabilistic counterpart of nested dropout. We propose a variational nested dropout (VND) operation that draws samples of multi-dimensional ordered masks at low cost, providing useful gradients for the parameters of nested dropout. Based on this approach, we design a Bayesian nested neural network that learns the order knowledge of the parameter distributions. We further exploit VND under different generative models to learn ordered latent distributions. In experiments, we show that the proposed approach outperforms nested networks in terms of accuracy, calibration, and out-of-domain detection on classification tasks. It also outperforms the related generative models on data generation tasks.
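To illustrate the ordered-mask idea that VND builds on, the sketch below samples classic nested-dropout masks: a cutoff index is drawn per sample and only the leading dimensions are kept, so earlier dimensions are trained more often and become more "important". The cutoff distribution is an arbitrary illustrative choice, and this is the fixed-importance baseline described above, not the variational operation proposed in the paper.

```python
import torch

def nested_dropout_mask(batch_size: int, dim: int, keep_prob: float = 0.9) -> torch.Tensor:
    """Sample nested masks: ones up to a random cutoff index, zeros afterwards."""
    # cutoff ~ truncated geometric-like distribution over {1, ..., dim}
    weights = keep_prob ** torch.arange(dim, dtype=torch.float)
    cutoff = torch.multinomial(weights, num_samples=batch_size, replacement=True) + 1
    idx = torch.arange(dim).unsqueeze(0)             # (1, dim)
    return (idx < cutoff.unsqueeze(1)).float()       # (batch, dim): ones then zeros

# usage: apply the ordered, nested sparsity pattern to a latent code
z = torch.randn(32, 16)
z_masked = z * nested_dropout_mask(32, 16)
```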
We propose a simultaneous learning and pruning algorithm capable of identifying and eliminating irrelevant structures in a neural network during the early stages of training. Thus, the computational cost of subsequent training iterations, besides that of inference, is considerably reduced. Our method, based on variational inference principles using Gaussian scale mixture priors on neural network weights, learns the variational posterior distribution of Bernoulli random variables multiplying the units/filters similarly to adaptive dropout. Our algorithm ensures that the Bernoulli parameters practically converge to either 0 or 1, establishing a deterministic final network. We analytically derive a novel hyper-prior distribution over the prior parameters that is crucial for their optimal selection and leads to consistent pruning levels and prediction accuracy regardless of weight initialization or the size of the starting network. We prove the convergence properties of our algorithm establishing theoretical and practical pruning conditions. We evaluate the proposed algorithm on the MNIST and CIFAR-10 data sets and the commonly used fully connected and convolutional LeNet and VGG16 architectures. The simulations show that our method achieves pruning levels on par with state-of-the-art methods for structured pruning, while maintaining better test accuracy and, more importantly, doing so in a manner robust with respect to network initialization and initial network size.
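A rough sketch of the unit-gating idea described above: each output unit is multiplied by a Bernoulli-like gate whose parameter is learned, relaxed with a Gumbel-sigmoid (concrete) sample during training and thresholded at test time to yield a deterministic network. The gate parameterization and relaxation are simplifications for illustration; the paper's method uses Gaussian scale mixture priors and a specific hyper-prior that are not reproduced here.

```python
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    """Linear layer whose output units are multiplied by learnable stochastic gates."""
    def __init__(self, in_features, out_features, temperature=0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.gate_logits = nn.Parameter(torch.zeros(out_features))  # logit of keep-probability per unit
        self.temperature = temperature

    def sample_gates(self):
        if self.training:
            u = torch.rand_like(self.gate_logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)                  # logistic noise
            return torch.sigmoid((self.gate_logits + noise) / self.temperature)
        return (torch.sigmoid(self.gate_logits) > 0.5).float()       # deterministic final network

    def forward(self, x):
        return self.linear(x) * self.sample_gates()

    def keep_ratio(self):
        """Fraction of units that survive pruning at the current gate parameters."""
        return (torch.sigmoid(self.gate_logits) > 0.5).float().mean().item()
```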
A grand goal in deep learning research is to learn representations capable of generalizing across distribution shifts. Disentanglement is one promising direction aimed at aligning a model's representations with the underlying factors generating the data (e.g. color or background). Existing disentanglement methods, however, rely on an often unrealistic assumption: that factors are statistically independent. In reality, factors (like object color and shape) are correlated. To address this limitation, we propose a relaxed disentanglement criterion - the Hausdorff Factorized Support (HFS) criterion - that encourages a factorized support, rather than a factorial distribution, by minimizing a Hausdorff distance. This allows for arbitrary distributions of the factors over their support, including correlations between them. We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks, even under severe training correlations and correlation shifts, in parts with over 60% relative improvement over existing disentanglement methods. In addition, we find that leveraging HFS for representation learning can even facilitate transfer to downstream tasks such as classification under distribution shifts. We hope our original approach and positive empirical results inspire further progress on the open problem of robust generalization.
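A rough sketch of how a factorized-support penalty of this kind could be computed from latent samples: samples with factorized support are obtained by shuffling each latent dimension independently across the batch, and a Hausdorff distance between the two point sets is minimized. The shuffling construction and the plain symmetric Hausdorff distance are assumptions for illustration; the paper's exact estimator and dimension-pairing scheme may differ.

```python
import torch

def hausdorff(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Hausdorff distance between two point sets of shape (n, d) and (m, d)."""
    d = torch.cdist(a, b)                                       # pairwise distances
    return torch.max(d.min(dim=1).values.max(), d.min(dim=0).values.max())

def factorized_support_loss(z: torch.Tensor) -> torch.Tensor:
    """Penalize distance between latent samples and samples with factorized support."""
    n, k = z.shape
    # shuffle each latent dimension independently -> samples from the product of marginals
    z_fact = torch.stack([z[torch.randperm(n), j] for j in range(k)], dim=1)
    return hausdorff(z, z_fact)

# usage: add factorized_support_loss(encoder(x)) as a weighted regularizer
loss = factorized_support_loss(torch.randn(256, 8))
```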
Overfitting data is a well-known phenomenon in connection with generative models, which model certain data instances too closely (or exactly) and may therefore fail to predict future observations reliably. In practice, this behavior is controlled by a variety of, sometimes heuristic, regularization techniques, which are motivated by developing upper bounds on the generalization error. In this work, we study the generalization error of classifiers relying on stochastic encodings trained with the cross-entropy loss, which is often used in deep learning for classification problems. We derive bounds on the error, showing that there exists a regime in which it is bounded by the mutual information between the input features and the corresponding representations in the latent space, which are generated at random according to the encoding distribution. Our bounds provide an information-theoretic understanding of generalization in the so-called class of variational classifiers, which are regularized by a Kullback-Leibler (KL) divergence term. These results give theoretical grounds for the highly popular KL term in variational inference methods, which has already been recognized to act effectively as a regularization penalty. We further observe connections with well-studied notions such as variational autoencoders, information dropout, the information bottleneck, and Boltzmann machines. Finally, we perform numerical experiments on the MNIST and CIFAR datasets and show that the mutual information is indeed highly representative of the behavior of the generalization error.
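The variational-classifier objective that the KL-regularization discussion refers to can be written, in generic form, as an expected cross-entropy plus a KL term on the stochastic encoding; the weight $\beta$ and the prior $p(z)$ below are generic placeholders rather than the paper's exact bound.

```latex
\mathcal{L}(\theta,\phi)
  = \mathbb{E}_{(x,y)}\Big[
      \mathbb{E}_{z \sim q_\phi(z \mid x)}\big[-\log p_\theta(y \mid z)\big]
      + \beta\,\mathrm{KL}\big(q_\phi(z \mid x)\,\big\|\,p(z)\big)
    \Big]
```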
Multi-view learning based on the information bottleneck (IB) provides an information-theoretic principle for seeking the shared information contained in heterogeneous data descriptions. However, its great success generally hinges on estimating multivariate mutual information, which becomes intractable as the networks grow complex. Moreover, the representation trade-offs, i.e., the prediction-compression and sufficiency-consistency trade-offs, make it difficult for the IB to satisfy both requirements simultaneously. In this paper, we design several variational information bottlenecks that exploit two key characteristics (i.e., sufficiency and consistency) for multi-view representation learning. Specifically, we propose a multi-view variational distillation (MV$^2$D) strategy that provides a scalable, flexible, and analytical solution for fitting MI given arbitrary input viewpoints, without explicitly estimating it. Under rigorous theoretical guarantees, our approach enables the IB to grasp the intrinsic correlations between observations and semantic labels, naturally yielding predictive and compact representations. Likewise, our information-theoretic constraints can effectively neutralize the sensitivity to heterogeneous data by eliminating task-irrelevant and view-specific information, which prevents both trade-offs in multi-view settings. To verify our theoretically grounded strategy, we apply our approach to various benchmarks under three different applications. Extensive quantitative and qualitative experiments demonstrate the effectiveness of our approach over state-of-the-art methods.
Graph neural networks (GNNs) have shown promising results across a wide range of applications. Most empirical studies of GNNs directly take the observed graph as input, assuming that the observed structure perfectly depicts accurate and complete relations between nodes. However, graphs in the real world are inevitably noisy or incomplete, which can even degrade the quality of graph representations. In this work, we propose a novel variational information bottleneck guided graph structure learning framework, namely VIB-GSL, from the perspective of information theory. VIB-GSL advances the information bottleneck (IB) principle for graph structure learning, providing a more elegant and universal framework for mining underlying task-relevant relations. VIB-GSL learns an informative and compressive graph structure to distill the actionable information for specific downstream tasks. VIB-GSL derives a variational approximation for irregular graph data to form a tractable IB objective function, which facilitates training stability. Extensive experimental results demonstrate the superior effectiveness and robustness of VIB-GSL.
We present a framework for learning CapsNets with an information bottleneck constraint, which distills the information into a compact form and motivates learning interpretable, disentangled capsules. In our $\beta$-CapsNet framework, the hyperparameter $\beta$ is used to trade off disentanglement against other tasks, and variational inference is used to convert the information bottleneck term into a KL divergence that is approximated as a constraint on the capsules. For supervised learning, a class-independent mask vector is used to understand the types of variation synthesized irrespective of the image class; we carry out extensive quantitative and qualitative experiments by tuning the parameter $\beta$ to figure out the relationship between disentanglement, reconstruction, and classification performance. Furthermore, an unsupervised $\beta$-CapsNet and the corresponding dynamic routing algorithm are proposed to learn disentangled capsules in an unsupervised manner; extensive empirical evaluations show that our $\beta$-CapsNet achieves state-of-the-art disentanglement performance compared with CapsNet and various baselines on several complex datasets, in both supervised and unsupervised scenarios.
Learning effective visual representations that generalize well without human supervision is a fundamental problem for applying machine learning to a wide variety of tasks. Recently, two families of self-supervised methods, contrastive learning and latent bootstrapping, exemplified by SimCLR and BYOL respectively, have made significant progress. In this work, we hypothesize that adding explicit information compression to these algorithms yields better and more robust representations. We verify this by developing SimCLR and BYOL formulations compatible with the conditional entropy bottleneck (CEB) objective, allowing us to measure and control the amount of compression in the learned representations and observe their impact on downstream tasks. Furthermore, we explore the relationship between Lipschitz continuity and compression, showing a tractable lower bound on the Lipschitz constant of the encoders we learn. As Lipschitz continuity is closely related to robustness, this provides a new explanation for why compressed models are more robust. Our experiments confirm that adding compression to SimCLR and BYOL significantly improves linear evaluation accuracy and model robustness across a wide range of domain shifts. In particular, the compressed version of BYOL achieves 76.0% linear evaluation accuracy on ImageNet with a ResNet-50, and 78.8% with a ResNet-50 2x.
Transferring knowledge from a teacher neural network pretrained on the same or a similar task to a student neural network can significantly improve the performance of the student neural network. Existing knowledge transfer approaches match the activations or the corresponding handcrafted features of the teacher and the student networks. We propose an information-theoretic framework for knowledge transfer which formulates knowledge transfer as maximizing the mutual information between the teacher and the student networks. We compare our method with existing knowledge transfer methods on both knowledge distillation and transfer learning tasks and show that our method consistently outperforms existing methods. We further demonstrate the strength of our method on knowledge transfer across heterogeneous network architectures by transferring knowledge from a convolutional neural network (CNN) to a multi-layer perceptron (MLP) on CIFAR-10. The resulting MLP significantly outperforms state-of-the-art methods and it achieves similar performance to the CNN with a single convolutional layer.
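A simplified sketch of the variational MI-maximization idea for knowledge transfer: fit a Gaussian q(t|s) whose mean is predicted from the student features, and minimize its negative log-likelihood on the teacher features, which tightens a lower bound on I(teacher; student). The Gaussian form, the small MLP, and the per-dimension variance are modeling assumptions for illustration, not necessarily the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class VariationalTransferLoss(nn.Module):
    """Negative log-likelihood of teacher features under a Gaussian q(t | s)."""
    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        self.mean_net = nn.Sequential(nn.Linear(student_dim, teacher_dim), nn.ReLU(),
                                      nn.Linear(teacher_dim, teacher_dim))
        self.log_var = nn.Parameter(torch.zeros(teacher_dim))   # learned per-dimension variance

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        mu = self.mean_net(student_feat)
        var = self.log_var.exp()
        # Gaussian NLL up to constants; minimizing it maximizes the variational MI bound
        nll = 0.5 * (((teacher_feat - mu) ** 2) / var + self.log_var).sum(dim=-1)
        return nll.mean()   # add to the usual task loss with a weighting factor
```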