Many tasks in deep learning involve optimizing over the \emph{input} to a network to minimize or maximize some objective; examples include optimization over the latent space of a generative model to match a target image, or adversarially perturbing an input to worsen classifier performance. Performing such optimization, however, is traditionally quite costly, as it entails a complete forward and backward pass through the network for each gradient step. In a separate line of work, recent research has developed deep equilibrium (DEQ) models, a class of models that forgo traditional network depth and instead compute the output of a network by finding the fixed point of a single nonlinear layer. In this paper, we show that there is a natural synergy between these two settings. Although naively using DEQs for such optimization problems is expensive (owing to the time needed to compute a fixed point for each gradient step), we can exploit the fact that gradient-based optimization can \emph{itself} be cast as a fixed-point iteration to substantially improve the overall speed. That is, we \emph{simultaneously} solve for the DEQ fixed point \emph{and} optimize over the network input, all within a single "augmented" DEQ model that jointly encodes both the original network and the optimization process. Indeed, the procedure is fast enough that it allows us to efficiently \emph{train} DEQ models for tasks that traditionally rely on an "inner" optimization loop. We demonstrate this strategy on a variety of tasks, such as training generative models while optimizing over latent codes, training models for inverse problems like denoising, adversarial training, and gradient-based meta-learning.
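As a rough illustration of the augmented fixed point described above, the sketch below (with an assumed toy layer f and objective, not the authors' code) interleaves the DEQ update on the state z with a gradient step on the input x inside a single solver loop, rather than running a full equilibrium solve per input-gradient step.

```python
# Minimal sketch, assuming a toy contractive layer f and a toy objective.
import torch

def f(z, x):                      # hypothetical single DEQ layer
    return torch.tanh(0.5 * z + x)

def objective(z):                 # hypothetical objective on the equilibrium output
    target = torch.ones_like(z)
    return 0.5 * ((z - target) ** 2).sum()

def augmented_fixed_point(x0, z0, lr=0.1, iters=200):
    x, z = x0.clone(), z0.clone()
    for _ in range(iters):
        z = f(z, x)                                   # DEQ fixed-point update
        x = x.detach().requires_grad_(True)
        g, = torch.autograd.grad(objective(f(z.detach(), x)), x)
        x = (x - lr * g).detach()                     # gradient step on the input
    return x, z

x_opt, z_star = augmented_fixed_point(torch.zeros(4), torch.zeros(4))
```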
In recent years, deep learning has achieved remarkable empirical success in image reconstruction. This has sparked an ongoing quest for a precise characterization of the correctness and reliability of data-driven methods in critical use cases, for instance in medical imaging. Notwithstanding the excellent performance and efficacy of deep learning-based methods, concerns have been raised about their stability, or lack thereof, with serious practical implications. Significant advances have been made in recent years to unravel the inner workings of data-driven image-recovery methods, challenging their widely perceived black-box nature. In this article, we specify the relevant notions of convergence for data-driven image reconstruction, which form the basis for a survey of learned methods with mathematically rigorous reconstruction guarantees. One example that is highlighted is the role of ICNNs, which offer the possibility of combining the power of deep learning with classical convex regularization theory for devising methods that are provably convergent. This survey article aims to advance our understanding of data-driven image reconstruction methods, as well as their practice, by providing an accessible description of convergence concepts and by placing some existing empirical practices on a solid mathematical foundation.
A promising trend in deep learning replaces traditional feedforward networks with implicit networks. Unlike traditional networks, implicit networks solve a fixed-point equation to compute inferences. Solving for the fixed point varies in complexity, depending on the provided data and the error tolerance. Importantly, implicit networks can be trained with fixed memory costs, in stark contrast to feedforward networks, whose memory requirements scale linearly with depth. There is, however, no free lunch: backpropagating through implicit networks typically requires solving a costly Jacobian-based equation arising from the implicit function theorem. We propose Jacobian-Free Backpropagation (JFB), a fixed-memory approach that circumvents the need to solve Jacobian-based equations. JFB makes implicit networks faster to train and significantly easier to implement, without sacrificing test accuracy. Our experiments show that implicit networks trained with JFB are competitive with feedforward networks and existing implicit networks given the same number of parameters.
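The JFB idea can be illustrated with a short sketch (layer and sizes below are assumptions for illustration): solve the fixed point without building a graph, then attach one differentiable application of the layer, so the backward pass never touches the Jacobian-based linear system.

```python
# Minimal sketch of the Jacobian-free backpropagation pattern, assuming a toy layer.
import torch, torch.nn as nn

class JFBImplicitNet(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin_z = nn.Linear(dim, dim)
        self.lin_x = nn.Linear(dim, dim)

    def layer(self, z, x):
        return torch.tanh(self.lin_z(z) + self.lin_x(x))

    def forward(self, x, iters=30):
        z = torch.zeros_like(x)
        with torch.no_grad():                 # fixed-point solve, no graph stored
            for _ in range(iters):
                z = self.layer(z, x)
        return self.layer(z.detach(), x)      # one differentiable application (JFB)

model = JFBImplicitNet(8)
x = torch.randn(16, 8)
loss = model(x).pow(2).mean()
loss.backward()                               # constant-memory backward pass
```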
This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks. These layers encode constraints and complex dependencies between hidden states that traditional convolutional and fully connected layers often cannot capture. We explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers, both with respect to their inputs and with respect to the layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior-point method, and which provides backpropagation gradients at virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. In one notable example, the method learns to play mini-Sudoku (4x4) given only input and output games, with no a priori information about the rules of the game; this highlights the ability of OptNet to learn hard constraints better than other neural architectures.
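A heavily simplified sketch of an optimization layer is given below: it solves an equality-constrained QP through its KKT linear system and lets autograd differentiate the argmin with respect to the problem data. OptNet itself additionally handles inequality constraints with a batched primal-dual interior-point solver and implicit differentiation; the code here is only an assumed toy instance of the same principle.

```python
# Simplified, assumed sketch: min_z 0.5 z'Qz + q'z  s.t.  Az = b, via its KKT system.
import torch

def eq_qp_layer(Q, q, A, b):
    n, m = q.shape[0], b.shape[0]
    K = torch.zeros(n + m, n + m, dtype=Q.dtype)
    K[:n, :n] = Q
    K[:n, n:] = A.T
    K[n:, :n] = A
    rhs = torch.cat([-q, b])
    sol = torch.linalg.solve(K, rhs)      # differentiable linear solve
    return sol[:n]                        # optimal z*

n, m = 5, 2
L = torch.randn(n, n)
Q = (L @ L.T + torch.eye(n)).requires_grad_(True)   # positive definite
q = torch.randn(n, requires_grad=True)
A, b = torch.randn(m, n), torch.randn(m)
z_star = eq_qp_layer(Q, q, A, b)
z_star.sum().backward()                  # gradients of the argmin w.r.t. Q and q
```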
In this work, we provide an exact-likelihood alternative to the variational training of generative autoencoders. We show that VAE-style autoencoders can be constructed using invertible layers, which offer a tractable exact likelihood without requiring any regularization terms. This is achieved while leaving complete freedom in the choice of encoder, decoder, and prior architectures, making our approach a drop-in replacement for the training of existing VAEs and VAE-style models. We refer to the resulting models as Autoencoders within Flows (AEF), since the encoder, decoder, and prior are defined as individual layers of an overall invertible architecture. We show that the approach yields strikingly higher performance than architecturally equivalent VAEs in terms of log-likelihood, sample quality, and denoising performance. Broadly speaking, the main ambition of this work is to close the gap between the normalizing-flow and autoencoder literatures under the common framework of invertibility and exact maximum likelihood.
We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective "depth" of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up to 88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq.
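A minimal DEQ-style sketch follows (assumed layer and solver choices, simplified from the general recipe): the forward pass finds the equilibrium by simple iteration, and the backward pass solves the implicit-function-theorem system with a second fixed-point loop over vector-Jacobian products, so no intermediate activations are stored.

```python
# Minimal sketch, assuming a toy contractive layer; not the released implementation.
import torch, torch.nn as nn

class DEQ(nn.Module):
    def __init__(self, f, fwd_iters=50, bwd_iters=50):
        super().__init__()
        self.f, self.fwd_iters, self.bwd_iters = f, fwd_iters, bwd_iters

    def forward(self, x):
        with torch.no_grad():
            z = torch.zeros_like(x)
            for _ in range(self.fwd_iters):      # root finding by simple iteration
                z = self.f(z, x)
        z = self.f(z, x)                         # one attached application of the layer

        z0 = z.clone().detach().requires_grad_(True)
        f0 = self.f(z0, x)                       # separate graph copy used by the hook

        def backward_hook(grad):                 # iteratively solves g = grad + J^T g
            g = grad
            for _ in range(self.bwd_iters):
                g = torch.autograd.grad(f0, z0, g, retain_graph=True)[0] + grad
            return g

        z.register_hook(backward_hook)
        return z

f = lambda z, x: torch.tanh(0.6 * z + x)         # hypothetical contractive layer
x = torch.randn(4, requires_grad=True)
out = DEQ(f)(x)
out.sum().backward()                             # implicit gradient w.r.t. x
```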
This paper focuses on training implicit models of infinite layers. Specifically, previous works employ implicit differentiation and solve for the exact gradient in the backward pass. However, is it necessary to compute such exact but expensive gradients for training? In this work, we propose a novel gradient estimate for implicit models, named the phantom gradient, that 1) forgoes the costly computation of the exact gradient; and 2) provides an update direction that is empirically preferable for training implicit models. We theoretically analyze the condition under which an ascent direction of the loss landscape can be found, and provide two specific instantiations of the phantom gradient based on damped unrolling and the Neumann series. Experiments on large-scale tasks demonstrate that these lightweight phantom gradients significantly accelerate the backward pass in training implicit models, by roughly 1.7x, and even boost performance over approaches based on the exact gradient on ImageNet.
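The unrolling-based instantiation can be sketched as follows (assumed layer and hyperparameters): the fixed point is computed without a graph, and only a few damped unrolled steps are attached for backpropagation, giving a cheap approximate gradient in place of the exact implicit one.

```python
# Minimal sketch of an unrolling-style phantom gradient, under assumed settings.
import torch, torch.nn as nn

def phantom_output(f, x, solve_iters=50, K=3, damping=0.6):
    with torch.no_grad():
        z = torch.zeros_like(x)
        for _ in range(solve_iters):
            z = f(z, x)                           # fixed-point solve, no graph
    z = z.detach()
    for _ in range(K):                            # K differentiable damped steps
        z = (1 - damping) * z + damping * f(z, x)
    return z

lin = nn.Linear(8, 8)
f = lambda z, x: torch.tanh(0.9 * lin(z) + x)     # hypothetical implicit layer
x = torch.randn(4, 8)
loss = phantom_output(f, x).pow(2).mean()
loss.backward()                                   # gradients w.r.t. lin's parameters
```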
We propose a method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations on the training data. For previously unseen examples, the approach is guaranteed to detect all adversarial examples, though it may flag some non-adversarial examples as well. The basic idea is to consider a convex outer approximation of the set of activations reachable through a norm-bounded perturbation, and we develop a robust optimization procedure that minimizes the worst case loss over this outer region (via a linear program). Crucially, we show that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss. The end result is that by executing a few more forward and backward passes through a slightly modified version of the original network (though possibly with much larger batch sizes), we can learn a classifier that is provably robust to any norm-bounded adversarial attack. We illustrate the approach on a number of tasks to train classifiers with robust adversarial guarantees (e.g. for MNIST, we produce a convolutional classifier that provably has less than 5.8% test error for any adversarial attack with bounded ℓ∞ norm less than ε = 0.1), and code for all experiments is available at http://github.com/locuslab/convex_adversarial.
Recent work has demonstrated that deep neural networks are vulnerable to adversarial examples: inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
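The robust-optimization view translates into the familiar PGD adversarial training loop, sketched below with assumed hyperparameters: projected gradient ascent on the loss inside an ℓ∞ ball for the inner maximization, followed by a standard descent step on the perturbed batch.

```python
# Minimal PGD adversarial-training sketch; eps/alpha/steps are assumed values.
import torch, torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)   # ascent + project
        delta = (x + delta).clamp(0, 1) - x                      # keep valid pixel range
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()

def adv_train_step(model, optimizer, x, y):
    x_adv = pgd_attack(model, x, y)        # inner maximization
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()                        # outer minimization on perturbed inputs
    optimizer.step()
    return loss.item()
```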
Many machine learning methods operate by inverting a neural network at inference time, which has become a popular technique for solving inverse problems in computer vision, robotics, and graphics. However, these methods typically involve gradient descent through a highly non-convex loss landscape, resulting in an optimization process that is unstable and slow. We introduce a method that learns a loss landscape in which gradient descent is efficient, bringing massive improvements and speed-ups to the inversion process. We demonstrate this advantage of our method on a number of generative and discriminative tasks, including GAN inversion, adversarial defense, and 3D human pose reconstruction.
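For reference, the baseline that this line of work accelerates is plain gradient-descent inversion of a network, sketched below with assumed names and hyperparameters; the paper's contribution is to learn a loss landscape in which such descent becomes fast and stable.

```python
# Minimal sketch of network inversion by gradient descent on the latent code.
import torch

def invert(generator, target, latent_dim=128, steps=500, lr=0.05):
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (generator(z) - target).pow(2).mean()   # reconstruction objective
        loss.backward()                                # backprop through the generator
        opt.step()
    return z.detach()
```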
We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. Our empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.
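The mechanics can be sketched as follows (an assumed toy block, not the paper's architecture): spectral normalization keeps the residual branch contractive, so the block x -> x + g(x) can be inverted by the fixed-point iteration x <- y - g(x).

```python
# Minimal sketch of an invertible residual block, under assumed sizes and a toy MLP branch.
import torch, torch.nn as nn
from torch.nn.utils import spectral_norm

class InvertibleResBlock(nn.Module):
    def __init__(self, dim, coeff=0.9):
        super().__init__()
        self.coeff = coeff                                  # keeps Lipschitz constant < 1
        self.g = nn.Sequential(
            spectral_norm(nn.Linear(dim, dim)), nn.ELU(),
            spectral_norm(nn.Linear(dim, dim)),
        )

    def forward(self, x):
        return x + self.coeff * self.g(x)

    def inverse(self, y, iters=50):
        with torch.no_grad():
            x = y.clone()
            for _ in range(iters):
                x = y - self.coeff * self.g(x)              # converges since g is contractive
        return x

block = InvertibleResBlock(6)
y = block(torch.randn(3, 6))
x_rec = block.inverse(y)                                    # recovers the input up to tolerance
```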
Deep neural network approaches to inverse imaging problems have produced impressive results over the last few years. In this paper, we consider the use of generative models within inverse-problem methods. The regularizers considered penalize images that are far from the range of a generative model that has learned to produce images similar to a training dataset. We name this family \textit{generative regularizers}. The success of generative regularizers depends on the quality of the generative model, and so we propose a set of desired criteria for assessing generative models and guiding future research. In our numerical experiments, we evaluate three common generative models, autoencoders, variational autoencoders, and generative adversarial networks, against our desired criteria. We also test three different generative regularizers on the inverse problems of deblurring, deconvolution, and tomography. We show that restricting solutions of the inverse problem to lie exactly in the range of the generative model can give good results, but that allowing small deviations from the range of the generator yields more consistent results.
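A possible implementation of a generative regularizer that permits small deviations from the generator's range is sketched below (the forward operator A, generator G, and hyperparameters are assumptions): the image and a latent code are optimized jointly, penalizing both data misfit and distance of the image to the generator output.

```python
# Minimal sketch, assuming a linear forward operator A (matrix) and a generator G.
import torch

def solve_inverse_problem(A, y, G, latent_dim=64, lam=1.0, steps=1000, lr=1e-2):
    x = torch.zeros(A.shape[1], requires_grad=True)        # reconstructed image
    z = torch.zeros(latent_dim, requires_grad=True)        # latent code
    opt = torch.optim.Adam([x, z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        data_fit = 0.5 * (A @ x - y).pow(2).sum()           # forward-operator misfit
        reg = lam * (x - G(z)).pow(2).sum()                  # distance to the range of G
        (data_fit + reg).backward()
        opt.step()
    return x.detach()
```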
It is common practice in deep learning to represent a measurement of the world on a discrete grid, e.g. a 2D grid of pixels. However, the underlying signal represented by these measurements is often continuous, e.g. the scene depicted in an image. A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location. In this paper, we take this idea to its next level: what would it take to perform deep learning on these functions instead, treating them as data? In this context we refer to the data as functa, and propose a framework for deep learning on functa. This view presents a number of challenges around efficient conversion from data to functa, compact representation of functa, and effectively solving downstream tasks on functa. We outline a recipe to overcome these challenges and apply it to a wide range of data modalities including images, 3D shapes, neural radiance fields (NeRF) and data on manifolds. We demonstrate that this approach has various compelling properties across data modalities, in particular on the canonical tasks of generative modeling, data imputation, novel view synthesis and classification. Code: https://github.com/deepmind/functa
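The first step of such a pipeline, converting a gridded signal into a neural function, can be sketched as below (the MLP architecture and sizes are assumptions, not the paper's networks): a coordinate MLP is fit to map pixel locations to values, and the fitted network then stands in for the original array.

```python
# Minimal sketch of fitting an implicit neural representation to a 2D signal.
import torch, torch.nn as nn

def fit_functa(image, hidden=64, steps=2000, lr=1e-3):
    h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)    # (h*w, 2) input locations
    values = image.reshape(-1, 1)                             # (h*w, 1) target values
    net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                        nn.Linear(hidden, hidden), nn.ReLU(),
                        nn.Linear(hidden, 1))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (net(coords) - values).pow(2).mean()
        loss.backward()
        opt.step()
    return net                                                # the neural function for this image

functum = fit_functa(torch.rand(32, 32))
```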
Despite the efficiency and scalability of machine learning systems, recent studies have demonstrated that many classification methods, especially deep neural networks (DNNs), are vulnerable to adversarial examples; that is, examples carefully crafted to fool a well-trained classification model while remaining indistinguishable from natural data to humans. This makes it potentially unsafe to apply DNNs or related methods in security-critical areas. Since this issue was identified by Biggio et al. (2013) and Szegedy et al. (2014), much work has been done in this field, including the development of attack methods to generate adversarial examples and the construction of defense techniques to guard against them. This paper aims to introduce this topic and its latest developments to the statistical community, primarily focusing on the generation of, and protection against, adversarial examples. The computational code (in Python and R) used in the numerical experiments is publicly available for readers to explore the surveyed methods. The authors hope that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
A large class of non-smooth practical optimization problems can be written as the minimization of a sum of a smooth and a partly smooth function. We consider such structured problems that additionally depend on a parameter vector, and study the problem of differentiating their solution mapping with respect to the parameter, which has far-reaching applications in sensitivity analysis and parameter-learning problems. We show that, under partial smoothness and other mild assumptions, automatic differentiation (AD) of the sequence generated by proximal splitting algorithms converges to the derivative of the solution mapping. For a variant of automatic differentiation, which we call fixed-point automatic differentiation (FPAD), we remedy the memory-overhead problem of reverse-mode AD and, moreover, provide theoretically faster convergence. We numerically illustrate the convergence and convergence rates of AD and FPAD on lasso and group-lasso problems, and demonstrate the workings of FPAD on a prototypical practical image-denoising problem by learning the regularization term.
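A sketch of the plain reverse-mode setting analyzed above is given below (problem sizes are assumed; FPAD itself replaces this backpropagation through all iterations with a memory-free fixed-point recursion on the derivative): ISTA for the lasso is unrolled and the solution is differentiated with respect to the regularization parameter.

```python
# Minimal sketch: reverse-mode AD through unrolled ISTA for the lasso, w.r.t. lambda.
import torch

def soft_threshold(x, t):
    return torch.sign(x) * torch.clamp(torch.abs(x) - t, min=0.0)

def ista_lasso(A, y, lam, iters=300):
    step = 1.0 / torch.linalg.matrix_norm(A, ord=2) ** 2      # 1 / Lipschitz constant
    x = torch.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(iters):                                     # proximal-gradient steps
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

A = torch.randn(20, 10, dtype=torch.float64)
y = torch.randn(20, dtype=torch.float64)
lam = torch.tensor(0.1, dtype=torch.float64, requires_grad=True)
x_hat = ista_lasso(A, y, lam)
x_hat.sum().backward()          # derivative of the solution w.r.t. lambda, summed over coordinates
print(lam.grad)
```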
Modern neural networks excel at image classification, yet they remain vulnerable to common image corruptions such as blur, speckle noise, or fog. Recent methods that address this problem, such as AugMix and DeepAugment, introduce defenses that operate in expectation over a distribution of image corruptions. In contrast, the literature on $\ell_p$-norm bounded perturbations focuses on defenses against worst-case corruptions. In this work, we reconcile both approaches by proposing an adversarial augmentation technique that optimizes the parameters of image-to-image models to generate adversarially corrupted augmented images. We theoretically motivate our method and give sufficient conditions for the consistency of its idealized version, as well as that of DeepAugment. Our classifiers improve upon the state of the art on common image-corruption benchmarks conducted in expectation on CIFAR-10-C, and improve worst-case performance against $\ell_p$-norm bounded perturbations on both CIFAR-10 and ImageNet.
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples. We find, however, that typical adaptive evaluations are incomplete. We demonstrate that thirteen defenses recently published at ICLR, ICML and NeurIPS, illustrating a diverse set of defense strategies, can be circumvented despite attempting to perform evaluations using adaptive attacks. While prior evaluation papers focused mainly on the end result (showing that a defense was ineffective), this paper focuses on laying out the methodology and the approach necessary to perform an adaptive attack. Some of our attack strategies are generalizable, but no single strategy would have been sufficient for all defenses. This underlines our key message that adaptive attacks cannot be automated and always require careful and appropriate tuning to a given defense. We hope that these analyses will serve as guidance on how to properly perform adaptive attacks against defenses to adversarial examples, and thus will allow the community to make further progress in building more robust models.
Probabilistic generative models are attractive for scientific modeling because their inferred parameters can be used to generate hypotheses and design experiments. This requires that the learned model provides an accurate representation of the input data and yields a latent space that effectively predicts outcomes relevant to the scientific question. Supervised variational autoencoders (SVAEs) have previously been used for this purpose, where a carefully designed decoder can serve as an interpretable generative model while the supervised objective ensures a predictive latent representation. Unfortunately, the supervised objective forces the encoder to learn a biased approximation of the generative posterior distribution, which renders the generative parameters unreliable when used in scientific models. This problem has remained undetected because of the reconstruction loss commonly used to evaluate model performance. We address this previously unreported issue by developing a second-order supervision framework (SOS-VAE) that influences the decoder to induce a predictive latent representation. This ensures that the associated encoder retains a reliable generative interpretation. We extend this technique to allow the user to trade off some bias in the generative parameters for improved predictive performance, acting as an intermediate option between SVAEs and our new SOS-VAE. We also use this methodology to address the missing-data issues that often arise when combining recordings from multiple scientific experiments. We demonstrate the effectiveness of these developments using synthetic data and electrophysiological recordings, with an emphasis on how our learned representations can be used to design scientific experiments.
Generative models that use explicit density modeling (e.g., variational autoencoders, flow-based generative models) involve finding a mapping from a known distribution, e.g. a Gaussian, to the unknown input distribution. This often requires searching over a class of non-linear functions (e.g., those representable by a deep neural network). While effective in practice, the associated runtime/memory costs can increase rapidly, usually as a function of the performance desired in an application. We propose a substantially cheaper (and simpler) strategy for estimating this mapping, based on adapting known results on kernel transfer operators. We show that our formulation enables highly efficient distribution approximation and sampling, and offers surprisingly good empirical performance that compares favorably with powerful baselines, while providing significant runtime savings. We show that the algorithm also performs well in small-sample-size settings (in brain imaging).
Counterfactuals can explain the classification decisions of neural networks in a human-interpretable way. We propose a simple but effective method to generate such counterfactuals. More specifically, we perform a suitable diffeomorphic coordinate transformation and then perform gradient ascent in these coordinates to find counterfactuals that are classified with high confidence as a specified target class. We propose two methods that leverage generative models to construct such suitable coordinate systems, which are either exactly or approximately diffeomorphic. We analyze the generation process with Riemannian differential geometry and validate the quality of the generated counterfactuals using various qualitative and quantitative measures.
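A minimal version of such a search in generative-model coordinates might look like the sketch below (the encoder/decoder/classifier interfaces and a single-example input are assumptions): gradient ascent on the target-class probability of classifier(decoder(z)) until the counterfactual is classified as the target class with high confidence.

```python
# Minimal sketch of counterfactual search in latent coordinates, under assumed interfaces.
import torch, torch.nn.functional as F

def counterfactual(x, classifier, encoder, decoder, target, lr=0.05,
                   max_steps=500, conf=0.95):
    z = encoder(x).detach().requires_grad_(True)         # move to latent coordinates
    for _ in range(max_steps):
        probs = F.softmax(classifier(decoder(z)), dim=-1)
        if probs[..., target].item() >= conf:             # single example assumed
            break
        loss = -torch.log(probs[..., target] + 1e-8).sum()
        grad, = torch.autograd.grad(loss, z)
        z = (z - lr * grad).detach().requires_grad_(True) # ascent on target-class probability
    return decoder(z).detach()                             # the counterfactual image
```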