Practitioners prune neural networks for efficiency gains and generalization improvements, but few scrutinize the factors determining the prunability of a neural network: the maximum fraction of weights that pruning can remove without compromising the model's test accuracy. In this work, we study the properties of input data that may contribute to the prunability of a neural network. For high dimensional input data such as images, text, and audio, the manifold hypothesis suggests that these high dimensional inputs approximately lie on or near a significantly lower dimensional manifold. Prior work demonstrates that the underlying low dimensional structure of the input data may affect the sample efficiency of learning. In this paper, we investigate whether the low dimensional structure of the input data affects the prunability of a neural network.
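As a concrete illustration of the prunability notion defined above, the following sketch sweeps global magnitude-pruning sparsities and returns the largest one that keeps test accuracy within a tolerance of the dense model; `model`, `evaluate`, the pruning criterion, and the tolerance are illustrative assumptions, not the paper's protocol.

```python
# Sketch: estimating prunability as the largest global magnitude-pruning
# sparsity that keeps test accuracy within a tolerance of the dense model.
# For simplicity, all parameters are pruned (real setups usually skip biases).
import copy
import torch

def magnitude_prune(model, sparsity):
    """Zero out the globally smallest-magnitude fraction of weights."""
    pruned = copy.deepcopy(model)
    weights = torch.cat([p.abs().flatten() for p in pruned.parameters()])
    k = int(sparsity * weights.numel())
    if k == 0:
        return pruned
    threshold = torch.kthvalue(weights, k).values
    with torch.no_grad():
        for p in pruned.parameters():
            p.mul_((p.abs() > threshold).float())
    return pruned

def prunability(model, evaluate, tol=0.01, grid=None):
    """Largest sparsity whose accuracy stays within `tol` of the dense model."""
    grid = grid if grid is not None else [i / 20 for i in range(20)]
    dense_acc = evaluate(model)
    best = 0.0
    for s in grid:
        if evaluate(magnitude_prune(model, s)) >= dense_acc - tol:
            best = s
    return best
```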
Deep learning methods deliver state-of-the-art performance for many applications by relying on very large, over-parameterized neural networks. However, such networks have been shown to be quite brittle, to generalize poorly to new use cases, and to be difficult to deploy on resource-limited platforms. Model pruning, i.e., reducing the size of the network, is a widely adopted strategy that can lead to more robust and generalizable networks, usually orders of magnitude smaller with the same or even improved performance. While there are many heuristics for pruning models, our understanding of the pruning process remains limited. Empirical studies show that some heuristics improve performance while others can make models more brittle or bring other side effects. This work aims to shed light on how different pruning methods alter a network's internal feature representations and the corresponding impact on model performance. To offer a meaningful comparison and characterization of model feature spaces, we use three geometric metrics that are decomposed from the commonly adopted classification loss. Using these metrics, we design a visualization system to highlight the impact of pruning on model predictions as well as on the latent feature embedding. The proposed tool provides an environment for exploring and studying pruning methods and the differences between pruned and original models. By leveraging our visualization, ML researchers can not only identify samples that are sensitive to model pruning and data corruption, but also obtain insights into and explanations for how some pruned models achieve excellent robustness.
Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on ImageNet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static.
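The drop-and-grow topology update described above (prune by weight magnitude, regrow where infrequently computed dense gradients are largest) can be sketched roughly as follows; the single-layer view, the array representation, and the drop fraction are assumptions for illustration.

```python
# Sketch of one sparse-topology update in the spirit described above:
# drop the smallest-magnitude active weights and regrow the same number of
# connections where the (infrequently computed) dense gradient is largest.
# Assumes `weight`, `mask`, and `dense_grad` are same-shape numpy arrays;
# newly grown weights would be initialized to zero.
import numpy as np

def update_topology(weight, mask, dense_grad, drop_fraction=0.3):
    n_active = int(mask.sum())
    n_drop = int(drop_fraction * n_active)
    if n_drop == 0:
        return mask
    # Drop: smallest-magnitude currently-active weights.
    active_mag = np.where(mask.astype(bool), np.abs(weight), np.inf)
    drop_idx = np.argsort(active_mag, axis=None)[:n_drop]
    new_mask = mask.copy().ravel()
    new_mask[drop_idx] = 0
    # Grow: inactive positions with the largest dense-gradient magnitude.
    grad_mag = np.where(new_mask.astype(bool).reshape(mask.shape),
                        -np.inf, np.abs(dense_grad)).ravel()
    grow_idx = np.argsort(grad_mag)[-n_drop:]
    new_mask[grow_idx] = 1
    return new_mask.reshape(mask.shape)
```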
We study whether a neural network optimizes to the same, linearly connected minimum under different samples of SGD noise (e.g., random data order and augmentation). We find that standard vision models become stable to SGD noise in this way early in training. From then on, the outcome of optimization is determined to a linearly connected region. We use this technique to study iterative magnitude pruning (IMP), the procedure used by work on the lottery ticket hypothesis to identify subnetworks that could have trained in isolation to full accuracy. We find that these subnetworks only reach full accuracy when they are stable to SGD noise, which either occurs at initialization for small-scale settings (MNIST) or early in training for large-scale settings (ResNet-50 and Inception-v3 on ImageNet).
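A minimal sketch of the linear-connectivity (instability) check referred to above: evaluate test error along the straight line between two networks trained from the same state under different SGD noise and measure the error barrier; `evaluate_error` and the number of interpolation steps are hypothetical.

```python
# Sketch of the linear-connectivity check: interpolate the weights of two
# networks trained from the same state under different SGD noise and look
# for a rise in test error along the path.
import copy
import torch

def error_barrier(model_a, model_b, evaluate_error, steps=11):
    errors = []
    for i in range(steps):
        alpha = i / (steps - 1)
        blended = copy.deepcopy(model_a)
        with torch.no_grad():
            for p, pa, pb in zip(blended.parameters(),
                                 model_a.parameters(),
                                 model_b.parameters()):
                p.copy_((1 - alpha) * pa + alpha * pb)
        errors.append(evaluate_error(blended))
    endpoints = 0.5 * (errors[0] + errors[-1])
    return max(errors) - endpoints  # ~0 means linearly connected (stable)
```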
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the lottery ticket hypothesis: dense, randomly-initialized, feed-forward networks contain subnetworks (winning tickets) that, when trained in isolation, reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
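A rough sketch of the iterative magnitude pruning procedure commonly used to identify such winning tickets, assuming a hypothetical `train` routine and a fixed per-round pruning rate; details such as layer-wise versus global pruning and the rewind point vary across implementations.

```python
# Sketch of iterative magnitude pruning (IMP) for finding winning tickets:
# repeatedly train, prune the smallest surviving weights, and rewind the
# survivors to their initial values. All parameters are pruned for simplicity.
import copy
import torch

def imp(model, train, rounds=5, prune_per_round=0.2):
    init_state = copy.deepcopy(model.state_dict())
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train(model, masks)                      # masks applied during training
        for name, p in model.named_parameters():
            alive = p[masks[name].bool()].abs()
            k = int(prune_per_round * alive.numel())
            if k > 0:
                thresh = torch.kthvalue(alive, k).values
                masks[name] = masks[name] * (p.abs() > thresh).float()
        with torch.no_grad():                    # rewind survivors to init
            for name, p in model.named_parameters():
                p.copy_(init_state[name] * masks[name])
    return masks
```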
It is generally believed that pruning a network not only reduces the computational cost of a deep network but also prevents overfitting by reducing model capacity. However, our work surprisingly finds that network pruning can sometimes even aggravate overfitting. We report an unexpected sparse double descent phenomenon: as model sparsity increases via network pruning, test performance first gets worse (due to overfitting), then gets better (as overfitting is relieved), and finally gets worse again (due to forgetting useful information). While recent studies have focused on model over-parameterization, they fail to recognize that sparsity can also cause double descent. In this paper, we make three main contributions. First, we report the novel sparse double descent phenomenon through extensive experiments. Second, we propose a novel learning-distance interpretation of this phenomenon: the $\ell_2$ learning distance of sparse models (from initialization to final parameters) may correlate well with the sparse double descent curve and reflect generalization better than minima flatness. Third, in the context of sparse double descent, the winning ticket of the lottery ticket hypothesis surprisingly does not always win.
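A minimal sketch of the $\ell_2$ learning distance mentioned above, computed over the surviving (unpruned) weights; the restriction to masked entries is an assumption for illustration.

```python
# Sketch: l2 learning distance between initial and final parameters,
# restricted to the unpruned (mask == 1) weights of a sparse model.
import numpy as np

def learning_distance(init_state, final_state, masks):
    sq = 0.0
    for name, mask in masks.items():
        diff = (np.asarray(final_state[name]) - np.asarray(init_state[name])) * mask
        sq += float((diff ** 2).sum())
    return np.sqrt(sq)
```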
The rectified linear unit (ReLU) is a highly successful activation function in neural networks as it allows networks to easily obtain sparse representations, which reduces overfitting in overparameterized networks. However, in network pruning, we find that the sparsity introduced by ReLU, which we quantify by a term called dynamic dead neuron rate (DNR), is not beneficial for the pruned network. Interestingly, the more the network is pruned, the smaller the dynamic DNR becomes during optimization. This motivates us to propose a method to explicitly reduce the dynamic DNR for the pruned network, i.e., de-sparsify the network. We refer to our method as Activating-while-Pruning (AP). We note that AP does not function as a stand-alone method, as it does not evaluate the importance of weights. Instead, it works in tandem with existing pruning methods and aims to improve their performance by selective activation of nodes to reduce the dynamic DNR. We conduct extensive experiments using popular networks (e.g., ResNet, VGG) via two classical and three state-of-the-art pruning methods. The experimental results on public datasets (e.g., CIFAR-10/100) suggest that AP works well with existing pruning methods and improves the performance by 3% - 4%. For larger scale datasets (e.g., ImageNet) and state-of-the-art networks (e.g., vision transformer), we observe an improvement of 2% - 3% with AP as opposed to without. Lastly, we conduct an ablation study to examine the effectiveness of the components comprising AP.
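One simple way to measure a dynamic dead neuron rate of the kind described above is to record, for a given batch, the fraction of ReLU units that output zero; the exact definition used in the paper may differ, so this is purely illustrative.

```python
# Sketch of a dynamic dead neuron rate (DNR): the fraction of ReLU units
# that output zero for a given batch, averaged across layers and units.
import torch

def dynamic_dnr(hidden_pre_activations):
    """hidden_pre_activations: list of (batch, units) pre-ReLU tensors."""
    dead = []
    for z in hidden_pre_activations:
        dead.append((torch.relu(z) == 0).float().mean())
    return torch.stack(dead).mean().item()
```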
Improving the accuracy of deep neural networks (DNNs) on out-of-distribution (OOD) data is critical for the acceptance of deep learning (DL) in real-world applications. It has been observed that accuracy on in-distribution (ID) versus OOD data follows a linear trend, and that models outperforming this baseline are exceedingly rare (and are termed "effectively robust"). Recently, several promising approaches have been developed to improve OOD robustness: model pruning, data augmentation, and ensembling or zero-shot evaluation of large pretrained models. However, there is still no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness. We approach this question with a comprehensive empirical study of diverse approaches that are known to influence OOD robustness, across a broad range of natural and synthetic distribution shifts of CIFAR-10 and ImageNet. In particular, we view the "effective robustness puzzle" through a Fourier lens and ask how the spectral properties of models and OOD data affect the corresponding effective robustness. We find that this Fourier lens offers some insight into why certain robust models, particularly those of the CLIP family, achieve robustness. However, our analysis also makes clear that no known metric consistently provides the best (or even a strong) explanation of OOD robustness. Thus, to aid future research into the OOD puzzle, we address the gap in publicly available models with effective robustness by introducing a set of pretrained models with varying levels of OOD robustness.
Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin, 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.
A long history of work shows that neural networks struggle to extrapolate beyond their training range. A recent study by Balestriero et al. (2021) challenges this view: defining interpolation as the state of belonging to the convex hull of the training set, they show that the test set, whether in input space or neural space, cannot lie for the most part within this convex hull, owing to the high dimensionality of the data, invoking the well-known curse of dimensionality. Neural networks are then assumed to necessarily operate in an extrapolative regime. Here, we study the neural activity of the last hidden layer of typical neural networks. Using an autoencoder to uncover the intrinsic space underlying the neural activity, we show that this space is in fact low-dimensional, and that the better the model, the lower the dimensionality of this intrinsic space. In this space, most samples of the test set actually lie within the convex hull of the training set: under the convex hull definition, the models therefore operate in an interpolative regime. Moreover, we show that belonging to the convex hull does not appear to be the relevant criterion; indeed, measures of proximity to the training set correlate better with performance accuracy. Typical neural networks thus do appear to operate in an interpolative regime, and good generalization performance is linked to a network's ability to operate well in such a regime.
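The convex-hull membership test and a simple proximity measure discussed above can be sketched as follows for a low-dimensional latent space; the linear-programming formulation and the k-nearest-neighbour distance are standard choices, not necessarily the paper's exact ones.

```python
# Sketch: a test point x lies in the convex hull of training points X (rows)
# iff there exist non-negative weights summing to 1 that reproduce x.
# Solved as an LP feasibility problem; intended for a low-dimensional space.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, X):
    n = X.shape[0]
    A_eq = np.vstack([X.T, np.ones((1, n))])
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success

def mean_distance_to_train(x, X, k=10):
    """A simple proximity measure: mean distance to the k nearest train points."""
    d = np.sort(np.linalg.norm(X - x, axis=1))[:k]
    return d.mean()
```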
Deep neural networks can nowadays fit very complex functions, but dense models are becoming prohibitively expensive for inference. To mitigate this, one promising direction is networks that activate a sparse subgraph of the full network. The subgraph is chosen by a data-dependent routing function that assigns a fixed mapping from inputs to subnetworks (e.g., the Mixture of Experts (MoE) in the Switch Transformer). However, prior work is largely empirical, and while existing routing functions work well in practice, they do not lead to theoretical guarantees on approximation ability. We aim to provide a theoretical explanation for the power of sparse networks. As our first contribution, we present a formal model of data-dependent sparse networks that captures salient aspects of popular architectures. We then introduce a routing function based on locality-sensitive hashing (LSH) that enables us to reason about how sparse networks approximate target functions. After representing LSH-based sparse networks in our model, we prove that sparse networks can match the approximation power of dense networks on Lipschitz functions. Applying LSH to the input vectors means that the experts interpolate the target function on different subregions of the input space. To support our theory, we define various datasets based on Lipschitz target functions and show that sparse networks offer a favorable trade-off between the number of active units and approximation quality.
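A toy sketch of an LSH-based routing function of the kind analysed above: random hyperplanes hash an input to a bucket and each bucket is assigned to one expert, so nearby inputs share an expert; the hash family and the bucket-to-expert assignment are illustrative assumptions.

```python
# Sketch of locality-sensitive-hashing (LSH) routing: sign patterns under
# random hyperplanes map an input to a bucket, and each bucket is served by
# one expert, so nearby inputs tend to be routed to the same expert.
import numpy as np

class LSHRouter:
    def __init__(self, dim, n_bits, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        self.hyperplanes = rng.standard_normal((n_bits, dim))
        self.n_experts = n_experts

    def route(self, x):
        bits = (self.hyperplanes @ x > 0).astype(int)
        bucket = int("".join(map(str, bits)), 2)
        return bucket % self.n_experts  # index of the expert handling x
```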
Deep learning has had tremendous success at learning low-dimensional representations of high-dimensional data. This success would be impossible if there were no hidden low-dimensional structure in the data of interest; its existence is posited by the manifold hypothesis, which states that the data lies on an unknown manifold of low intrinsic dimension. In this paper, we argue that this hypothesis does not properly capture the low-dimensional structure typically present in data. Assuming that the data lies on a single manifold implies that the intrinsic dimension is the same across the entire data space and does not allow subregions of this space to have a different number of factors of variation. To address this shortcoming, we propose the union of manifolds hypothesis, which accommodates the existence of non-constant intrinsic dimension. We empirically verify this hypothesis on commonly used image datasets and find that intrinsic dimension should indeed be allowed to vary. We also show that classes with higher intrinsic dimension are harder to classify, and how this insight can be used to improve classification accuracy. We then turn our attention to the implications of this hypothesis in the context of deep generative models (DGMs). Most current DGMs struggle to model datasets with several connected components and/or varying intrinsic dimension. To address these shortcomings, we propose clustered DGMs, which first cluster the data and then train a DGM on each cluster. We show that clustered DGMs can model multiple connected components with different intrinsic dimensions and empirically outperform their non-clustered counterparts without increased computational requirements.
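The cluster-then-train recipe behind clustered generative models can be sketched as below; a Gaussian mixture stands in for the per-cluster deep generative model purely for illustration, and the clustering method and component counts are assumptions.

```python
# Sketch of the clustered generative-model recipe: cluster the data, fit one
# generative model per cluster, and sample by choosing a cluster in
# proportion to its size.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def fit_clustered_generator(X, n_clusters=5, seed=0):
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(X)
    models, weights = [], []
    for c in range(n_clusters):
        Xc = X[labels == c]
        models.append(GaussianMixture(n_components=2, random_state=seed).fit(Xc))
        weights.append(len(Xc) / len(X))
    return models, np.array(weights)

def sample(models, weights, n, seed=0):
    rng = np.random.default_rng(seed)
    counts = rng.multinomial(n, weights)
    return np.vstack([m.sample(k)[0] for m, k in zip(models, counts) if k > 0])
```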
Deep neural networks can approximate functions on different types of data, from images to graphs, with varied underlying structure. This underlying structure can be viewed as the geometry of the data manifold. By extending recent advances in the theoretical understanding of neural networks, we study how a randomly initialized neural network with piece-wise linear activation splits the data manifold into regions where the neural network behaves as a linear function. We derive bounds on the density of the boundaries of linear regions and on the distance to these boundaries on the data manifold. This leads to insights into the expressivity of randomly initialized deep neural networks on non-Euclidean data sets. We empirically corroborate our theoretical results using a toy supervised learning problem. Our experiments demonstrate that the number of linear regions varies across manifolds and that the results hold under changing neural network architectures. We further demonstrate how the complexity of linear regions is different on the low dimensional manifold of images as compared to the Euclidean space, using the MetFaces dataset.
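One empirical way to probe the linear regions discussed above is to walk along a path (for example, on the data manifold) and count changes in the ReLU activation pattern, since the pattern fixes the local linear function; the toy network and straight-line path below are illustrative assumptions.

```python
# Sketch: count linear regions crossed along a path by tracking how often
# the ReLU activation pattern of a piecewise-linear network changes.
import torch
import torch.nn as nn

def activation_pattern(layers, x):
    pattern = []
    h = x
    for layer in layers:
        z = layer(h)
        pattern.append((z > 0).flatten())
        h = torch.relu(z)
    return torch.cat(pattern)

def count_regions_on_path(layers, start, end, steps=200):
    regions, last = 1, None
    for i in range(steps + 1):
        x = start + (end - start) * i / steps
        pat = activation_pattern(layers, x)
        if last is not None and not torch.equal(pat, last):
            regions += 1
        last = pat
    return regions

layers = [nn.Linear(2, 16), nn.Linear(16, 16)]   # toy randomly initialized net
print(count_regions_on_path(layers, torch.zeros(2), torch.ones(2)))
```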
The Lottery Ticket Hypothesis (LTH) states that for a reasonably sized neural network, a sub-network within it performs no worse than the dense counterpart when trained from the same initialization. This work investigates the relationship between model size and the ease of finding these sparse sub-networks. We show through experiments that, surprisingly, under a finite budget, smaller models benefit more from Ticket Search (TS).
Transfer learning is a classic paradigm in which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" specialized datasets. Generally, it is understood that models that are more accurate on the "upstream" dataset will provide better transfer accuracy "downstream". In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset that have been pruned, that is, compressed by sparsifying their connections. Specifically, we consider transfer using unstructured pruned models obtained by applying several state-of-the-art pruning methods, including magnitude-based, second-order, regrowth, and regularization approaches, in the context of twelve standard transfer tasks. In brief, our study shows that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities, and, while doing so, can lead to significant inference and even training speedups. At the same time, we observe and analyze significant differences in the behavior of the different pruning methods.
Pruning, the task of sparsifying deep neural networks, has recently received increasing attention. While state-of-the-art pruning methods extract highly sparse models, they neglect two main challenges: (1) the process of finding these sparse models is often very expensive; (2) unstructured pruning does not provide benefits in terms of GPU memory, training time, or carbon emissions. We propose Early Compression via Gradient Flow Preservation (EarlyCroP), which efficiently extracts state-of-the-art sparse models early in training, addressing challenge (1), and can be applied in a structured manner, addressing challenge (2). This enables us to train sparse networks on commodity GPUs whose dense versions would be too large, saving costs and reducing hardware requirements. We empirically show that EarlyCroP outperforms a rich set of baselines across many tasks (including classification and regression) and domains (including computer vision, natural language processing, and reinforcement learning). EarlyCroP achieves accuracy comparable to dense training while outperforming pruning baselines.
In this work, we use variational inference to quantify the degree of uncertainty in deep learning model predictions for radio galaxy classification. We show that the level of model posterior variance for individual test samples correlates with human uncertainty when labelling radio galaxies. We explore model performance and uncertainty calibration for a variety of different weight priors and show that a sparse prior produces better-calibrated uncertainty estimates. Using the posterior distributions of individual weights, we show that we can prune 30% of the fully-connected layer weights without significant loss of performance by removing the weights with the lowest signal-to-noise ratio (SNR). We demonstrate that a larger degree of pruning can be achieved using a Fisher-information-based ranking, but we note that both pruning methods affect the uncertainty calibration for Fanaroff-Riley type I and type II radio galaxies. Finally, we show that, like other work in this field, we experience a cold posterior effect, whereby the posterior must be down-weighted to achieve good predictive performance. We examine whether adjusting the cost function to account for model misspecification can compensate for this effect, but find that it does not make a significant difference. We also investigate the effect of principled data augmentation and find that it improves upon the baseline but likewise does not compensate for the observed effect. We interpret this as the cold posterior effect arising from the overly effective curation of our training sample leading to likelihood misspecification, and raise this as a potential issue for future radio galaxy classification.
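A minimal sketch of the signal-to-noise-ratio pruning step described above, given per-weight posterior means and standard deviations from variational inference; the prune fraction and tie handling are illustrative.

```python
# Sketch of SNR-based pruning for a Bayesian layer: remove the fraction of
# weights with the lowest |posterior mean| / posterior std.
import numpy as np

def snr_prune_mask(mu, sigma, prune_fraction=0.3):
    snr = np.abs(mu) / (sigma + 1e-12)
    k = int(prune_fraction * snr.size)
    threshold = np.sort(snr, axis=None)[k] if k > 0 else -np.inf
    return snr >= threshold    # boolean mask of weights to keep
```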
The success of deep learning has been attributed to the training of large, over-parameterized models on massive amounts of data. As this trend continues, model training has become prohibitively costly, requiring access to powerful computing systems to train state-of-the-art networks. A large body of research has been devoted to addressing the cost per iteration of training through various model compression techniques such as pruning and quantization. Less effort has been spent on targeting the number of iterations. Previous work, such as forgetting scores and GraNd/EL2N scores, addresses this problem by identifying important samples within the full dataset and pruning the remaining samples, thereby reducing the iterations per epoch. Although these methods reduce training time, they use expensive static scoring algorithms prior to training, and when the scoring mechanism is accounted for, the total run time is often increased. In this work, we address this shortcoming with dynamic data pruning algorithms. Surprisingly, we find that uniform random dynamic pruning can outperform the existing approaches at aggressive pruning rates. We attribute this to the existence of "sometimes" samples: points that are important to the learned decision boundary only during some of the training time. To better exploit the subtlety of sometimes samples, we propose two algorithms, based on reinforcement learning techniques, that dynamically prune samples and achieve higher accuracy than the random dynamic method. We test all our methods against a full-dataset baseline and prior work on CIFAR-10 and CIFAR-100, and we can reduce training time by up to 2x without significant performance loss. Our results suggest that data pruning should be understood as a dynamic process closely tied to a model's training trajectory, rather than a static step based only on the dataset.
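The uniform random dynamic pruning baseline highlighted above amounts to re-drawing the training subset every epoch rather than scoring the dataset once before training; `train_one_epoch` and the keep fraction below are hypothetical placeholders.

```python
# Sketch of uniform random dynamic data pruning: sample a fresh random
# subset of training indices at the start of every epoch.
import numpy as np

def train_with_dynamic_pruning(n_samples, train_one_epoch, epochs=100,
                               keep_fraction=0.5, seed=0):
    rng = np.random.default_rng(seed)
    for epoch in range(epochs):
        subset = rng.choice(n_samples, size=int(keep_fraction * n_samples),
                            replace=False)
        train_one_epoch(subset, epoch)
```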
Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.
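A sketch of a connection-sensitivity saliency of the kind used for pruning at initialization: score each connection by |weight x gradient| on a single mini-batch before training and keep the top fraction; the global (rather than layer-wise) selection and the lack of saliency normalization are simplifying assumptions.

```python
# Sketch of connection-sensitivity scoring at initialization: one mini-batch,
# one backward pass, score |weight * gradient|, keep the top-k connections.
import torch

def connection_sensitivity_masks(model, loss_fn, batch, keep_fraction=0.1):
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    scores = [(p * g).abs() for p, g in zip(model.parameters(), grads)]
    flat = torch.cat([s.flatten() for s in scores])
    k = max(1, int(keep_fraction * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return [(s >= threshold).float() for s in scores]
```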
Pruning refers to the elimination of trivial weights from neural networks. The sub-networks within an overparameterized model produced after pruning are often called lottery tickets. This research aims to generate winning lottery tickets from a set of lottery tickets that can achieve accuracy similar to the original unpruned network. We introduce a novel winning ticket called Cyclic Overlapping Lottery Ticket (COLT) by data splitting and cyclic retraining of the pruned network from scratch. We apply a cyclic pruning algorithm that keeps only the overlapping weights of different pruned models trained on different data segments. Our results demonstrate that COLT can achieve accuracies similar to those of the unpruned model while maintaining high sparsities. We show that the accuracy of COLT is on par with the winning tickets of the Lottery Ticket Hypothesis (LTH) and, at times, is better. Moreover, COLTs can be generated using fewer iterations than tickets generated by the popular Iterative Magnitude Pruning (IMP) method. In addition, we observe that COLTs generated on large datasets can be transferred to small ones without compromising performance, demonstrating their generalizing capability. We conduct all our experiments on the CIFAR-10, CIFAR-100 & TinyImageNet datasets and report performance superior to state-of-the-art methods.
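The overlap step at the core of the cyclic ticket construction described above can be sketched as follows: magnitude-prune models trained on different data splits and keep only the connections that survive in every one of them; the per-split sparsity level and numpy representation are illustrative assumptions.

```python
# Sketch: intersect the magnitude-pruning masks of models trained on
# different data splits, keeping only the overlapping surviving weights.
import numpy as np

def overlap_mask(weight_sets, sparsity=0.8):
    masks = []
    for weights in weight_sets:               # one trained model per data split
        threshold = np.quantile(np.abs(weights), sparsity)
        masks.append(np.abs(weights) >= threshold)
    keep = masks[0]
    for m in masks[1:]:
        keep = keep & m                        # retain overlapping weights only
    return keep
```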