While deeper convolutional networks are needed to achieve maximum accuracy in visual perception tasks, for many inputs shallower networks are sufficient. We exploit this observation by learning to skip convolutional layers on a per-input basis. We introduce SkipNet, a modified residual network that uses a gating network to selectively skip convolutional blocks based on the activations of the previous layer. We formulate the dynamic skipping problem in the context of sequential decision making and propose a hybrid learning algorithm that combines supervised learning and reinforcement learning to address the challenges of non-differentiable skipping decisions. We show SkipNet reduces computation by 30-90% while preserving the accuracy of the original model on four benchmark datasets, and outperforms state-of-the-art dynamic networks and static compression methods. We also qualitatively evaluate the gating policy to reveal a relationship between image scale and saliency and the number of layers skipped.
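A minimal PyTorch-style sketch of the per-input gating idea (module and variable names are illustrative; the paper trains the hard skip decisions with its hybrid supervised/reinforcement-learning procedure, whereas this sketch substitutes a simple straight-through relaxation):

```python
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """Residual block whose convolutional branch can be skipped per input."""
    def __init__(self, block: nn.Module, channels: int):
        super().__init__()
        self.block = block  # the convolutional residual branch
        # Lightweight gate: pooled features -> probability of executing the block.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        p = self.gate(x)                      # (N, 1): soft execute probability
        hard = (p > 0.5).float()              # discrete skip decision
        g = hard + p - p.detach()             # straight-through relaxation
        # Note: a real implementation short-circuits self.block(x) when g == 0.
        return g.view(-1, 1, 1, 1) * self.block(x) + x
```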
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections, one between each layer and its subsequent layer, our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
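The dense connectivity pattern is compact to express in code; a simplified sketch (real DenseNets add bottleneck and transition layers, omitted here):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps."""
    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # reuse all earlier maps
            features.append(out)
        return torch.cat(features, dim=1)
```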
While machine learning is traditionally a resource-intensive task, embedded systems, autonomous navigation, and the vision of the Internet of Things fuel the interest in resource-efficient approaches. These approaches aim for a carefully chosen trade-off between performance and resource consumption in terms of computation and energy. The development of such approaches is among the major challenges in current machine learning research and key to ensuring a smooth transition of machine learning technology from a scientific environment with virtually unlimited computing resources into everyday applications. In this article, we provide an overview of the current state of the art of machine learning techniques facilitating these real-world requirements. In particular, we focus on deep neural networks (DNNs), the predominant machine learning models of the past decade. We give a comprehensive overview of the vast literature, which can be mainly split into three non-mutually exclusive categories: (i) quantized neural networks, (ii) network pruning, and (iii) structural efficiency. These techniques can be applied during training or as post-processing, and they are widely used to reduce the computational demands in terms of memory footprint, inference speed, and energy efficiency. We also briefly discuss different concepts of embedded hardware for DNNs and their compatibility with machine learning techniques, as well as their potential for energy and latency reduction. We substantiate our discussion with experiments on well-known benchmark datasets using compression techniques (quantization, pruning) for a set of resource-constrained embedded systems, such as CPUs, GPUs and FPGAs. The obtained results highlight the difficulty of finding good trade-offs between resource efficiency and predictive performance.
Modern convolutional neural networks apply the same operations to every pixel in an image. However, not all image regions are equally important. To address this inefficiency, we propose a method for dynamically applying convolutions conditioned on the input image. We introduce a residual block in which a small gating branch learns which spatial positions should be evaluated. These discrete gating decisions are trained end-to-end using the Gumbel-Softmax trick, in combination with a sparsity criterion. Our experiments on CIFAR, ImageNet and MPII show that our method has a better focus on regions of interest and achieves better accuracy than existing methods, at a lower computational complexity. Moreover, we provide an efficient CUDA implementation of our dynamic convolutions using a gather-scatter approach, achieving a significant improvement in inference speed with MobileNetV2 residual blocks. On human pose estimation, a task that is inherently spatially sparse, the processing speed is increased by 60% with no loss in accuracy.
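A rough sketch of the spatially gated residual block (hypothetical names; it computes densely for clarity, whereas the paper's gather-scatter CUDA kernels evaluate only the active positions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatiallyGatedResidual(nn.Module):
    """Residual block executed only at spatial positions chosen by a learned mask."""
    def __init__(self, block: nn.Module, channels: int):
        super().__init__()
        self.block = block
        # Small gating branch: per-pixel logits for {execute, skip}.
        self.gate = nn.Conv2d(channels, 2, kernel_size=1)

    def forward(self, x):
        logits = self.gate(x).permute(0, 2, 3, 1)             # (N, H, W, 2)
        # Hard one-hot samples with differentiable surrogate gradients.
        mask = F.gumbel_softmax(logits, tau=1.0, hard=True)[..., 0]
        mask = mask.unsqueeze(1)                              # (N, 1, H, W)
        # Dense compute for clarity; a sparsity loss on mask.mean() can be
        # added to the training objective to encourage skipping.
        return x + mask * self.block(x)
```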
Dynamic neural networks are an emerging research topic in deep learning. Compared to static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, adaptiveness, etc. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) instance-wise dynamic models that process each instance with data-dependent architectures or parameters; 2) spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) temporal-wise dynamic models that perform adaptive inference along the temporal dimension for sequential data such as videos and texts. The important research problems of dynamic networks, e.g., architecture design, decision-making schemes, optimization techniques and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting future research directions.
Mixed-precision deep neural networks achieve the energy efficiency and throughput needed for hardware deployment, particularly when resources are limited, without sacrificing accuracy. However, the optimal per-layer bit precision that preserves accuracy is not easy to find, especially given the abundance of models, datasets, and quantization techniques that creates an enormous search space. To tackle this difficulty, a body of literature has emerged recently, and several frameworks achieving promising accuracy results have been proposed. In this article, we first summarize the quantization techniques commonly used in the literature. We then present a thorough survey of mixed-precision frameworks, categorized according to their optimization techniques, such as reinforcement learning, and their quantization techniques, such as deterministic rounding. Furthermore, the advantages and shortcomings of each framework are discussed and presented side by side. We finally give guidelines for future mixed-precision frameworks.
The problem of reducing the processing time of large deep learning models is a fundamental challenge in many real-world applications. Early-exit methods strive towards this goal by attaching additional internal classifiers (ICs) to intermediate layers of a neural network. ICs can quickly return predictions for easy examples and, as a result, reduce the average inference time of the whole model. However, if a particular IC does not decide to return an answer early, its predictions are discarded, with its computations effectively being wasted. To solve this issue, we introduce Zero Time Waste (ZTW), a novel approach in which each IC reuses predictions returned by its predecessors by (1) adding direct connections between ICs and (2) combining previous outputs in an ensemble-like manner. We conduct extensive experiments across various datasets and architectures to demonstrate that ZTW achieves a significantly better accuracy vs. inference time trade-off than other recently proposed early-exit methods.
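A simplified sketch of the cascading-ensemble idea (hypothetical names and a plain weighted sum; the actual ZTW method learns the cascade connections and ensemble combination):

```python
import torch
import torch.nn.functional as F

def ztw_style_inference(stages, heads, combine_weights, x, threshold=0.9):
    """Run stages in order; each internal classifier (IC) contributes to an
    ensemble of all previous IC outputs and may trigger an early exit."""
    ensemble_logits = 0.0
    for stage, head, w in zip(stages, heads, combine_weights):
        x = stage(x)
        ensemble_logits = ensemble_logits + w * head(x)  # reuse predecessors
        probs = F.softmax(ensemble_logits, dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:        # assumes batch size 1
            return pred, ensemble_logits    # exit early: confident enough
    return pred, ensemble_logits            # fall through to the final exit
```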
Deep convolutional neural networks (CNNs) are often of sophisticated design, with numerous learnable parameters, for accuracy reasons. To alleviate the expensive cost of deploying them on mobile devices, recent works have made huge efforts at mining the redundancy in pre-defined architectures. However, the redundancy in the input resolution of modern CNNs has not been fully investigated, i.e., the resolution of input images is fixed. In this paper, we observe that the smallest resolution for accurately predicting a given image varies from image to image using the same neural network. To this end, we propose a novel dynamic resolution network (DRNet), in which the input resolution is determined dynamically for each input sample. A resolution predictor with negligible computational cost is explored and optimized jointly with the desired network. Specifically, the predictor learns the smallest resolution that can retain, and even exceed, the original recognition accuracy for each image. During inference, each input image is resized to its predicted resolution to minimize the overall computational burden. We then conduct extensive experiments on several benchmark networks and datasets. The results show that our DRNet can be embedded into any off-the-shelf network architecture to obtain a considerable reduction in computational complexity. For instance, DR-ResNet-50 achieves similar performance with about 34% computation reduction, while gaining a 1.4% accuracy increase with a 10% computation reduction compared to the original ResNet-50 on ImageNet.
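A minimal sketch of the resolution-predictor idea (illustrative architecture and candidate resolutions, not the paper's):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResolutionPredictor(nn.Module):
    """Tiny classifier choosing one of a few candidate input resolutions."""
    def __init__(self, resolutions=(128, 160, 224)):
        super().__init__()
        self.resolutions = resolutions
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, len(resolutions)),
        )

    def forward(self, x):
        return self.features(x)  # logits over candidate resolutions

def dynamic_resolution_forward(predictor, backbone, x):
    idx = predictor(x).argmax(dim=1)          # per-sample resolution choice
    r = predictor.resolutions[idx[0].item()]  # batch size 1 for clarity
    x = F.interpolate(x, size=(r, r), mode='bilinear', align_corners=False)
    return backbone(x)
```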
In this paper, we build upon the weakly-supervised generation mechanism of intermediate attention maps in any convolutional neural network, and disclose the effectiveness of attention modules more straightforwardly to fully exploit their potential. Given an existing neural network equipped with arbitrary attention modules, we introduce a meta critic network to evaluate the quality of the attention maps in the main network. Due to the discreteness of our designed reward, the proposed learning method is arranged in a reinforcement learning setting, where the attention actors and recurrent critics are alternately optimized to provide instant critique and revision of the temporary attention representation, hence coined Deep REinforced Attention Learning (DREAL). It can be applied universally to network architectures with different types of attention modules, and promotes their expressive capability by maximizing the relative gain in final recognition performance contributed by each individual attention module, as demonstrated by extensive experiments on both category and instance recognition benchmarks.
Since sparse neural networks usually contain many zero weights, these unnecessary network connections can potentially be eliminated without degrading network performance. Therefore, well-designed sparse neural networks have the potential to significantly reduce FLOPs and computational resources. In this work, we propose a new automatic pruning method: sparse connectivity learning (SCL). Specifically, a weight is re-parameterized as an element-wise multiplication of a trainable weight variable and a binary mask. The network connectivity is thus fully described by the binary masks, which are modulated by a unit step function. We theoretically prove the fundamental principle of using a straight-through estimator (STE) for network pruning: the proxy gradients of the STE should be positive, ensuring that the mask variables converge at their minima. After finding that the leaky ReLU, softplus, and identity STEs satisfy this principle, we propose to adopt the identity STE in SCL for discrete mask relaxation. We find that the mask gradients of different features are very unbalanced; hence, we propose to normalize the mask gradients of each feature to optimize mask variable training. To train sparse masks automatically, we include the total number of network connections as a regularization term in our objective function. As SCL does not require pruning criteria or hyperparameters defined by designers for network layers, the network is explored in a larger hypothesis space to achieve optimized sparse connectivity for the best performance. SCL overcomes the limitations of existing automatic pruning methods. Experimental results demonstrate that SCL can automatically learn and select important network connections for various baseline network structures. Deep learning models trained by SCL outperform SOTA human-designed and automatic pruning methods in sparsity, accuracy, and FLOPs reduction.
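The mask reparameterization is straightforward to sketch (hypothetical module names; shown for a linear layer rather than convolutions):

```python
import torch
import torch.nn as nn

class BinaryMaskSTE(torch.autograd.Function):
    """Unit step forward; identity straight-through estimator backward."""
    @staticmethod
    def forward(ctx, m):
        return (m >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # identity STE

class SparseLinear(nn.Module):
    """Weight = trainable weight variable * binary mask (learned connectivity)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.mask_var = nn.Parameter(torch.full((out_features, in_features), 0.01))

    def forward(self, x):
        mask = BinaryMaskSTE.apply(self.mask_var)
        return x @ (self.weight * mask).t()

def connection_penalty(model, lam=1e-4):
    """Regularizer on the total number of active connections."""
    return lam * sum(BinaryMaskSTE.apply(m.mask_var).sum()
                     for m in model.modules() if isinstance(m, SparseLinear))
```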
State-of-the-art neural networks with early-exit mechanisms often require considerable training and fine-tuning to achieve good performance at low computational cost. We propose a novel early-exit technique based on the class means of samples, Early Exit Class Means (E$^2$CM). Unlike most existing schemes, E$^2$CM requires no gradient-based training of internal classifiers and does not modify the base network in any way. This makes it particularly useful for neural network training on low-power devices, as in wireless edge networks. We evaluate the performance and overheads of E$^2$CM over various base networks such as MobileNetV3, EfficientNet, and ResNet, and datasets such as CIFAR-100, ImageNet, and KMNIST. Our results show that, given a fixed training time budget, E$^2$CM achieves higher accuracy than existing early-exit mechanisms. Moreover, if there are no limitations on the training time budget, E$^2$CM can be combined with an existing early-exit scheme to boost the latter's performance, achieving a better trade-off between computational cost and network accuracy. We also show that E$^2$CM can be used to decrease the computational cost in unsupervised learning tasks.
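A rough sketch of a class-means early exit (the exact exit rule and distance measure are assumptions, not taken from the paper):

```python
import torch

def fit_class_means(features, labels, num_classes):
    """Compute per-class means of intermediate-layer features (no gradients)."""
    return torch.stack([features[labels == c].mean(dim=0)
                        for c in range(num_classes)])  # (num_classes, dim)

def class_mean_exit(feature, means, threshold):
    """Exit early if the feature is close enough to its nearest class mean."""
    dists = torch.cdist(feature.unsqueeze(0), means).squeeze(0)
    pred = dists.argmin()
    closest_two = dists.topk(2, largest=False).values
    if (closest_two[1] - closest_two[0]) > threshold:  # runner-up is far away
        return int(pred)
    return None  # not confident: continue to deeper layers
```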
Deep learning technologies have demonstrated remarkable effectiveness in a wide range of tasks, and deep learning holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that while the application of deep models often incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capabilities that may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only a little storage, and incur only low computational overheads. This survey offers comprehensive coverage of techniques for the automated design of deep learning models for edge computing. It provides an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three state-of-the-art categories of deep model design automation techniques: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, the survey covers open issues and directions for future research.
Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks. It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91% on CIFAR-10).
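The training procedure reduces to a few lines; a sketch assuming a standard residual block:

```python
import torch
import torch.nn as nn

class StochasticDepthBlock(nn.Module):
    """Residual block dropped with probability (1 - survival_prob) in training."""
    def __init__(self, block: nn.Module, survival_prob: float):
        super().__init__()
        self.block = block
        self.survival_prob = survival_prob

    def forward(self, x):
        if self.training:
            if torch.rand(1).item() < self.survival_prob:
                return x + self.block(x)
            return x                          # bypass with the identity function
        # At test time, scale by the expected survival probability.
        return x + self.survival_prob * self.block(x)
```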
Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at https://github.com/szagoruyko/wide-residual-networks.
Deep neural networks have long training and processing times. Early exits added to neural networks allow the network to make early predictions using intermediate activations in the network in time-sensitive applications. However, early exits increase the training time of the neural networks. We introduce QuickNets: a novel cascaded training algorithm for faster training of neural networks. QuickNets are trained in a layer-wise manner such that each successive layer is only trained on samples that could not be correctly classified by the previous layers. We demonstrate that QuickNets can dynamically distribute learning and have a reduced training cost and inference cost compared to standard backpropagation. Additionally, we introduce commitment layers that significantly improve the early exits by identifying over-confident predictions, and demonstrate their success.
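A heavily simplified sketch of cascaded, layer-wise training on surviving samples (hypothetical helpers, batch size 1; the commitment layers described above are omitted):

```python
import torch
import torch.nn.functional as F

def cascaded_training(stages, heads, make_optimizer, batches, epochs=1):
    """Train stage-by-stage; each later stage sees only the samples that
    the earlier exits misclassified."""
    remaining = list(batches)  # (x, y) pairs; x is the current representation
    for stage, head in zip(stages, heads):
        opt = make_optimizer(list(stage.parameters()) + list(head.parameters()))
        for _ in range(epochs):
            for x, y in remaining:
                loss = F.cross_entropy(head(stage(x)), y)
                opt.zero_grad(); loss.backward(); opt.step()
        # Pass on only the still-misclassified samples, as stage features.
        survivors = []
        for x, y in remaining:
            with torch.no_grad():
                feat = stage(x)
                if head(feat).argmax(dim=1).item() != y.item():
                    survivors.append((feat, y))
        remaining = survivors
```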
SegBlocks reduces the computational cost of existing neural networks by dynamically adjusting the processing resolution according to the complexity of image regions. Our method splits the image into blocks and downsamples blocks of low complexity, reducing the number of operations and the memory consumption. A lightweight policy network, trained using reinforcement learning, selects the complex regions. In addition, we introduce several modules implemented in CUDA to process images in blocks. Most importantly, our novel BlockPad module prevents the feature discontinuities at block borders that existing methods suffer from, while keeping memory consumption under control. Our experiments on semantic segmentation on the Cityscapes, CamVid, and Mapillary Vistas datasets show that dynamically processing images achieves a better accuracy vs. complexity trade-off compared to static baselines of similar complexity. For instance, our method reduces the number of floating-point operations of SwiftNet-RN18 by 60% and increases inference speed by 50%, with only a 0.3% decrease in mIoU accuracy on Cityscapes.
Structural pruning of neural network parameters reduces computation, energy, and memory transfer costs during inference. We propose a novel method that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores. We describe two variations of our method using the first- and second-order Taylor expansions to approximate a filter's contribution. Both methods scale consistently across any network layer without requiring per-layer sensitivity analysis and can be applied to any kind of layer, including skip connections. For modern networks trained on ImageNet, we measured experimentally a high (>93%) correlation between the contribution computed by our methods and a reliable estimate of the true importance. Pruning with the proposed methods leads to an improvement over the state-of-the-art in terms of accuracy, FLOPs, and parameter reduction. On ResNet-101, we achieve a 40% FLOPs reduction by removing 30% of the parameters, with a loss of 0.02% in the top-1 accuracy on ImageNet. Code is available at https://github.com/NVlabs/Taylor_pruning.
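A sketch of the first-order variant of the filter score (one common grouping of the gradient-weight products; the paper also describes a second-order variant and grouping via gates):

```python
import torch

def taylor_importance(conv_weight: torch.Tensor) -> torch.Tensor:
    """First-order Taylor score per output filter: (sum_i g_i * w_i)^2.
    Call after loss.backward() so that .grad is populated."""
    g, w = conv_weight.grad, conv_weight.data
    per_filter = (g * w).flatten(start_dim=1).sum(dim=1)
    return per_filter.pow(2)  # low score => filter barely affects the loss

# Usage sketch: accumulate scores over a few mini-batches, then remove
# (or zero out) the filters with the smallest accumulated scores.
```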
In this work, we propose the "Residual Attention Network", a convolutional neural network using an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers go deeper. Inside each Attention Module, a bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on the CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets, including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that our method achieves a 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs compared to ResNet-200. The experiments also demonstrate that our network is robust against noisy labels.
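The attention residual learning rule itself is a one-liner, sketched below with hypothetical branch modules:

```python
import torch.nn as nn

class AttentionResidual(nn.Module):
    """Attention residual learning: H(x) = (1 + M(x)) * F(x), so the soft
    mask modulates trunk features without degrading them when M is near 0."""
    def __init__(self, trunk: nn.Module, mask_branch: nn.Module):
        super().__init__()
        self.trunk = trunk   # feature branch F
        self.mask = mask_branch  # bottom-up top-down soft mask M in [0, 1]

    def forward(self, x):
        return (1 + self.mask(x)) * self.trunk(x)
```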
Spatial redundancy widely exists in visual recognition tasks, i.e., the discriminative features in an image or video frame usually correspond to only a subset of the pixels, while the remaining regions are irrelevant to the task at hand. Therefore, static models that process all pixels with an equal amount of computation incur considerable redundancy in terms of time and space consumption. In this paper, we formulate the image recognition problem as a sequential coarse-to-fine feature learning process, mimicking the human visual system. Specifically, the proposed Glance and Focus Network (GFNet) first extracts a quick global representation of the input image at a low resolution, and then strategically attends to a series of salient (small) regions to learn finer features. The sequential process naturally facilitates adaptive inference at test time, as it can be terminated once the model is sufficiently confident about its prediction, avoiding further redundant computation. It is worth noting that the problem of locating discriminative regions in our model is formulated as a reinforcement learning task, and thus requires no manual annotations other than classification labels. GFNet is general and flexible, as it is compatible with any off-the-shelf backbone models (e.g., MobileNets, EfficientNets and TSM), which can be conveniently deployed as the feature extractor. Extensive experiments on a variety of image classification and video recognition tasks, and with various backbone models, demonstrate the remarkable efficiency of our method. For example, it reduces the average latency of the highly efficient MobileNet-V3 by 1.3x without sacrificing accuracy. Code and pre-trained models are available at https://github.com/blackfeather-wang/gfnet-pytorch.
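A schematic sketch of the glance-then-focus loop (hypothetical callables; the actual GFNet aggregates features with a recurrent encoder and trains the region proposer with reinforcement learning):

```python
import torch
import torch.nn.functional as F

def glance_and_focus(backbone, classifier, propose_region, x,
                     glance_size=96, max_steps=4, threshold=0.9):
    """Glance at a low-resolution view, then attend to salient crops until
    the prediction is confident enough (batch size 1 for clarity)."""
    view = F.interpolate(x, size=(glance_size, glance_size),
                         mode='bilinear', align_corners=False)
    logits = classifier(backbone(view))
    for _ in range(max_steps):
        probs = F.softmax(logits, dim=1)
        if probs.max().item() >= threshold:
            break                                 # early exit: confident enough
        view = propose_region(x, logits)          # policy picks the next crop
        logits = logits + classifier(backbone(view))  # refine the prediction
    return logits
```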
Although computer vision models built using self-supervised methods are now commonplace, some important questions remain. Do self-supervised models learn highly redundant channel features? What if a self-supervised network could dynamically select the important channels and get rid of the unnecessary ones? Currently, ConvNets pre-trained with self-supervision obtain comparable performance on downstream tasks to their supervised counterparts in computer vision. However, self-supervised models have drawbacks, including large numbers of parameters, computationally expensive training strategies, and a clear need for faster inference on downstream tasks. In this work, our goal is to address the latter by studying how a standard channel selection method developed for supervised learning can be applied to networks trained with self-supervision. We validate our findings on a range of target budgets $t_{d}$ for channel computation on image classification tasks across different datasets, specifically CIFAR-10, CIFAR-100, and ImageNet-100, obtaining performance comparable to that of the original network when selecting all channels, but at a significant reduction in computation reported in terms of FLOPs.