With the continuous development of neural networks for computer vision tasks, more and more network architectures have achieved outstanding success. As one of the most advanced neural network architectures, DenseNet connects all feature maps with shortcuts to address the problem of model depth. Although this architecture has excellent accuracy at low MACs (multiply-accumulate operations), it requires excessive inference time. To solve this problem, HarDNet reduces the connections between feature maps so that the remaining connections resemble harmonic waves. However, this compression method may lower model accuracy while increasing MACs and model size; it only reduces memory access time, and its overall performance still needs improvement. Therefore, we propose a new network architecture, ThreshNet, which uses a threshold mechanism to further optimize the connection method: different numbers of connections are discarded at convolution layers of different depths to compress the feature maps. The proposed architecture is evaluated on three image classification datasets: CIFAR-10, CIFAR-100, and SVHN. Experimental results show that ThreshNet reduces inference time by up to 60% compared with DenseNet, and, compared with HarDNet, trains up to 35% faster with a 20% lower error rate on these datasets.
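The abstract does not spell out the exact connection rule, so the following is a minimal PyTorch sketch of one plausible threshold-style dense block, in which layers shallower than a depth threshold keep full dense connectivity while deeper layers keep only the most recent feature maps. All names and hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ThresholdDenseBlock(nn.Module):
    """Hypothetical threshold-style block: layers below `threshold` are
    densely connected; deeper layers only see the last `keep` feature maps."""
    def __init__(self, c_in, growth=16, num_layers=6, threshold=3, keep=2):
        super().__init__()
        self.threshold, self.keep = threshold, keep
        self.layers = nn.ModuleList()
        chans = [c_in]  # channel count of each stored feature map
        for i in range(num_layers):
            sel = chans if i < threshold else chans[-keep:]
            self.layers.append(
                nn.Conv2d(sum(sel), growth, 3, padding=1, bias=False))
            chans.append(growth)

    def forward(self, x):
        feats = [x]
        for i, layer in enumerate(self.layers):
            sel = feats if i < self.threshold else feats[-self.keep:]
            feats.append(torch.relu(layer(torch.cat(sel, dim=1))))
        return torch.cat(feats, dim=1)
```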
Thanks to the good performance of deep learning algorithms in the field of computer vision (CV), convolutional neural network (CNN) architectures have become the main backbone of computer vision tasks. With the widespread use of mobile devices, neural network models for platforms with low computing power are gradually attracting attention. However, due to limited computing power, deep learning algorithms are usually unavailable on mobile devices. This paper proposes a lightweight convolutional neural network, TripleNet, which can run easily on a Raspberry Pi. Adopting the block-connection concept of ThreshNet, the newly proposed network model compresses and accelerates the network, reduces the number of parameters, and shortens the per-image inference time while maintaining accuracy. Our proposed TripleNet and other state-of-the-art (SOTA) neural networks are compared in image classification experiments on a Raspberry Pi using the CIFAR-10 and SVHN datasets. Experimental results show that, compared with GhostNet, MobileNet, ThreshNet, EfficientNet, and HarDNet, the per-image inference time is shortened by 15%, 16%, 17%, 24%, and 30%, respectively.
Convolutional neural networks (CNNs) increase depth by stacking convolutional layers, and deeper network models perform better in image recognition. However, empirical studies show that simply stacking convolutional layers does not make the network train better, whereas skip connections (residual learning) can improve network performance. For image classification tasks, models with a globally dense connection architecture perform well on large datasets such as ImageNet, but are not suitable for small datasets such as CIFAR-10 and SVHN. Unlike dense connections, we propose two new algorithms for connecting layers. The baseline is a densely connected network, and the networks connected by the two new algorithms are named ShortNet1 and ShortNet2, respectively. Experimental results for image classification on CIFAR-10 and SVHN show that ShortNet1 has a 5% lower test error rate and 25% faster inference time than the baseline, while ShortNet2 speeds up inference time by 40% with less loss in test accuracy.
Deep neural networks have made significant progress in the field of computer vision. Recent studies show that the depth, width, and shortcut connections of a neural network architecture play a crucial role in its performance. DenseNet, one of the state-of-the-art architectures, achieves an excellent convergence rate through dense connections. However, it still has an obvious shortcoming in its memory usage. In this paper, we introduce a novel pruning tool that borrows from the principle of the threshold voltage in MOSFETs. This work applies the method to connect blocks of different depths in different ways to reduce memory usage; the resulting network is denoted ThreshNet. We evaluate ThreshNet and other networks on the CIFAR-10 dataset. Experiments show that HarDNet is twice as fast as DenseNet, and on this basis ThreshNet is 10% faster than HarDNet with a 10% lower error rate.
Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to the limited memory and computation resources. The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that could fully reveal information underlying intrinsic features. The proposed Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. Ghost bottlenecks are designed to stack Ghost modules, and then the lightweight GhostNet can be easily established. Experiments conducted on benchmarks demonstrate that the proposed Ghost module is an impressive alternative to convolution layers in baseline models, and our GhostNet can achieve higher recognition performance (e.g. 75.7% top-1 accuracy) than MobileNetV3 with similar computational cost on the ImageNet ILSVRC-2012 classification dataset. Code is available at https://github.com/huawei-noah/ghostnet.
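As a concrete illustration of the idea, here is a minimal PyTorch sketch of a Ghost-style module; the ratio, kernel size, and layer choices are illustrative rather than the reference implementation linked above.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """A few intrinsic maps from an ordinary convolution, plus 'ghost' maps
    generated by a cheap depthwise convolution, concatenated together."""
    def __init__(self, c_in, c_out, ratio=2, dw_kernel=3):
        super().__init__()
        c_intrinsic = c_out // ratio
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_intrinsic, 1, bias=False),
            nn.BatchNorm2d(c_intrinsic), nn.ReLU(inplace=True))
        # Cheap operation: one depthwise transform per intrinsic map.
        self.cheap = nn.Sequential(
            nn.Conv2d(c_intrinsic, c_out - c_intrinsic, dw_kernel,
                      padding=dw_kernel // 2, groups=c_intrinsic, bias=False),
            nn.BatchNorm2d(c_out - c_intrinsic), nn.ReLU(inplace=True))

    def forward(self, x):
        intrinsic = self.primary(x)
        return torch.cat([intrinsic, self.cheap(intrinsic)], dim=1)
```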
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections (one between each layer and its subsequent layer), our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
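A minimal PyTorch sketch of the dense connectivity pattern described above (growth rate and depth are illustrative):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer consumes the concatenation of all preceding feature maps,
    giving the L(L+1)/2 direct connections described in the abstract."""
    def __init__(self, c_in, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm2d(c_in + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(c_in + i * growth_rate, growth_rate, 3,
                          padding=1, bias=False))
            for i in range(num_layers))

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)
```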
Deploying convolutional neural networks (CNNs) on mobile devices is difficult due to the limited memory and computation resources. We aim to design efficient neural networks for heterogeneous devices, including CPUs and GPUs, by exploiting the redundancy in feature maps, which has rarely been investigated in neural architecture design. For CPU-like devices, we propose a novel CPU-Efficient Ghost (C-Ghost) module to generate more feature maps from cheap operations. Based on a set of intrinsic feature maps, we apply a series of linear transformations with cheap cost to generate many ghost feature maps that can fully reveal the information underlying the intrinsic features. The proposed C-Ghost module can be taken as a plug-and-play component to upgrade existing convolutional neural networks. C-Ghost bottlenecks are designed to stack C-Ghost modules, and then the lightweight C-GhostNet can be easily established. We further consider efficient networks for GPU devices. Without involving too many GPU-inefficient operations (e.g., depthwise convolution) in a building stage, we propose to utilize stage-wise feature redundancy to formulate the GPU-Efficient Ghost (G-Ghost) stage structure. The features in a stage are split into two parts, where the first part is processed using the original blocks with fewer output channels to generate the intrinsic features, and the other part is generated using cheap operations by exploiting stage-wise redundancy. Experiments conducted on benchmarks demonstrate the effectiveness of the proposed C-Ghost module and G-Ghost stage. C-GhostNet and G-GhostNet can achieve the optimal trade-off between accuracy and latency for CPUs and GPUs, respectively. Code is available at https://github.com/huawei-noah/cv-backbones.
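As a rough illustration of the stage-wise split described above, here is a hypothetical sketch in which plain convolutions stand in for the original blocks; the split ratio, widths, and depth are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GGhostStage(nn.Module):
    """Hypothetical stage-wise split: a thin stack of regular blocks produces
    intrinsic features, a single cheap convolution produces the rest."""
    def __init__(self, c_in, c_out, depth=3, ratio=0.5):
        super().__init__()
        c_intr = int(c_out * ratio)
        widths = [c_in] + [c_intr] * depth
        self.blocks = nn.Sequential(*[
            nn.Conv2d(widths[i], widths[i + 1], 3, padding=1, bias=False)
            for i in range(depth)])
        self.cheap = nn.Conv2d(c_in, c_out - c_intr, 1, bias=False)

    def forward(self, x):
        return torch.cat([self.blocks(x), self.cheap(x)], dim=1)
```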
Currently, the neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is the state-of-the-art in terms of speed and accuracy tradeoff.
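Measuring the direct metric on the target platform can be as simple as a wall-clock timing harness; the sketch below is illustrative, not the paper's benchmarking protocol.

```python
import time
import torch

def measure_latency_ms(model, input_shape=(1, 3, 224, 224),
                       warmup=10, iters=100):
    """Average wall-clock time per forward pass on the current device."""
    model.eval()
    x = torch.randn(*input_shape)
    with torch.inference_mode():
        for _ in range(warmup):      # untimed warm-up iterations
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
    return (time.perf_counter() - start) / iters * 1e3  # milliseconds
```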
Binary neural networks (BNNs) are an extreme quantization version of convolutional neural networks (CNNs), with all features and weights mapped to just 1 bit. Although BNNs save substantial memory and computation, making CNNs applicable to edge and mobile devices, they suffer from degraded network performance due to the reduced representation capability after binarization. In this paper, we propose a new replaceable and easy-to-use convolution module, RepConv, which enhances the feature maps by replicating the input or output along the channel dimension β times, without extra cost in parameters or convolution computation. We also define a set of RepTran rules for applying RepConv throughout BNN modules, such as binary convolution, fully connected layers, and batch normalization. Experiments show that after the RepTran transformation, a set of highly cited BNNs achieve universally better performance than their original versions. For example, the top-1 accuracy of Rep-ReCU-ResNet-20, i.e., a RepBconv-enhanced ReCU-ResNet-20, reaches 88.97% on CIFAR-10, which is 1.47% higher than the original network. Rep-AdamBNN-ReActNet-A achieves 71.342% top-1 accuracy on ImageNet, a state-of-the-art result for BNNs. Code and models are available at https://github.com/imfinethanks/rep_adambnn.
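The replication step itself is straightforward; here is a minimal sketch of the channel-wise repetition idea around an ordinary convolution (for brevity the convolution is not binarized, and the wrapper name is illustrative):

```python
import torch
import torch.nn as nn

class RepConvSketch(nn.Module):
    """Wraps any convolution and repeats its output beta times along the
    channel dimension -- the enlarged feature map costs no extra weights."""
    def __init__(self, conv, beta=2):
        super().__init__()
        self.conv, self.beta = conv, beta

    def forward(self, x):
        return self.conv(x).repeat(1, self.beta, 1, 1)

# usage: double the channels of a 3x3 convolution's output by copying
layer = RepConvSketch(nn.Conv2d(16, 32, 3, padding=1), beta=2)
y = layer(torch.randn(1, 16, 8, 8))  # y has 64 channels
```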
We introduce an extremely computation-efficient CNN architecture named ShuffleNet, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The new architecture utilizes two new operations, pointwise group convolution and channel shuffle, to greatly reduce computation cost while maintaining accuracy. Experiments on ImageNet classification and MS COCO object detection demonstrate the superior performance of ShuffleNet over other structures, e.g. lower top-1 error (absolute 7.8%) than recent MobileNet [12] on ImageNet classification task, under the computation budget of 40 MFLOPs. On an ARM-based mobile device, ShuffleNet achieves ∼13× actual speedup over AlexNet while maintaining comparable accuracy.
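The channel shuffle operation has a standard, compact formulation; a minimal sketch:

```python
import torch

def channel_shuffle(x, groups):
    """Interleave channels so that the next pointwise group convolution sees
    information from every group, not just its own."""
    n, c, h, w = x.shape
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2).reshape(n, c, h, w))
```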
Some lightweight convolutional neural network (CNN) models have recently been designed for remote sensing object detection (RSOD). However, most of them simply replace vanilla convolutions with separable convolutions, which may be inefficient due to considerable accuracy loss, and they may fail to detect oriented bounding boxes (OBBs). Moreover, existing OBB detection methods struggle to accurately constrain the shape of the objects predicted by CNNs. In this paper, we propose an efficient lightweight oriented object detector (LO-Det). Specifically, a channel separation-aggregation (CSA) structure is designed to simplify the complexity of separable convolutions, and a dynamic receptive field (DRF) mechanism is developed to maintain high accuracy by dynamically customizing the convolution kernel and its perception range, balancing network complexity. The CSA-DRF component optimizes efficiency while maintaining high accuracy. Then, a diagonal support constraint head (DSC-Head) component is designed to detect OBBs and constrain their shapes more accurately and stably. Extensive experiments on public datasets demonstrate that the proposed LO-Det can run very fast even on embedded devices, with competitive accuracy in detecting oriented objects.
We present a multigrid-in-channels (MGIC) approach that addresses the quadratic growth of the number of parameters with respect to the number of channels in standard convolutional neural networks (CNNs). Our approach thus addresses the redundancy in CNNs that is also revealed by the success of lightweight CNNs. Lightweight CNNs can achieve accuracy comparable to standard CNNs with fewer parameters; however, the number of weights still scales quadratically with the network's width. Our MGIC architecture replaces each CNN block with an MGIC counterpart that utilizes a hierarchy of nested grouped convolutions of small group size to address this problem. Hence, our proposed architecture scales linearly with respect to the network's width while retaining the full coupling of the channels, as in standard CNNs. Our extensive experiments on image classification, segmentation, and point cloud classification show that applying this strategy to different architectures such as ResNet and MobileNetV3 reduces the number of parameters while obtaining similar or better accuracy.
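The cost argument can be seen in a single grouped convolution with a fixed group size, which is the building block the abstract names; the nested multigrid hierarchy that restores full cross-channel coupling is omitted in this minimal sketch.

```python
import torch.nn as nn

def grouped_conv3x3(c, group_size=8):
    """3x3 convolution with a fixed group size: c * group_size * 9 weights,
    i.e., linear in the width c, versus c * c * 9 for a standard convolution."""
    return nn.Conv2d(c, c, 3, padding=1, groups=c // group_size, bias=False)
```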
The deployment of deep convolutional neural networks (CNNs) in many real world applications is largely hindered by their high computational cost. In this paper, we propose a novel learning scheme for CNNs to simultaneously 1) reduce the model size; 2) decrease the run-time memory footprint; and 3) lower the number of computing operations, without compromising accuracy. This is achieved by enforcing channel-level sparsity in the network in a simple but effective way. Different from many existing approaches, the proposed method directly applies to modern CNN architectures, introduces minimum overhead to the training process, and requires no special software/hardware accelerators for the resulting models. We call our approach network slimming, which takes wide and large networks as input models, but during training insignificant channels are automatically identified and pruned afterwards, yielding thin and compact models with comparable accuracy. We empirically demonstrate the effectiveness of our approach with several state-of-the-art CNN models, including VGGNet, ResNet and DenseNet, on various image classification datasets. For VGGNet, a multi-pass version of network slimming gives a 20× reduction in model size and a 5× reduction in computing operations.
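In the paper, this channel-level sparsity is imposed by an L1 penalty on the scaling factors of batch normalization layers; the sketch below shows those two training-time pieces under that formulation, with illustrative names and a simple global ranking heuristic.

```python
import torch
import torch.nn as nn

def add_slimming_subgradient(model, lam=1e-4):
    """Add the L1 subgradient on every BN scaling factor; call after
    loss.backward() and before optimizer.step()."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.grad.add_(lam * torch.sign(m.weight.data))

def global_pruning_threshold(model, prune_ratio=0.5):
    """Rank all BN scaling factors globally; channels whose |gamma| falls
    below the returned threshold are pruned afterwards."""
    gammas = torch.cat([m.weight.data.abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    return gammas.sort().values[int(len(gammas) * prune_ratio)]
```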
Although residual connections enable training very deep neural networks, they are unfriendly for online inference due to their multi-branch topology. This encourages many researchers to design DNNs without residual connections at inference time. For example, RepVGG re-parameterizes its multi-branch topology into a VGG-like (single-branch) model at deployment time, showing great performance when the network is relatively shallow. However, RepVGG cannot transform a ResNet into a VGG-style network equivalently, because its re-parameterization method can only be applied to linear blocks, and the nonlinear layers (ReLU) have to be placed outside the residual connection, which results in limited representation ability, especially for deeper networks. In this paper, we aim to remedy this problem and propose to remove the residual connections of a vanilla ResNet equivalently through a reserving-and-merging (RM) operation on ResBlocks. Specifically, the RM operation allows the input feature maps to pass through the block while reserving their information, and merges all the information at the end of each block, which removes the residual connection without changing the original output. As a plug-in method, the RM operation has essentially three advantages: 1) its implementation makes it naturally suitable for high-ratio network pruning; 2) it helps break the depth limitation of RepVGG; 3) it leads to a better accuracy-speed trade-off network (RMNet) compared with ResNet and RepVGG. We believe the ideology of the RM operation can inspire many insights on model design for the community in the future. Code is available at https://github.com/fxmeng/rmnet.
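A minimal sketch of the reserving-and-merging idea for one stride-1, bias-free 3×3-3×3 ResBlock: identity (Dirac) filters appended to the first convolution carry the input through the ReLU (harmless because the input, coming after a previous ReLU, is non-negative), and identity filters appended to the second convolution reproduce the residual addition. This illustrates the principle only, not the authors' full conversion tool.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def rm_convert(conv1, conv2):
    """Fold the shortcut of out = conv2(relu(conv1(x))) + x into two plain
    convolutions with extra 'reserved' channels."""
    c_in, c_mid = conv1.in_channels, conv1.out_channels
    k1, k2 = conv1.kernel_size[0], conv2.kernel_size[0]
    # conv1': original filters plus Dirac filters that copy the input through.
    conv1p = nn.Conv2d(c_in, c_mid + c_in, k1, padding=k1 // 2, bias=False)
    conv1p.weight.zero_()
    conv1p.weight[:c_mid] = conv1.weight
    nn.init.dirac_(conv1p.weight[c_mid:])        # reserved identity channels
    # conv2': original filters plus Dirac entries that merge the reserved
    # channels back, reproducing the residual addition.
    conv2p = nn.Conv2d(c_mid + c_in, c_in, k2, padding=k2 // 2, bias=False)
    conv2p.weight.zero_()
    conv2p.weight[:, :c_mid] = conv2.weight
    for i in range(c_in):
        conv2p.weight[i, c_mid + i, k2 // 2, k2 // 2] = 1.0
    return conv1p, conv2p
```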
Designing a high-efficiency and high-quality expressive network architecture has always been the most important research topic in the field of deep learning. Most of today's network design strategies focus on how to integrate features extracted from different layers, and how to design computing units to effectively extract these features, thereby enhancing the expressiveness of the network. This paper proposes a new network design strategy, i.e., to design the network architecture based on gradient path analysis. On the whole, most of today's mainstream network design strategies are based on the feed-forward path, that is, the network architecture is designed based on the data path. In this paper, we hope to enhance the expressive ability of the trained model by improving the network learning ability. Because the mechanism driving network parameter learning is the back-propagation algorithm, we design network design strategies based on the back-propagation path. We propose gradient path design strategies for the layer level, the stage level, and the network level, and the design strategies are shown to be superior and feasible through theoretical analysis and experiments.
Deep convolutional neural network (DCNN)-assisted high dynamic range (HDR) imaging has recently received much attention, and the quality of DCNN-generated HDR images surpasses that of traditional counterparts. However, DCNNs tend to be computationally intensive and power hungry. To address this challenge, we propose a lightweight CNN-based algorithm for fusing extreme dual-exposure images, which can be implemented on various embedded computing platforms with limited power and hardware resources. Two sub-networks are used: GlobalNet (G) and DetailNet (D). G aims to learn global information over the spatial dimensions, while D aims to enhance local details along the channel dimension. Both G and D are based solely on depthwise convolution (D Conv) and pointwise convolution (P Conv) to reduce the required parameters and computation. Experimental results show that the proposed technique can produce HDR images with reasonable details in extremely exposed regions. Our model outperforms other state-of-the-art methods by 0.7 to 8.5 in terms of PSNR score, and achieves a parameter reduction of 7,675 to 463,385 compared with them.
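The D Conv + P Conv pair the abstract names is the standard depthwise-separable factorization; a minimal sketch:

```python
import torch.nn as nn

def ds_conv(c_in, c_out, k=3):
    """Depthwise conv (spatial, per channel) followed by pointwise 1x1 conv
    (channel mixing) -- far fewer weights than a dense k x k convolution."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in, bias=False),
        nn.Conv2d(c_in, c_out, 1, bias=False))
```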
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Accordingly, techniques that enable efficient processing of DNNs to improve energy efficiency and throughput without sacrificing application accuracy or increasing hardware cost are critical to the wide deployment of DNNs in AI systems. This article aims to provide a comprehensive tutorial and survey about the recent advances towards the goal of enabling efficient processing of DNNs. Specifically, it will provide an overview of DNNs, discuss various hardware platforms and architectures that support DNNs, and highlight key trends in reducing the computation cost of DNNs either solely via hardware design changes or via joint hardware design and DNN algorithm changes. It will also summarize various development resources that enable researchers and practitioners to quickly get started in this field, and highlight important benchmarking metrics and design considerations that should be used for evaluating the rapidly growing number of DNN hardware designs, optionally including algorithmic co-designs, being proposed in academia and industry. The reader will take away the following concepts from this article: understand the key design considerations for DNNs; be able to evaluate different DNN hardware implementations with benchmarks and comparison metrics; understand the trade-offs between various hardware architectures and platforms; be able to evaluate the utility of various DNN design techniques for efficient processing; and understand recent implementation trends and opportunities.
In pursuit of ever-increasing accuracy, large and complex neural networks are usually developed. Such models demand high computational resources and therefore cannot be deployed on edge devices. It is of great interest to build resource-efficient general-purpose networks due to their usefulness in several application areas. In this work, we strive to effectively combine the strengths of CNN and Transformer models and propose a new efficient hybrid architecture, EdgeNeXt. Specifically, in EdgeNeXt we introduce a split depth-wise transpose attention (SDTA) encoder that splits input tensors into multiple channel groups and utilizes depth-wise convolution along with self-attention across channel dimensions to implicitly increase the receptive field and encode multi-scale features. Our extensive experiments on classification, detection, and segmentation tasks reveal the merits of the proposed approach, outperforming state-of-the-art methods with comparatively lower computational requirements. Our EdgeNeXt model with 1.3M parameters achieves 71.2% top-1 accuracy on ImageNet-1K, outperforming MobileViT with an absolute gain of 2.2% and a 28% reduction in FLOPs. Furthermore, our EdgeNeXt model with 5.6M parameters achieves 79.4% top-1 accuracy on ImageNet-1K. Code and models are publicly available at https://t.ly/_vu9.
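Self-attention "across channel dimensions" is commonly realized as transposed attention, where the attention matrix is channel-by-channel rather than token-by-token; below is a minimal sketch of that component alone. The channel-group split and depth-wise convolutions of the full SDTA encoder are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn

class TransposedAttention(nn.Module):
    """Attention over channels: the (dim x dim) attention matrix makes the
    cost linear in the number of tokens instead of quadratic."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q.transpose(-2, -1) @ k) / q.shape[1] ** 0.5  # (dim, dim)
        return self.proj(v @ attn.softmax(dim=-1))
```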
Object detection is an important downstream task in computer vision. For vehicle-mounted edge computing platforms, large models struggle to meet real-time detection requirements, while lightweight models built from large numbers of separable convolution layers cannot achieve sufficient accuracy. We introduce a new lightweight convolution technique, GSConv, to lighten the model while maintaining accuracy; GSConv achieves an excellent trade-off between the model's accuracy and speed. Furthermore, we provide a design paradigm, slim-neck, to achieve higher computational cost-effectiveness for detectors. The effectiveness of our method is strongly demonstrated in more than twenty sets of comparison experiments. In particular, detectors improved by our method obtain state-of-the-art results compared with the originals (e.g., 70.9% mAP0.5 at ~100 FPS on a Tesla T4 GPU). Code is available at https://github.com/alanli1997/slim-neck-by-gsconv.
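The abstract does not define GSConv itself; as an assumption based on the reference implementation linked above, it can be sketched as a standard convolution producing half the output channels, a cheap depthwise convolution producing the other half, and a shuffle mixing the two.

```python
import torch
import torch.nn as nn

class GSConvSketch(nn.Module):
    """Half standard conv, half depthwise conv, then channel shuffle
    (kernel sizes and strides are illustrative)."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        c_half = c_out // 2
        self.conv = nn.Conv2d(c_in, c_half, k, s, k // 2, bias=False)
        self.dw = nn.Conv2d(c_half, c_half, 5, 1, 2, groups=c_half, bias=False)

    def forward(self, x):
        y1 = self.conv(x)
        y = torch.cat([y1, self.dw(y1)], dim=1)
        n, c, h, w = y.shape                     # interleave the two halves
        return y.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)
```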
Convolutional neural networks (CNNs) have been applied to a variety of downstream tasks in many Internet of Things (IoT) devices. However, with the growing amount of data on edge devices, CNNs can hardly complete some tasks in time given limited computing and storage resources. Recently, filter pruning has been regarded as an effective technique to compress and accelerate CNNs, but existing methods rarely prune CNNs from the perspective of compressing high-dimensional tensors. In this paper, we propose a novel theory to find redundant information in three-dimensional tensors, namely quantified similarity between feature maps (QSFM), and utilize this theory to guide the filter pruning procedure. We evaluate QSFM on datasets (CIFAR-10, CIFAR-100, and ILSVRC-12) and on edge devices, demonstrating that the proposed method can find redundant information in neural networks with comparable compression and a tolerable accuracy drop. Without any fine-tuning operation, QSFM can significantly compress ResNet-56 on CIFAR-10 (48.7% of FLOPs and 57.9% of parameters) with only a 0.54% loss in top-1 accuracy. For practical applications on edge devices, QSFM can accelerate MobileNet-V2 inference by 1.53×, with only a 1.23% loss in ILSVRC-12 top-1 accuracy.
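A minimal sketch of the similarity-guided selection step, with a plain L2 distance standing in for the paper's quantified similarity measure; the actual metric and the layer-wise pruning procedure are in the paper.

```python
import torch

def redundant_filter_indices(feature_maps, num_prune):
    """feature_maps: (channels, H, W) activations of one layer for a probe
    input. Filters whose feature maps have a very close 'twin' are marked
    as redundant and returned as candidates for pruning."""
    c = feature_maps.shape[0]
    flat = feature_maps.reshape(c, -1)
    dist = torch.cdist(flat, flat)            # pairwise L2 distances
    dist.fill_diagonal_(float('inf'))         # ignore self-similarity
    redundancy = -dist.min(dim=1).values      # nearer twin => more redundant
    return redundancy.topk(num_prune).indices
```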