Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategies, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach that learns to search by gradient descent. Our approach represents the search space as a directed acyclic graph (DAG). This DAG contains billions of sub-graphs, each of which represents a neural architecture. To avoid traversing all the possible sub-graphs, we develop a differentiable sampler over the DAG. This sampler is learnable and is optimized by the validation loss after training the sampled architecture. In this way, our approach can be trained in an end-to-end fashion by gradient descent, and is named Gradient-based search using Differentiable Architecture Sampler (GDAS). In experiments, we can finish one search procedure within four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82% with only 2.5M parameters, which is on par with the state of the art. Code is publicly available on GitHub: https://github.com/D-X-Y/NAS-Projects.
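To make the sampling idea concrete, here is a minimal sketch of a hard Gumbel-softmax gate over one DAG edge: a single candidate operation is executed per forward pass, while the straight-through estimator keeps the architecture logits trainable. The operation set and channel sizes are illustrative assumptions, not the released GDAS code.

```python
# A minimal sketch (not the official GDAS code) of a differentiable sampler
# over one DAG edge: a hard Gumbel-softmax sample picks a single candidate
# operation, while the straight-through estimator lets gradients reach the
# architecture logits. Operation names and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampledEdge(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip-connect
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 conv
            nn.AvgPool2d(3, stride=1, padding=1),         # 3x3 avg pool
        ])
        self.arch_logits = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x, tau: float = 1.0):
        # hard=True returns a one-hot sample; gradients flow via the soft probs
        gate = F.gumbel_softmax(self.arch_logits, tau=tau, hard=True)
        index = int(gate.argmax())
        # only the sampled op is executed; the gate keeps the graph differentiable
        return gate[index] * self.ops[index](x)

edge = SampledEdge(channels=16)
out = edge(torch.randn(2, 16, 8, 8), tau=5.0)   # tau is typically annealed
```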
We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. ENAS constructs a large computational graph where each subgraph represents a neural network architecture, hence forcing all architectures to share their parameters. A controller is trained with policy gradient to search for a subgraph that maximizes the expected reward on a validation set. Meanwhile, a model corresponding to the selected subgraph is trained to minimize a canonical cross-entropy loss. Sharing parameters among child models allows ENAS to deliver strong empirical performance while using far fewer GPU hours than existing automatic model design approaches; notably, it is 1000x less expensive than standard Neural Architecture Search. On Penn Treebank, ENAS discovers a novel architecture that achieves a test perplexity of 56.3, on par with the existing state of the art among all methods without post-training processing. On CIFAR-10, ENAS finds a novel architecture that achieves 2.89% test error, which is on par with the 2.65% test error of NASNet (Zoph et al., 2018).
We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller discovers neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on a validation set. Meanwhile, the model corresponding to the selected subgraph is trained to minimize a canonical cross-entropy loss. Sharing parameters among child models allows ENAS to deliver strong empirical performance while using far fewer GPU hours than existing automatic model design approaches; notably, it is 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state of the art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS finds a novel architecture that achieves 2.89% test error, which is on par with the 2.65% test error of NASNet (Zoph et al., 2018).
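As a rough illustration of the controller update described above, the toy sketch below samples discrete architecture decisions from learned logits and applies REINFORCE with a moving-average baseline; `evaluate_child` is a placeholder for training and scoring the weight-sharing child model, not the ENAS implementation.

```python
# A toy sketch of a policy-gradient controller update in the spirit of ENAS
# (not the authors' code): discrete per-layer op choices are sampled from
# learned logits, the sampled child is scored, and REINFORCE with a
# moving-average baseline updates the controller parameters.
import torch

num_layers, num_ops = 4, 5
logits = torch.zeros(num_layers, num_ops, requires_grad=True)   # controller parameters
optimizer = torch.optim.Adam([logits], lr=3e-4)
baseline = 0.0

def evaluate_child(arch):
    # placeholder: validation accuracy of the shared-weight child defined by `arch`
    return torch.rand(1).item()

for step in range(100):
    dist = torch.distributions.Categorical(logits=logits)
    arch = dist.sample()                                  # one op index per layer
    reward = evaluate_child(arch.tolist())
    loss = -dist.log_prob(arch).sum() * (reward - baseline)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    baseline = 0.95 * baseline + 0.05 * reward            # moving-average baseline
```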
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.
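The continuous relaxation can be illustrated with a small mixed-operation sketch: each edge outputs a softmax-weighted sum over candidate operations, so the architecture weights become ordinary parameters trainable by gradient descent. The candidate set below is an assumption for illustration, not the DARTS code.

```python
# A minimal sketch of the continuous relaxation described above (assumed
# operation set): each edge computes a softmax-weighted sum over candidate
# operations, so the architecture weights alpha can be optimized by gradient
# descent on the validation loss while the op weights train on the training loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.MaxPool2d(3, stride=1, padding=1),
        ])
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)       # continuous relaxation
        return sum(w * op(x) for w, op in zip(weights, self.ops))

mixed = MixedOp(channels=16)
y = mixed(torch.randn(2, 16, 8, 8))
# after the search, the edge is discretized by keeping argmax(alpha)
```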
We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. A direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of the number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state-of-the-art classification accuracies on CIFAR-10 and ImageNet.
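A schematic sketch of the SMBO loop described above is given below; the expansion, training, and surrogate routines are placeholders standing in for the actual components, not the released implementation.

```python
# A schematic sketch of sequential model-based optimization as described above
# (simplified; the surrogate, expansion, and training routines are placeholders).
import random

def expand(cell):
    # grow a cell by one extra block, enumerating a few assumed choices
    return [cell + [op] for op in ("conv3x3", "conv5x5", "maxpool", "identity")]

def train_and_eval(cell):
    return random.random()           # placeholder: real validation accuracy

class Surrogate:
    def __init__(self):
        self.history = []
    def fit(self, cells, scores):
        self.history.extend(zip(map(tuple, cells), scores))
    def predict(self, cell):
        return random.random()       # placeholder: learned performance predictor

surrogate, beam, K = Surrogate(), [[]], 4
for complexity in range(3):                       # search in order of complexity
    candidates = [c for cell in beam for c in expand(cell)]
    ranked = sorted(candidates, key=surrogate.predict, reverse=True)[:K]
    scores = [train_and_eval(cell) for cell in ranked]
    surrogate.fit(ranked, scores)                 # refine the predictor
    beam = ranked
best = max(zip(beam, scores), key=lambda t: t[1])
print(best)
```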
In the NAS field, differentiable architecture search is ubiquitous thanks to its simplicity and efficiency, and two paradigms dominate it: multi-path algorithms and single-path methods. Multi-path frameworks (e.g., DARTS) are intuitive but suffer from heavy memory usage and training collapse. Single-path methods (e.g., GDAS and ProxylessNAS) mitigate the memory issue and shrink the gap between searching and evaluation, but sacrifice performance. In this paper, we propose a conceptually simple yet efficient method to bridge these two paradigms, referred to as Mutually-aware Sub-Graphs Differentiable Architecture Search (MSG-DAS). At the core of our framework is a differentiable Gumbel-TopK sampler that produces multiple mutually exclusive single-path sub-graphs. To alleviate the more severe skip-connect issue brought by the multiple-sub-graph setting, we propose a Dropblock-Identity module to stabilize the optimization. To make full use of the available models (the supernet and the sub-graphs), we introduce a memory-efficient supernet-guided distillation to improve training. The proposed framework strikes a balance between flexible memory usage and search quality. We demonstrate the effectiveness of our method on ImageNet and CIFAR-10, where the searched models show performance comparable to recent approaches.
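The sketch below is one plausible reading of a differentiable Gumbel-TopK sampler (not the MSG-DAS code): Gumbel-perturbed logits are ranked, the top-k indices give k mutually exclusive single-path choices, and a straight-through trick keeps the sampler differentiable with respect to the architecture logits.

```python
# A hedged sketch of a Gumbel-TopK sampler (my reading of the idea above):
# Gumbel-perturbed logits are ranked, the top-k operations give k mutually
# exclusive single-path choices, and a straight-through trick keeps the
# sampler differentiable w.r.t. the architecture logits.
import torch
import torch.nn.functional as F

def gumbel_topk_gates(logits: torch.Tensor, k: int, tau: float = 1.0):
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    perturbed = (logits + gumbel) / tau
    topk = perturbed.topk(k).indices                 # k distinct op indices
    soft = F.softmax(perturbed, dim=-1)
    gates = []
    for idx in topk:
        hard = torch.zeros_like(soft)
        hard[idx] = 1.0
        gates.append(hard - soft.detach() + soft)    # straight-through gate
    return topk, gates

logits = torch.zeros(8, requires_grad=True)          # 8 candidate operations
indices, gates = gumbel_topk_gates(logits, k=3)
# each gate selects one op; the k selections are mutually exclusive by construction
```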
Deep learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it along three dimensions: search space, search strategy, and performance estimation strategy.
Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the "NASNet search space") which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with its own parameters, to design a convolutional architecture, which we name a "NASNet architecture". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves a 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS, a reduction of 28% in computational demand compared with the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass the state of the art by 4.0%, achieving 43.1% mAP on the COCO dataset.
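A small sketch of the ScheduledDropPath idea follows, assuming per-sample path dropping with a drop probability increased linearly over training; it is an illustration of the described regularizer, not the released NASNet implementation.

```python
# A small sketch of the ScheduledDropPath idea described above (assumptions:
# per-sample path dropping with a drop probability grown linearly over
# training; this is an illustration, not the released NASNet code).
import torch
import torch.nn as nn

class ScheduledDropPath(nn.Module):
    def __init__(self, final_drop_prob: float = 0.3, total_steps: int = 10000):
        super().__init__()
        self.final_drop_prob = final_drop_prob
        self.total_steps = total_steps
        self.step = 0

    def forward(self, x):
        if not self.training:
            return x
        # drop probability grows linearly with training progress
        drop_prob = self.final_drop_prob * min(self.step / self.total_steps, 1.0)
        self.step += 1
        keep = 1.0 - drop_prob
        mask = x.new_empty(x.size(0), 1, 1, 1).bernoulli_(keep)
        return x * mask / keep       # rescale so the expectation is unchanged

drop = ScheduledDropPath()
drop.train()
out = drop(torch.randn(4, 16, 8, 8))
```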
Self-attention architectures have emerged as a recent advance for improving performance on vision tasks. Manually determining the architecture of a self-attention network relies on expert experience and cannot automatically adapt to diverse scenarios. Meanwhile, neural architecture search (NAS) has significantly advanced the automatic design of neural architectures. It is therefore worth considering the use of NAS methods to automatically discover better self-attention architectures. However, directly applying existing NAS methods to search attention networks is challenging because of the uniform cell-based search space and the lack of long-range content dependencies. To address this problem, we propose a full-attention-based NAS method. More specifically, a stage-wise search space is constructed that allows various attention operations to be adopted for different layers of the network. To extract global features, a self-supervised search algorithm is proposed that uses context auto-regression to discover full-attention architectures. To verify the efficacy of the proposed method, we conduct extensive experiments on various learning tasks, including image classification, fine-grained image recognition, and zero-shot image retrieval. The empirical results show that our method is capable of discovering high-performance, full-attention architectures while guaranteeing the required search efficiency.
Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. In this work, in order to help ground the empirical results in this field, we propose new NAS baselines that build on the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization. Leveraging these observations, we evaluate both random search with early-stopping and a novel random search with weight-sharing algorithm on two standard NAS benchmarks, PTB and CIFAR-10. Our results show that random search with early-stopping is a competitive NAS baseline, e.g., it performs at least as well as ENAS [41], a leading NAS method, on both benchmarks. Additionally, random search with weight-sharing outperforms random search with early-stopping, achieving a state-of-the-art NAS result on PTB and a highly competitive result on CIFAR-10. Finally, we explore the existing reproducibility issues of published NAS results. We note the lack of source material needed to exactly reproduce these results, and further discuss the robustness of published results given the various sources of variability in NAS experimental setups. Relatedly, we provide all information (code, random seeds, documentation) needed to exactly reproduce our results, and report our random search with weight-sharing results for each benchmark on multiple runs.
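The weight-sharing random-search baseline can be summarized by the sketch below (placeholders throughout, not the authors' released code): each training step samples a random architecture and updates only the shared weights it touches, and the final architecture is the best of many random samples scored with those shared weights.

```python
# A condensed sketch of random search with weight sharing as outlined above
# (the training and validation routines are placeholders).
import random

NUM_OPS, NUM_EDGES = 5, 8

def random_architecture():
    return [random.randrange(NUM_OPS) for _ in range(NUM_EDGES)]

def train_step(shared_weights, arch):
    pass                      # placeholder: one SGD step through the sampled sub-network

def validate(shared_weights, arch):
    return random.random()    # placeholder: validation accuracy with shared weights

shared_weights = {}           # stands in for the supernet parameters
for step in range(1000):
    train_step(shared_weights, random_architecture())

candidates = [random_architecture() for _ in range(100)]
best = max(candidates, key=lambda a: validate(shared_weights, a))
print("selected architecture:", best)
```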
Recently, Neural Architecture Search (NAS) has successfully identified neural network architectures that exceed human-designed ones on large-scale image classification. In this paper, we study NAS for semantic image segmentation. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction, which exhibits many more network-level architectural variations. Therefore, we propose to search the network-level structure in addition to the cell-level structure, which forms a hierarchical architecture search space. We present a network-level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Auto-DeepLab, our architecture searched specifically for semantic image segmentation, attains state-of-the-art performance without any ImageNet pretraining.
Neural architecture search (NAS) has had a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g., 10^4 GPU hours) makes it difficult to directly search architectures on large-scale tasks (e.g., ImageNet). Differentiable NAS can reduce the cost in GPU hours via a continuous representation of the network architecture, but suffers from high GPU memory consumption (which grows linearly w.r.t. the candidate set size). As a result, such methods need to utilize proxy tasks, such as training on a smaller dataset, learning with only a few blocks, or training for just a few epochs. Architectures optimized on these proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS, which can directly learn architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level as regular training, while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2× faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g., latency) and provide insights for efficient CNN architecture design.
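A rough sketch of a single-path, binarized-gate forward pass in this spirit is shown below; it is a simplification for illustration, not the ProxylessNAS code, and the gradient trick for the architecture parameters is only one possible estimator.

```python
# A rough sketch of a single-path (binarized gate) forward pass in the spirit
# of the memory-saving idea above: only the sampled candidate op is executed
# and kept in memory, so GPU memory stays near the level of training a single
# network rather than the whole candidate set. Not the ProxylessNAS code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryGateEdge(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        self.arch_params = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        probs = F.softmax(self.arch_params, dim=0)
        idx = int(torch.multinomial(probs, 1))       # binarize: one active path
        # the factor equals 1 in value but routes a gradient signal back to the
        # architecture parameters (one simple estimator, not the paper's exact rule)
        return (probs[idx] / probs[idx].detach()) * self.ops[idx](x)

edge = BinaryGateEdge(16)
y = edge(torch.randn(2, 16, 8, 8))
```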
Different from other architecture-based NAS approaches, Broad Neural Architecture Search (BNAS) proposes a broad scalable architecture composed of convolution and enhancement blocks, dubbed Broad Convolutional Neural Network (BCNN), as its search space for remarkable efficiency improvements. BCNN reuses the topology of the cell in the convolution block, so that BNAS can achieve efficient search with few cells. Moreover, multi-scale feature fusion and knowledge embedding are proposed to improve the performance of BCNN with a shallow topology. However, BNAS suffers from some drawbacks: 1) insufficient representation diversity for feature fusion and enhancement, and 2) the time-consuming design of knowledge embeddings by human experts. In this paper, we propose Stacked BNAS, whose search space is a developed broad scalable architecture named Stacked BCNN, with better performance than BNAS. On the one hand, Stacked BCNN treats a mini-BCNN as the basic block to preserve comprehensive representations and deliver powerful feature extraction ability. On the other hand, we propose Knowledge Embedding Search (KES) to learn appropriate knowledge embeddings. Experimental results show that 1) Stacked BNAS obtains better performance than BNAS, 2) KES helps reduce the parameters of the learned architecture with satisfactory performance, and 3) Stacked BNAS delivers a state-of-the-art efficiency of 0.02 GPU days.
Designing accurate and efficient ConvNets for mobile devices is challenging because the design space is combinatorially large. Due to this, previous neural architecture search (NAS) methods are computationally expensive. ConvNet architecture optimality depends on factors such as input resolution and target devices. However, existing approaches are too resource-demanding for case-by-case redesigns. Also, previous work focuses primarily on reducing FLOPs, but FLOP count does not always reflect actual latency. To address these issues, we propose a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods. FBNets (Facebook-Berkeley-Nets), a family of models discovered by DNAS, surpass state-of-the-art models both designed manually and generated automatically. FBNet-B achieves 74.1% top-1 accuracy on ImageNet with 295M FLOPs and 23.1 ms latency on a Samsung S8 phone, 2.4x smaller and 1.5x faster than MobileNetV2-1.3 [17] with similar accuracy. Despite higher accuracy and lower latency than MnasNet [20], we estimate FBNet-B's search cost to be 420x smaller than MnasNet's, at only 216 GPU hours. Searched for different resolutions and channel sizes, FBNets achieve 1.5% to 6.4% higher accuracy than MobileNetV2. The smallest FBNet achieves 50.2% accuracy and 2.9 ms latency (345 frames per second) on a Samsung S8. Over a Samsung-optimized FBNet, the iPhone-X-optimized model achieves a 1.4x speedup on an iPhone X. FBNet models are open-sourced at https://github.com/facebookresearch/mobile-vision. Figure 1 (caption): Differentiable neural architecture search (DNAS) for ConvNet design. DNAS explores a layer-wise space in which each layer of a ConvNet can choose a different block. The search space is represented by a stochastic super net. The search process trains the stochastic super net using SGD to optimize the architecture distribution. Optimal architectures are sampled from the trained distribution. The latency of each operator is measured on target devices and used to compute the loss for the super net.
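The latency term can be illustrated with a hedged sketch of a lookup-table-based loss in the spirit of the DNAS description above; the per-block latencies and trade-off weight are made-up assumptions, not FBNet's measured numbers or exact loss form.

```python
# A hedged sketch of a latency-aware supernet loss in the spirit of the DNAS
# description above (lookup-table values and the trade-off weight alpha are
# invented assumptions, not FBNet's measured numbers or exact loss).
import torch
import torch.nn.functional as F

# assumed per-block latencies measured on the target device, in milliseconds
latency_table = torch.tensor([[0.8, 1.6, 2.9],     # layer 1: 3 candidate blocks
                              [1.1, 2.0, 3.5]])    # layer 2: 3 candidate blocks

theta = torch.zeros(2, 3, requires_grad=True)      # architecture distribution params

def architecture_loss(ce_loss: torch.Tensor, alpha: float = 0.2) -> torch.Tensor:
    probs = F.gumbel_softmax(theta, tau=5.0, hard=False, dim=-1)
    expected_latency = (probs * latency_table).sum()    # differentiable latency proxy
    return ce_loss + alpha * expected_latency

loss = architecture_loss(ce_loss=torch.tensor(2.3))
loss.backward()                     # gradients reach theta through the latency term
```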
Learning from mistakes is an effective learning method widely used in human learning, where the learner pays greater attention to mistakes so as to avoid repeating them in the future; it helps improve the overall learning outcome. In this work, we aim to investigate how effectively this special learning ability can be used to improve machine learning models. We propose a simple and effective multi-level optimization framework called Learning From Mistakes (LFM), inspired by mistake-driven learning, to train better machine learning models. Our LFM framework consists of a formulation involving three learning stages. The primary objective is to train a model to perform the target task using a re-weighting technique that prevents similar mistakes in the future. In this formulation, we learn the class weights by minimizing the model's validation loss and re-train the model with data from an image generator informed by the class-wise performance and the real data. We apply our LFM framework to differentiable architecture search methods on image classification datasets such as CIFAR and ImageNet, and the results demonstrate the effectiveness of our proposed strategy.
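A heavily simplified sketch of the mistake-driven re-weighting idea is given below (my reading only, not the LFM formulation, which learns the weights by minimizing the validation loss): classes with higher validation error receive larger weights in the training loss of the next stage.

```python
# A heavily simplified, illustrative sketch of mistake-driven class re-weighting
# (not the LFM implementation): classes with higher validation error receive
# larger weights in the next stage's training loss.
import torch
import torch.nn.functional as F

num_classes = 10
class_weights = torch.ones(num_classes)

def update_class_weights(val_logits, val_labels, lr=0.5):
    preds = val_logits.argmax(dim=1)
    for c in range(num_classes):
        mask = val_labels == c
        if mask.any():
            error = (preds[mask] != c).float().mean()   # class-wise mistake rate
            class_weights[c] = 1.0 + lr * error         # emphasize error-prone classes

def weighted_training_loss(logits, labels):
    return F.cross_entropy(logits, labels, weight=class_weights)

logits = torch.randn(32, num_classes)
labels = torch.randint(0, num_classes, (32,))
update_class_weights(logits, labels)
print(weighted_training_loss(logits, labels))
```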
As a gradient-guided search method, differentiable neural architecture search (DARTS) greatly reduces the computational cost and speeds up the search. In DARTS, architecture parameters are introduced for the candidate operations, but the parameters of some weight-equipped operations may be poorly trained in the early stage, which causes unfair competition among candidate operations. Weight-free operations then appear in large numbers, leading to the performance-collapse phenomenon. Moreover, training the supernet occupies a large amount of memory, which results in low memory utilization. In this paper, partial channel connections based on channel attention for differentiable neural architecture search (ADARTS) are proposed. Some channels with higher weights are selected through the attention mechanism and sent into the operation space, while the remaining channels are directly concatenated with the processed channels. Selecting channels with higher attention weights better transmits important feature information into the search space and greatly improves search efficiency and memory utilization; it also avoids the instability of the network structure caused by random channel selection. Experimental results show that ADARTS achieves classification error rates of 2.46% and 17.06% on CIFAR-10 and CIFAR-100, respectively. ADARTS can effectively alleviate the problem of excessive skip connections appearing during the search and obtain network structures with better performance.
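One plausible sketch of attention-guided partial channel connections is shown below; the squeeze-and-excitation-style attention module and the 1/4 connection ratio are assumptions for illustration, not the ADARTS implementation.

```python
# A speculative sketch of attention-guided partial channel connections as
# described above (the attention module and the 1/4 ratio are assumptions):
# channels ranked highest by an attention vector pass through the candidate
# operation, the rest bypass it and are concatenated back.
import torch
import torch.nn as nn

class PartialChannelEdge(nn.Module):
    def __init__(self, channels: int, ratio: int = 4):
        super().__init__()
        self.k = channels // ratio                       # channels sent to the op
        self.attention = nn.Sequential(                  # squeeze-and-excitation style
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels), nn.Sigmoid())
        self.op = nn.Conv2d(self.k, self.k, 3, padding=1)    # one candidate op

    def forward(self, x):
        weights = self.attention(x).mean(0)              # per-channel importance
        order = weights.argsort(descending=True)
        top, rest = order[:self.k], order[self.k:]
        processed = self.op(x[:, top])                   # only attended channels
        return torch.cat([processed, x[:, rest]], dim=1)

edge = PartialChannelEdge(16)
y = edge(torch.randn(2, 16, 8, 8))
```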
Deep learning has demonstrated remarkable effectiveness in a wide variety of tasks, and it holds the potential to advance a multitude of applications, including in edge computing, where deep models are deployed on edge devices to enable instant data processing and response. A key challenge is that, while applying deep models usually incurs substantial memory and computational costs, edge devices typically offer only very limited storage and computational capability, which may vary substantially across devices. These characteristics make it difficult to build deep learning solutions that unleash the potential of edge devices while complying with their constraints. A promising approach to addressing this challenge is to automate the design of effective deep learning models that are lightweight, require only a little storage, and incur only low computational overhead. This survey provides comprehensive coverage of techniques for automating the design of deep learning models for edge computing. It offers an overview and comparison of the key metrics commonly used to quantify models in terms of effectiveness, lightness, and computational cost. The survey then covers three categories of state-of-the-art techniques for deep model design automation: automated neural architecture search, automated model compression, and joint automated design and compression. Finally, it discusses open issues and directions for future research.
The automated machine learning (AutoML) field has become increasingly relevant in recent years. These algorithms can develop models without the need for expert knowledge, facilitating the application of machine learning techniques in the industry. Neural Architecture Search (NAS) exploits deep learning techniques to autonomously produce neural network architectures whose results rival the state-of-the-art models hand-crafted by AI experts. However, this approach requires significant computational resources and hardware investments, making it less appealing for real-usage applications. This article presents the third version of Pareto-Optimal Progressive Neural Architecture Search (POPNASv3), a new sequential model-based optimization NAS algorithm targeting different hardware environments and multiple classification tasks. Our method is able to find competitive architectures within large search spaces, while keeping a flexible structure and data processing pipeline to adapt to different tasks. The algorithm employs Pareto optimality to reduce the number of architectures sampled during the search, drastically improving the time efficiency without loss in accuracy. The experiments performed on images and time series classification datasets provide evidence that POPNASv3 can explore a large set of assorted operators and converge to optimal architectures suited for the type of data provided under different scenarios.
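The Pareto-optimality filter can be illustrated with a generic two-objective dominance check; the candidate tuples below are invented for illustration and are not POPNASv3 results.

```python
# A small illustrative sketch of a two-objective Pareto filter as mentioned
# above (generic dominance check; the candidate tuples are invented examples):
# only non-dominated (accuracy, time) candidates are kept for training, which
# shrinks the number of architectures sampled during the search.
def pareto_front(candidates):
    # candidates: list of (architecture, predicted_accuracy, predicted_time)
    front = []
    for arch, acc, time in candidates:
        dominated = any(a2 >= acc and t2 <= time and (a2 > acc or t2 < time)
                        for _, a2, t2 in candidates)
        if not dominated:
            front.append((arch, acc, time))
    return front

candidates = [("A", 0.92, 120.0), ("B", 0.91, 90.0),
              ("C", 0.90, 150.0), ("D", 0.93, 200.0)]
print(pareto_front(candidates))   # "C" is dominated, so it is filtered out
```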
Differentiable architecture search has gradually become the mainstream research topic in neural architecture search for its ability to improve efficiency compared with earlier NAS methods (EA-based and RL-based). Recent differentiable NAS also aims to further improve search efficiency, reduce GPU memory consumption, and address the "depth gap" issue. However, these methods can no longer handle non-differentiable objectives, let alone multiple objectives such as performance, robustness, efficiency, and other metrics. We propose TND-NAS, an end-to-end architecture search framework towards non-differentiable objectives, which combines the high efficiency of differentiable NAS frameworks with the compatibility with non-differentiable metrics of multi-objective NAS (MNAS). Under the differentiable NAS framework, and with the continuous relaxation of the search space, TND-NAS optimizes the architecture parameters ($\alpha$) in discrete space, while progressively shrinking the supernet's search space by means of $\alpha$. Our representative experiments take two objectives (parameters and accuracy); for example, we achieve a series of high-performance compact architectures on the CIFAR-10 (1.09M/3.3%, 2.4M/2.95%, 9.57M/2.54%) and CIFAR-100 (2.46M/18.3%, 5.46M/16.73%, 12.88M/15.20%) datasets. Favorably, under real-world scenarios (resource-constrained and platform-specialized), TND-NAS can conveniently reach Pareto-optimal solutions.
In this paper, we propose a Shapley-value-based method to evaluate operation contributions for neural architecture search (Shapley-NAS). Differentiable architecture search (DARTS) acquires the optimal architecture by optimizing the architecture parameters with gradient descent, which significantly reduces the search cost. However, the magnitude of the architecture parameters updated by gradient descent fails to reveal the actual importance of operations to the task performance, and therefore harms the effectiveness of the obtained architectures. In contrast, we propose to evaluate the direct influence of operations on the validation accuracy. To deal with the complex relationships among supernet components, we leverage the Shapley value to quantify their marginal contributions by considering all possible combinations. Specifically, we iteratively optimize the supernet weights and update the architecture parameters by evaluating operation contributions via Shapley values, so that the optimal architecture is derived by selecting the operations that contribute significantly to the task. Since the exact computation of Shapley values is NP-hard, a Monte Carlo sampling algorithm with early truncation is employed for efficient approximation, and a momentum update mechanism is adopted to alleviate the fluctuation of the sampling process. Extensive experiments on various datasets and various search spaces show that our Shapley-NAS outperforms state-of-the-art methods by a considerable margin with a light search cost. The code is available at https://github.com/euphoria16/shapley-nas.git
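A hedged sketch of Monte Carlo Shapley estimation with early truncation follows, matching the description above only at a high level; the validation-accuracy oracle is a placeholder, not the Shapley-NAS supernet evaluation.

```python
# A hedged sketch of Monte Carlo Shapley estimation with early truncation for
# operation contributions, following the description above at a high level
# (the accuracy oracle is a placeholder, not the Shapley-NAS implementation).
import random

OPERATIONS = ["skip", "conv3x3", "conv5x5", "maxpool", "sepconv"]

def validation_accuracy(active_ops):
    # placeholder: accuracy of the supernet with only `active_ops` enabled
    return 0.5 + 0.1 * len(active_ops) / len(OPERATIONS) + random.uniform(-0.02, 0.02)

def shapley_estimates(num_samples=200, truncation_tol=1e-3):
    contrib = {op: 0.0 for op in OPERATIONS}
    full_acc = validation_accuracy(OPERATIONS)
    for _ in range(num_samples):
        perm = random.sample(OPERATIONS, len(OPERATIONS))   # random permutation
        prev_acc, active = validation_accuracy([]), []
        for op in perm:
            # early truncation: once close to the full accuracy, remaining
            # marginal contributions are treated as zero
            if full_acc - prev_acc < truncation_tol:
                break
            active.append(op)
            acc = validation_accuracy(active)
            contrib[op] += (acc - prev_acc) / num_samples   # marginal contribution
            prev_acc = acc
    return contrib

print(shapley_estimates())
```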