Neural architecture search (NAS) is a promising research direction that has the potential to replace expert-designed networks with learned, task-specific architectures. In this work, in order to help ground the empirical results in this field, we propose new NAS baselines that build off the following observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization. Leveraging these observations, we evaluate both random search with early-stopping and a novel random search with weight-sharing algorithm on two standard NAS benchmarks: PTB and CIFAR-10. Our results show that random search with early-stopping is a competitive NAS baseline, e.g., it performs at least as well as ENAS [41], a leading NAS method, on both benchmarks. Additionally, random search with weight-sharing outperforms random search with early-stopping, achieving a state-of-the-art NAS result on PTB and a highly competitive result on CIFAR-10. Finally, we explore the existing reproducibility issues of published NAS results. We note the lack of source material needed to exactly reproduce these results, and further discuss the robustness of published results given the various sources of variability in NAS experimental setups. Relatedly, we provide all information (code, random seeds, documentation) needed to exactly reproduce our results, and report our random search with weight-sharing results for each benchmark on multiple runs.
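To make these observations concrete, here is a minimal sketch of random search with early stopping in the style of successive halving; the sample_architecture and train_and_eval callables, the budgets, and the reduction factor are illustrative assumptions rather than the authors' exact setup.

```python
def random_search_early_stopping(sample_architecture, train_and_eval,
                                 n_arch=64, min_epochs=1, eta=4, max_epochs=64):
    """Illustrative random search with successive-halving-style early stopping.

    sample_architecture() draws a random architecture from the search space;
    train_and_eval(arch, epochs) returns validation error after training for
    `epochs` epochs. Both are placeholders for benchmark-specific code, and a
    real implementation would checkpoint and resume rather than retrain.
    """
    candidates = [sample_architecture() for _ in range(n_arch)]
    budget = min_epochs
    while len(candidates) > 1 and budget <= max_epochs:
        # rank the surviving architectures by validation error at this budget
        ranked = sorted(candidates, key=lambda arch: train_and_eval(arch, budget))
        # keep the best 1/eta fraction and promote the survivors to a larger budget
        candidates = ranked[:max(1, len(ranked) // eta)]
        budget *= eta
    return candidates[0]
```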
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.
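The continuous relaxation at the core of this approach can be summarized by the mixed operation from the DARTS formulation, in which each edge $(i,j)$ of the cell combines the candidate operations $o \in \mathcal{O}$ through a softmax over learnable architecture parameters $\alpha^{(i,j)}$:

$$\bar{o}^{(i,j)}(x) = \sum_{o \in \mathcal{O}} \frac{\exp\big(\alpha_o^{(i,j)}\big)}{\sum_{o' \in \mathcal{O}} \exp\big(\alpha_{o'}^{(i,j)}\big)}\, o(x)$$

The architecture parameters $\alpha$ and the network weights are then optimized jointly by gradient descent in a bilevel fashion, and the final discrete cell is obtained by keeping, on each edge, the operation with the largest $\alpha_o^{(i,j)}$.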
We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. ENAS constructs a large computational graph, where each subgraph represents a neural network architecture, hence forcing all architectures to share their parameters. A controller is trained with policy gradient to search for a subgraph that maximizes the expected reward on a validation set. Meanwhile a model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Sharing parameters among child models allows ENAS to deliver strong empirical performances, whilst using much fewer GPU-hours than existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On Penn Treebank, ENAS discovers a novel architecture that achieves a test perplexity of 56.3, on par with the existing state-of-the-art among all methods without post-training processing. On CIFAR-10, ENAS finds a novel architecture that achieves 2.89% test error, which is on par with the 2.65% test error of NASNet (Zoph et al., 2018).
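The alternating optimization described above can be sketched as follows; the controller, shared_model, and reward_fn objects and their method names are hypothetical stand-ins, and the REINFORCE update is reduced to a single-sample estimate with a moving-average baseline.

```python
def enas_style_search(controller, shared_model, reward_fn,
                      n_steps=1000, baseline_decay=0.95):
    """Sketch of the ENAS-style alternation between shared-weight training and
    controller updates. All arguments are illustrative placeholders, not the
    actual ENAS implementation."""
    baseline = 0.0
    for _ in range(n_steps):
        # Phase 1: sample a sub-graph and train the shared weights on it
        arch, log_prob = controller.sample()   # architecture + log pi(arch)
        shared_model.train_step(arch)          # minimize cross-entropy on training data

        # Phase 2: update the controller to maximize expected validation reward
        reward = reward_fn(shared_model, arch)  # e.g. accuracy on a validation batch
        baseline = baseline_decay * baseline + (1 - baseline_decay) * reward
        # REINFORCE: ascend the gradient of (reward - baseline) * log pi(arch)
        controller.reinforce_update(advantage=reward - baseline, log_prob=log_prob)
    return controller
```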
There is growing interest in automating neural network architecture design. Existing architecture search methods can be computationally expensive, requiring thousands of different architectures to be trained from scratch. Recent work has explored weight sharing across models to amortize the cost of training. Although previous methods reduced the cost of architecture search by orders of magnitude, they remain complex, requiring hypernetworks or reinforcement learning controllers. We aim to understand weight sharing for one-shot architecture search. With careful experimental analysis, we show that it is possible to efficiently identify promising architectures from a complex search space without either hypernetworks or RL.
We propose Efficient Neural Architecture Search (ENAS), a fast and inexpensive approach for automatic model design. In ENAS, a controller discovers neural network architectures by searching for an optimal subgraph within a large computational graph. The controller is trained with policy gradient to select a subgraph that maximizes the expected reward on a validation set. Meanwhile the model corresponding to the selected subgraph is trained to minimize a canonical cross entropy loss. Sharing parameters among child models allows ENAS to deliver strong empirical performances, while using much fewer GPU-hours than existing automatic model design approaches, and notably, 1000x less expensive than standard Neural Architecture Search. On the Penn Treebank dataset, ENAS discovers a novel architecture that achieves a test perplexity of 55.8, establishing a new state-of-the-art among all methods without post-training processing. On the CIFAR-10 dataset, ENAS finds a novel architecture that achieves 2.89% test error, which is on par with the 2.65% test error of NASNet (Zoph et al., 2018).
Deep Learning has enabled remarkable progress over the last years on a variety of tasks, such as image recognition, speech recognition, and machine translation. One crucial aspect of this progress is the design of novel neural architectures. Currently employed architectures have mostly been developed manually by human experts, which is a time-consuming and error-prone process. Because of this, there is growing interest in automated neural architecture search methods. We provide an overview of existing work in this field of research and categorize it according to three dimensions: search space, search strategy, and performance estimation strategy.
Recent advances in neural architecture search (NAS) demand tremendous computational resources, which makes it difficult to reproduce experiments and imposes a barrier-to-entry to researchers without access to large-scale computation. We aim to ameliorate these problems by introducing NAS-Bench-101, the first public architecture dataset for NAS research. To build NAS-Bench-101, we carefully constructed a compact, yet expressive, search space, exploiting graph isomorphisms to identify 423k unique convolutional architectures. We trained and evaluated all of these architectures multiple times on CIFAR-10 and compiled the results into a large dataset of over 5 million trained models. This allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the precomputed dataset. We demonstrate its utility by analyzing the dataset as a whole and by benchmarking a range of architecture optimization algorithms.
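To illustrate the "query instead of train" workflow that such a tabular benchmark enables, the sketch below looks up a cell, encoded as an adjacency matrix plus a list of operation labels, in a precomputed table; the key format and the record fields are assumptions made for illustration, not the exact NAS-Bench-101 API.

```python
def query_precomputed(table, matrix, ops):
    """Return precomputed statistics for one cell.

    `table` is assumed to map (adjacency-matrix rows, operation labels) to a
    dict holding fields such as 'test_accuracy' and 'training_time'.
    """
    key = (tuple(map(tuple, matrix)), tuple(ops))
    record = table[key]
    return record["test_accuracy"], record["training_time"]

def random_search_on_benchmark(table, sample_cell, n_samples=500):
    """Evaluate n_samples random cells in milliseconds via table lookups and
    return the best test accuracy found. sample_cell() is a placeholder that
    yields a random (matrix, ops) pair from the search space."""
    return max(query_precomputed(table, *sample_cell())[0]
               for _ in range(n_samples))
```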
We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.
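A minimal sketch of the sequential model-based search loop described above: grow cells one block at a time, rank the candidate expansions with a learned surrogate, and train only the most promising ones. The expand, train_and_eval, and surrogate objects are illustrative assumptions, not the paper's implementation.

```python
def progressive_search(initial_cells, expand, train_and_eval, surrogate,
                       max_blocks=5, beam_size=10):
    """SMBO-style progressive search over cell structures of increasing size."""
    # Train the smallest cells directly and fit the surrogate on them
    candidates = list(initial_cells)
    scores = [train_and_eval(cell) for cell in candidates]
    surrogate.fit(candidates, scores)

    for _ in range(max_blocks - 1):
        # Enumerate all one-block expansions of the current beam
        expansions = [bigger for cell in candidates for bigger in expand(cell)]
        # Rank the expansions with the surrogate; train only the top beam_size
        ranked = sorted(expansions, key=surrogate.predict, reverse=True)
        candidates = ranked[:beam_size]
        scores = [train_and_eval(cell) for cell in candidates]
        # Refit the surrogate on the newly observed scores (simplified here)
        surrogate.fit(candidates, scores)

    best_cell, _ = max(zip(candidates, scores), key=lambda pair: pair[1])
    return best_cell
```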
The automated machine learning (AutoML) field has become increasingly relevant in recent years. These algorithms can develop models without the need for expert knowledge, facilitating the application of machine learning techniques in the industry. Neural Architecture Search (NAS) exploits deep learning techniques to autonomously produce neural network architectures whose results rival the state-of-the-art models hand-crafted by AI experts. However, this approach requires significant computational resources and hardware investments, making it less appealing for real-world applications. This article presents the third version of Pareto-Optimal Progressive Neural Architecture Search (POPNASv3), a new sequential model-based optimization NAS algorithm targeting different hardware environments and multiple classification tasks. Our method is able to find competitive architectures within large search spaces, while keeping a flexible structure and data processing pipeline to adapt to different tasks. The algorithm employs Pareto optimality to reduce the number of architectures sampled during the search, drastically improving the time efficiency without loss in accuracy. The experiments performed on images and time series classification datasets provide evidence that POPNASv3 can explore a large set of assorted operators and converge to optimal architectures suited for the type of data provided under different scenarios.
The standard paradigm in neural architecture search (NAS) is to search for a fully deterministic architecture with specific operations and connections. In this work, we instead propose to search for the optimal operation distribution, which provides a stochastic and approximate solution that can be used to sample architectures of arbitrary length. We propose and show that, given an architecture cell, its performance largely depends on the ratio of the operations used rather than on any specific connection pattern in typical search spaces; that is, small changes to the ordering of operations are often irrelevant. This intuition is orthogonal to any particular search strategy and can be applied to a diverse set of NAS algorithms. Through extensive validation on 4 datasets and with 4 NAS techniques (Bayesian optimization, differentiable search, local search, and random search), we show that the operation distribution (1) retains enough discriminative power to reliably identify good solutions and (2) is significantly easier to optimize than traditional encodings, yielding large speed-ups at almost no cost in performance. Indeed, this simple intuition significantly reduces the cost of current approaches and may enable NAS to be used in a much broader range of applications.
Neural architectures can be naturally viewed as computational graphs. Motivated by this perspective, we, in this paper, study neural architecture search (NAS) through the lens of learning random graph models. In contrast to existing NAS methods, which largely focus on searching for a single best architecture, i.e., point estimation, we propose GraphPNAS, a deep graph generative model that learns a distribution of well-performing architectures. Relying on graph neural networks (GNNs), our GraphPNAS can better capture topologies of good neural architectures and relations between operators therein. Moreover, our graph generator leads to a learnable probabilistic search method that is more flexible and efficient than the commonly used RNN generator and random search methods. Finally, we learn our generator via an efficient reinforcement learning formulation for NAS. To assess the effectiveness of our GraphPNAS, we conduct extensive experiments on three search spaces, including the challenging RandWire on TinyImageNet, ENAS on CIFAR10, and NAS-Bench-101/201. The complexity of RandWire is significantly larger than that of other search spaces in the literature. We show that our proposed graph generator consistently outperforms the RNN-based one and achieves performance better than or comparable to state-of-the-art NAS methods.
Automated machine learning (AutoML) is an important step toward making machine learning models widely applicable for solving real-world problems. Despite many research advances, machine learning methods are still not fully utilized by industry, mainly because of data privacy and security regulations, the high cost of storing and computing over ever-increasing amounts of data at a central location, and, most importantly, the lack of expertise. We therefore introduce a novel framework, HANF ($\textbf{H}$yperparameter $\textbf{A}$nd $\textbf{N}$eural architecture search $\textbf{F}$ramework), which builds an automated framework for data distributed across several data-owner servers without the need to bring the data to a central location. HANF jointly optimizes the neural architecture and the non-architectural hyperparameters of a learning algorithm in this data-distributed setting, using gradient-based neural architecture search and an $n$-armed bandit approach, respectively. We show that HANF efficiently finds an optimized neural architecture and tunes the hyperparameters on the data-owner servers. Furthermore, HANF can be applied in both federated and non-federated settings. Empirically, we show that HANF converges to a suitable architecture and set of non-architectural hyperparameters on image classification tasks.
Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategy, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach learning to search by gradient descent. Our approach represents the search space as a directed acyclic graph (DAG). This DAG contains billions of sub-graphs, each of which indicates a kind of neural architecture. To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG. This sampler is learnable and optimized by the validation loss after training the sampled architecture. In this way, our approach can be trained in an end-to-end fashion by gradient descent, named Gradient-based search using Differentiable Architecture Sampler (GDAS). In experiments, we can finish one searching procedure in four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82% with only 2.5M parameters, which is on par with the state-of-the-art. Code is publicly available on GitHub: https://github.com/D-X-Y/NAS-Projects.
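The differentiable sampler can be illustrated with a standard Gumbel-softmax step: sample a discrete operation per edge for the forward pass while keeping a soft, differentiable distribution for the backward pass. The numpy sketch below shows only this sampling step under those assumptions, not the full GDAS training loop.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Draw one operation index for an edge from its architecture logits.

    Returns the hard (discrete) choice used in the forward pass together with
    the soft probabilities through which gradients would flow in a
    straight-through estimator.
    """
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    soft = np.exp((logits + gumbel) / tau)
    soft /= soft.sum()
    hard_choice = int(np.argmax(soft))
    return hard_choice, soft
```

In a full search, the temperature tau would typically be annealed so that the soft distribution gradually approaches the discrete choice.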
The success of neural architecture search (NAS) has been limited by excessive computational requirements. While modern weight-sharing NAS methods such as DARTS can complete the search in single-digit GPU days, extracting the final best architecture from the shared weights is notoriously unreliable. Training-Speed-Estimate (TSE), a recently developed generalization estimator with a Bayesian marginal-likelihood interpretation, has been used in place of the validation loss in DARTS's gradient-based optimization. This prevents the DARTS skip-connection collapse, which significantly improves performance on NASBench-201 and the original DARTS search space. We extend these results by applying a range of DARTS diagnostics and show several unusual behaviors that arise from not using a validation set. Furthermore, our experiments yield concrete examples of the depth gap and of topology selection exerting a strong influence on search performance, even though these factors typically receive limited attention in the literature compared with operation selection.
Recently, the efficient, automated search for well-performing neural architectures (NAS) has attracted increasing attention. A central research goal is therefore to reduce the need for costly evaluations of neural architectures while efficiently exploring large search spaces. To this end, surrogate models embed architectures in a latent space and predict their performance, while generative models of neural architectures enable optimization-based search within the latent space from which the generator draws. Both surrogate and generative models aim to facilitate query-efficient search in a well-structured latent space. In this paper, we further improve the trade-off between query efficiency and the generation of promising architectures by leveraging the strengths of both efficient surrogate models and generative design. To this end, we propose a generative model, paired with a surrogate predictor, that iteratively learns to generate samples from increasingly promising latent subspaces. This approach leads to very effective and efficient architecture search while keeping the number of queries low. In addition, our method allows multiple objectives, such as accuracy and hardware latency, to be jointly optimized in a straightforward manner. We demonstrate the benefits of this approach not only with respect to optimizing architectures for the highest classification accuracy, but also under hardware constraints, and we outperform state-of-the-art methods on several NAS benchmarks for both single and multiple objectives. We also achieve state-of-the-art performance on ImageNet. The code is available at http://github.com/jovitalukasik/ag-net.
While early research in neural architecture search (NAS) required extreme computational resources, the recent release of tabular and surrogate benchmarks has greatly increased the speed and reproducibility of NAS research. However, the two most popular benchmarks do not provide the full training information for each architecture. As a result, it is not possible to run many types of multi-fidelity techniques on these benchmarks, such as learning-curve extrapolation, which require evaluating architectures at arbitrary epochs. In this work, we present a method that uses singular value decomposition and noise modeling to create surrogate benchmarks, NAS-Bench-111, NAS-Bench-311, and NAS-Bench-NLP11, which output the full training information for each architecture rather than only the final validation accuracy. We demonstrate the power of using full training information by introducing a learning-curve extrapolation framework that modifies single-fidelity algorithms, showing that it leads to improvements over popular single-fidelity algorithms that claimed to be state-of-the-art at the time of release. Our code and pretrained models are available at https://github.com/automl/nas-bench-x11.
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize well-studied tasks such as image classification on CIFAR or ImageNet. This makes the performance of NAS methods on more diverse domains poorly understood. In this paper, we present NAS-Bench-360, a benchmark suite for evaluating methods on domains beyond those traditionally studied in architecture search, and we use it to address the following question: do state-of-the-art NAS methods perform well on diverse tasks? To construct the benchmark, we curate ten tasks spanning a diverse array of application domains, dataset sizes, problem dimensionalities, and learning objectives. Each task is carefully chosen to interoperate with modern CNN-based search methods while possibly being far afield from their original development domain. To speed up and reduce the cost of NAS research, for two of the tasks we release the precomputed performance of 15,625 architectures comprising a standard CNN search space. Experimentally, we show the need for the more robust NAS evaluation that NAS-Bench-360 enables, demonstrating that several modern NAS procedures perform inconsistently across the ten tasks, with many catastrophically poor results. We also show how NAS-Bench-360 and its associated precomputed results can enable future scientific discoveries by testing several recent hypotheses promoted in the NAS literature. NAS-Bench-360 is hosted at https://nb360.ml.cmu.edu.
Hyperparameter optimization (HPO) and neural architecture search (NAS) are the methods of choice for obtaining best-in-class machine learning models, but in practice they can be costly to run. When models are trained on large datasets, tuning them with HPO or NAS quickly becomes prohibitively expensive for practitioners, even when efficient multi-fidelity methods are employed. We propose an approach to tackle the challenge of tuning machine learning models trained on large datasets with limited computational resources. Our approach, named PASHA, is able to dynamically allocate the maximum resources for the tuning procedure depending on the need. The experimental comparison shows that PASHA identifies well-performing hyperparameter configurations and architectures while consuming significantly fewer computational resources than solutions such as ASHA.
While differentiable architecture search (DARTS) has become the mainstream paradigm in neural architecture search (NAS) thanks to its simplicity and efficiency, recent works have found that the performance of the searched architectures barely improves as the DARTS optimization proceeds, and that the final magnitudes obtained by DARTS can hardly indicate the importance of the operations. These observations suggest that the supervision signal in DARTS may be a poor or unreliable indicator for architecture search, motivating an intriguing direction: can we measure the importance of operations under the differentiable paradigm without any training? We provide an affirmative answer by casting NAS as a network-pruning-at-initialization problem. Using the recently proposed synaptic saliency criterion from pruning at initialization, we score the importance of candidate operations without any training and propose a novel framework named training-free differentiable architecture search (FreeDARTS). We show that, without any training, FreeDARTS with different proxy metrics can outperform most NAS baselines across different search spaces. More importantly, FreeDARTS is extremely memory- and computation-efficient, because it abandons training during the architecture search phase; this makes it possible to perform architecture search over more flexible spaces and to eliminate the depth gap between architecture search and evaluation. We hope our work inspires further attempts to solve NAS from the perspective of pruning at initialization.
Differentiable architecture search is ubiquitous in the NAS field due to its simplicity and efficiency, and it is dominated by two paradigms: multi-path algorithms and single-path methods. Multi-path frameworks (e.g., DARTS) are intuitive but suffer from high memory usage and training collapse. Single-path methods (e.g., GDAS and ProxylessNAS) mitigate the memory issue and shrink the gap between search and evaluation, but sacrifice performance. In this paper, we propose a conceptually simple yet efficient method to bridge these two paradigms, called Mutually-aware Sub-Graph Differentiable Architecture Search (MSG-DAS). The core of our framework is a differentiable Gumbel-TopK sampler that produces multiple mutually exclusive single-path sub-graphs. To alleviate the severe skip-connection problem brought on by the multiple-sub-graph setting, we propose a Dropblock-Identity module to stabilize the optimization. To make full use of the available models (the supernet and the sub-graphs), we introduce a memory-efficient supernet-guided distillation to improve training. The proposed framework strikes a balance between flexible memory usage and search quality. We demonstrate the effectiveness of our method on ImageNet and CIFAR-10, where the searched models show performance comparable to recent approaches.
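The differentiable Gumbel-TopK sampler at the core of the framework can be sketched with the standard Gumbel-top-k trick for sampling without replacement: perturb the edge logits with Gumbel noise and take the k largest, so the k sub-graphs receive mutually exclusive operations on that edge. The numpy sketch below is an illustration under that reading, not the authors' implementation.

```python
import numpy as np

def gumbel_topk(logits, k, rng=None):
    """Return k mutually exclusive operation indices for one edge by
    perturbing the logits with Gumbel noise and taking the top-k
    (equivalent to sampling k operations without replacement)."""
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    perturbed = logits + gumbel
    # indices of the k largest perturbed logits; index i is assigned to sub-graph i
    return np.argsort(-perturbed)[:k]
```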