Recently, Neural Architecture Search (NAS) has successfully identified neural network architectures that exceed human-designed ones on large-scale image classification. In this paper, we study NAS for semantic image segmentation. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction, which exhibits a lot more network-level architectural variations. Therefore, we propose to search the network-level structure in addition to the cell-level structure, which forms a hierarchical architecture search space. We present a network-level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Auto-DeepLab, our architecture searched specifically for semantic image segmentation, attains state-of-the-art performance without any ImageNet pretraining.
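Auto-DeepLab's gradient-based search relies on a continuous relaxation in which every candidate operation is weighted by a softmax over learnable architecture parameters. A minimal sketch of that relaxation (DARTS-style; the candidate set and names are illustrative, not the actual Auto-DeepLab cell- or network-level search space):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuously relaxed choice among candidate operations (DARTS-style sketch)."""
    def __init__(self, channels):
        super().__init__()
        # Hypothetical candidate set; the real search space is larger.
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),  # atrous variant
            nn.Identity(),
        ])
        # Architecture parameters, optimized by gradient descent on a validation loss.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```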
Spatial pyramid pooling modules and encoder-decoder structures are used in deep neural networks for the semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages of both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results, especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both the Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on the PASCAL VOC 2012 and Cityscapes datasets, achieving test set performance of 89.0% and 82.1% without any post-processing. Our paper is accompanied by a publicly available reference implementation of the proposed models in TensorFlow at https://github.com/tensorflow/models/tree/master/research/deeplab.
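The depthwise separable convolution applied to the ASPP and decoder modules factorizes a standard convolution into a per-channel spatial convolution followed by a 1x1 pointwise convolution. A minimal sketch of such a layer (the layer composition here is illustrative, not taken from the reference implementation):

```python
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise 3x3 (optionally atrous) followed by a pointwise 1x1 convolution."""
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))
```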
Architectural advances in deep neural networks have led to remarkable leaps across a range of computer vision tasks. Rather than relying on human expertise, Neural Architecture Search (NAS) has emerged as a promising avenue for automating architecture design. While recent achievements in image classification suggest opportunities, the promise of NAS has yet to be thoroughly evaluated on the more challenging task of semantic segmentation. The main challenges of applying NAS to semantic segmentation arise from two aspects: (i) the high-resolution images to be processed; (ii) the additional requirement of real-time inference speed (i.e., real-time semantic segmentation) for applications such as autonomous driving. To address such challenges, we propose a surrogate-assisted multi-objective method in this paper. Through a series of customized prediction models, our method effectively transforms the original NAS task into an ordinary multi-objective optimization problem. Followed by a hierarchical pre-screening criterion for in-fill selection, our method progressively obtains a set of efficient architectures trading off between segmentation accuracy and inference speed. Empirical evaluations on three benchmark datasets, together with an application using the Huawei Atlas 200 DK, suggest that our method can identify architectures significantly outperforming existing state-of-the-art architectures designed both manually by human experts and automatically by other NAS methods.
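The multi-objective formulation keeps architectures that trade off segmentation accuracy against inference speed, i.e. the non-dominated set under the (surrogate-predicted) objectives. A schematic sketch of that selection step (the `objectives` interface is a placeholder, not the paper's predictor models):

```python
def pareto_front(candidates, objectives):
    """Keep architectures not dominated in (accuracy, latency) by any other.

    `objectives(c)` is assumed to return (accuracy, latency); higher accuracy
    and lower latency are better. Illustrative only.
    """
    scored = [(c, objectives(c)) for c in candidates]
    front = []
    for c, (acc, lat) in scored:
        dominated = any(a2 >= acc and l2 <= lat and (a2 > acc or l2 < lat)
                        for _, (a2, l2) in scored)
        if not dominated:
            front.append(c)
    return front
```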
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
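ASPP runs several parallel atrous convolutions with different sampling rates over the same feature map and merges the outputs. A simplified sketch of the idea (the rates and the concatenation-based fusion here are illustrative; the paper's exact head may differ):

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel atrous convolutions at multiple rates, fused by concatenation."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```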
It is commonly believed that high internal resolution combined with expensive operations (e.g., atrous convolutions) is necessary for accurate semantic segmentation, resulting in slow speed and large memory usage. In this paper, we question this belief and demonstrate that neither high internal resolution nor atrous convolutions are necessary. Our intuition is that although segmentation is a dense per-pixel prediction task, the semantics of each pixel often depend on both nearby neighbors and far-away context; therefore, a more powerful multi-scale feature fusion network plays a critical role. Following this intuition, we revisit the conventional multi-scale feature space (typically capped at P5) and extend it to a much richer space, up to P9, where the smallest features are only 1/512 of the input size and thus have very large receptive fields. To process such a rich feature space, we leverage the recent BiFPN to fuse the multi-scale features. Based on these insights, we develop a simplified segmentation model, named ESeg, which has neither high internal resolution nor expensive atrous convolutions. Perhaps surprisingly, our simple method can achieve better accuracy with faster speed than prior art across multiple datasets. In the real-time setting, ESeg-Lite-S achieves 76.0% mIoU on Cityscapes [12] at 189 FPS, outperforming FasterSeg [9] (73.1% mIoU at 170 FPS). Our ESeg-Lite-L runs at 79 FPS and achieves 80.1% mIoU, largely closing the gap between real-time and high-performance segmentation models.
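The key change is extending the feature pyramid past the usual P5 level by repeated downsampling, so the coarsest level (P9) is 1/512 of the input resolution, before fusing all levels with a BiFPN. A rough sketch of how such extra levels could be produced (the fusion network itself is omitted; strides and names are illustrative assumptions):

```python
import torch.nn as nn

class PyramidExtender(nn.Module):
    """Generate P6..P9 from P5 by repeated stride-2 convolutions (illustrative)."""
    def __init__(self, channels=256, num_extra_levels=4):
        super().__init__()
        self.downs = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, stride=2, padding=1)
            for _ in range(num_extra_levels)
        ])

    def forward(self, p5):
        levels = [p5]
        for down in self.downs:
            # Each level halves the spatial resolution of the previous one.
            levels.append(down(levels[-1]))
        return levels  # [P5, P6, P7, P8, P9]
```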
Semantic segmentation is a popular research topic in computer vision, and many efforts have been made on it with impressive results. In this paper, we intend to search for an optimal network structure that can run this task in real time. To achieve this goal, we jointly search the depth, channel width, dilation rate, and feature spatial resolution, which results in a search space of about 2.78*10^324 possible choices. To handle such a large search space, we leverage differentiable architecture search methods. However, the architecture parameters searched with existing differentiable methods need to be discretized, which causes a discretization gap between the architecture parameters found by the differentiable methods and their discretized versions used as the final solution of the architecture search. Hence, we propose to alleviate the problem of the discretization gap from the innovative perspective of solution space regularization. Specifically, a novel Solution Space Regularization (SSR) loss is first proposed to effectively encourage the supernet to converge to its discrete form. Then, a new hierarchical and progressive solution space shrinking method is presented to further achieve high search efficiency. Moreover, we theoretically show that the optimization of the SSR loss is equivalent to L_0-norm regularization, which accounts for the improved search-evaluation gap. Comprehensive experiments show that the proposed search scheme can efficiently find an optimal network structure that yields extremely fast segmentation (175 FPS) with a tiny model size (1 M) while maintaining comparable accuracy.
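The abstract does not give the exact form of the SSR loss, but its stated purpose is to push the supernet's architecture weights toward a discrete (one-hot) choice. A generic stand-in illustrating that kind of regularizer (an entropy penalty; the paper's actual loss may differ):

```python
import torch
import torch.nn.functional as F

def discreteness_regularizer(alpha):
    """Penalty pushing softmaxed architecture weights toward a one-hot vector.

    A generic stand-in for the SSR loss, whose exact form is not given in the
    abstract; any term minimized at the vertices of the probability simplex
    (here, the entropy) has the same qualitative effect.
    """
    p = F.softmax(alpha, dim=-1)
    return -(p * torch.log(p + 1e-8)).sum(dim=-1).mean()
```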
We present SegNeXt, a simple convolutional network architecture for semantic segmentation. Recent transformer-based models have dominated the field of semantic segmentation due to the efficiency of self-attention in encoding spatial information. In this paper, we show that convolutional attention is a more efficient and effective way to encode contextual information than the self-attention mechanism in transformers. By re-examining the characteristics owned by successful segmentation models, we discover several key components leading to the performance improvement of segmentation models. This motivates us to design a novel convolutional attention network that uses cheap convolutional operations. Without bells and whistles, our SegNeXt significantly improves the performance over previous state-of-the-art methods on popular benchmarks, including ADE20K, Cityscapes, COCO-Stuff, Pascal VOC, Pascal Context, and iSAID. Notably, SegNeXt outperforms EfficientNet-L2 w/ NAS-FPN and achieves 90.6% mIoU on the Pascal VOC 2012 test leaderboard using only 1/10 of its parameters. On average, SegNeXt achieves about a 2.0% mIoU improvement compared with the state-of-the-art methods on the ADE20K dataset with the same or fewer computations. Code is available at https://github.com/uyzhang/JSeg (Jittor) and https://github.com/Visual-Attention-Network/SegNeXt (PyTorch).
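Convolutional attention here means computing attention weights with cheap (depthwise) convolutions and re-weighting the input features multiplicatively, instead of computing pairwise self-attention. A much-simplified sketch of the pattern (kernel sizes and structure are illustrative, not the exact SegNeXt attention module):

```python
import torch.nn as nn

class ConvAttention(nn.Module):
    """Attention weights from depthwise convolutions, applied multiplicatively."""
    def __init__(self, channels):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_large = nn.Conv2d(channels, channels, 7, padding=9, dilation=3,
                                  groups=channels)  # larger effective receptive field
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = self.pw(self.dw_large(self.dw(x)))  # per-pixel, per-channel weights
        return attn * x
```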
We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small, which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder, Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state-of-the-art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 20% compared to MobileNetV2. MobileNetV3-Small is 6.6% more accurate compared to a MobileNetV2 model with comparable latency. MobileNetV3-Large detection is over 25% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 34% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.
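LR-ASPP is described as a lightweight segmentation head; a common reading is one 1x1-convolution branch modulated by pooled, sigmoid-gated channel weights and fused with a higher-resolution skip. A rough sketch under these assumptions (shapes and layer choices are illustrative, not the released architecture):

```python
import torch.nn as nn
import torch.nn.functional as F

class LiteRASPPHead(nn.Module):
    """Simplified LR-ASPP-style segmentation head (illustrative sketch)."""
    def __init__(self, high_ch, low_ch, inter_ch, num_classes):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(high_ch, inter_ch, 1, bias=False),
            nn.BatchNorm2d(inter_ch), nn.ReLU(inplace=True))
        self.scale = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(high_ch, inter_ch, 1), nn.Sigmoid())
        self.cls_high = nn.Conv2d(inter_ch, num_classes, 1)
        self.cls_low = nn.Conv2d(low_ch, num_classes, 1)

    def forward(self, high, low):
        x = self.branch(high) * self.scale(high)  # attention-weighted coarse features
        x = F.interpolate(x, size=low.shape[-2:], mode='bilinear', align_corners=False)
        return self.cls_high(x) + self.cls_low(low)  # fuse with high-resolution skip
```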
We present a next-generation neural network architecture, MOSAIC, for efficient and accurate semantic image segmentation on mobile devices. MOSAIC is designed using commonly supported neural operations across diverse mobile hardware platforms for flexible deployment on various mobile platforms. With a simple asymmetric encoder-decoder structure, which consists of an efficient multi-scale context encoder and a light-weight hybrid decoder to recover spatial details from aggregated information, MOSAIC achieves new state-of-the-art performance while balancing accuracy and computational cost. Deployed on top of a tailored feature extraction backbone based on a searched classification network, MOSAIC achieves a 5% absolute accuracy gain over the current industry-standard MLPerf models and state-of-the-art architectures.
Semantic segmentation is one of the key tasks in computer vision, which is to assign a category label to each pixel in an image. Despite significant progress achieved recently, most existing methods still suffer from two challenging issues: 1) the size of objects and stuff in an image can be very diverse, demanding multi-scale features be incorporated into fully convolutional networks (FCNs); 2) pixels close to or at the boundaries of objects/stuff are hard to classify due to the intrinsic weakness of convolutional networks. To address the first issue, we propose a new Multi-Receptive Field Module (MRFM) that explicitly takes multi-scale features into account. For the second issue, we design an edge-aware loss which is effective at distinguishing the boundaries of objects/stuff. With these two designs, our Multi-Receptive Field Network achieves new state-of-the-art results on two widely used semantic segmentation benchmark datasets. Specifically, we achieve a mean IoU of 83.0 on the Cityscapes dataset and 88.4 mean IoU on the Pascal VOC 2012 dataset.
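One simple way to realize an edge-aware loss is to detect boundary pixels from the label map and up-weight their cross-entropy term. A generic sketch of that idea (a stand-in for the paper's loss, whose exact formulation is not given in the abstract):

```python
import torch.nn.functional as F

def edge_aware_loss(logits, labels, boundary_weight=2.0, ignore_index=255):
    """Cross-entropy with extra weight on pixels near object/stuff boundaries."""
    ce = F.cross_entropy(logits, labels, ignore_index=ignore_index, reduction='none')
    lbl = labels.float().unsqueeze(1)
    # A pixel counts as boundary if any neighbour in a 3x3 window has another label.
    boundary = (F.max_pool2d(lbl, 3, stride=1, padding=1) !=
                -F.max_pool2d(-lbl, 3, stride=1, padding=1)).squeeze(1).float()
    weights = 1.0 + (boundary_weight - 1.0) * boundary
    valid = (labels != ignore_index).float()
    return (weights * ce * valid).sum() / valid.sum().clamp(min=1)
```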
Deep neural networks (DNNs) are built by sequentially performing linear and non-linear operations. Using a combination of linear and non-linear procedures is critical for generating a sufficiently deep feature space. Most non-linear operators are derivations of activation functions or pooling functions. Mathematical morphology is a branch of mathematics that provides non-linear operators for a variety of image processing problems. In this paper, we investigate the utility of integrating these operators into an end-to-end deep learning framework. DNNs are designed to acquire a realistic representation for a specific task. Morphological operators give topological descriptors that convey salient information about the shapes of objects depicted in images. We propose a meta-learning-based approach to incorporate morphological operators into DNNs. The learned architecture demonstrates how our novel morphological operations significantly increase DNN performance on various tasks, including image classification and edge detection.
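The basic morphological operators have straightforward differentiable counterparts built from max/min filtering, which is one simple way such non-linearities can sit inside an end-to-end trained network. A minimal sketch (grayscale, flat structuring element; this illustrates the operators only, not the paper's meta-learning scheme):

```python
import torch.nn.functional as F

def dilation(x, kernel_size=3):
    """Grayscale dilation with a flat square structuring element (max filter)."""
    return F.max_pool2d(x, kernel_size, stride=1, padding=kernel_size // 2)

def erosion(x, kernel_size=3):
    """Grayscale erosion as dilation of the negated image (min filter)."""
    return -F.max_pool2d(-x, kernel_size, stride=1, padding=kernel_size // 2)

def opening(x, kernel_size=3):
    """Erosion followed by dilation; suppresses small bright structures."""
    return dilation(erosion(x, kernel_size), kernel_size)
```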
Recent advances in semantic segmentation usually adopt an ImageNet-pretrained backbone with a special context module after it to quickly increase the field of view. Although successful, the backbone, in which most of the computation lies, does not have a large enough field of view to make the best decisions. Some recent advances tackle this problem by quickly downsampling the resolution in the backbone while also having one or more parallel branches with higher resolutions. We take a different approach by designing a ResNeXt-inspired block structure that uses two parallel 3x3 convolutional layers with different dilation rates to increase the field of view while preserving local details. By repeating this block structure in the backbone, we do not need to append any special context module after it. In addition, we propose a lightweight decoder that restores local information better than common alternatives. To demonstrate the effectiveness of our approach, our model RegSeg achieves state-of-the-art results on the real-time Cityscapes and CamVid datasets. Using a T4 GPU with mixed precision, RegSeg achieves 78.3 mIoU on the Cityscapes test set at 30 FPS and 80.9 mIoU on the CamVid test set at 70 FPS, both without ImageNet pretraining.
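The block described runs two parallel 3x3 convolutions with different dilation rates over the channels and merges them, growing the field of view while keeping local detail. A simplified sketch of such a block (the channel split, rates, and residual details are illustrative assumptions):

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Two parallel 3x3 convs with different dilation rates, fused with a residual."""
    def __init__(self, channels, small_rate=1, large_rate=4):
        super().__init__()
        half = channels // 2
        self.conv_local = nn.Conv2d(half, half, 3, padding=small_rate,
                                    dilation=small_rate, bias=False)
        self.conv_context = nn.Conv2d(channels - half, channels - half, 3,
                                      padding=large_rate, dilation=large_rate,
                                      bias=False)
        self.fuse = nn.Sequential(nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
                                  nn.Conv2d(channels, channels, 1, bias=False))

    def forward(self, x):
        half = x.size(1) // 2
        a, b = torch.split(x, [half, x.size(1) - half], dim=1)
        out = torch.cat([self.conv_local(a), self.conv_context(b)], dim=1)
        return x + self.fuse(out)  # residual connection keeps local detail
```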
Conventional neural architecture search (NAS) approaches are based on reinforcement learning or evolutionary strategy, which take more than 3000 GPU hours to find a good model on CIFAR-10. We propose an efficient NAS approach learning to search by gradient descent. Our approach represents the search space as a directed acyclic graph (DAG). This DAG contains billions of sub-graphs, each of which indicates a kind of neural architecture. To avoid traversing all the possibilities of the sub-graphs, we develop a differentiable sampler over the DAG. This sampler is learnable and optimized by the validation loss after training the sampled architecture. In this way, our approach can be trained in an end-to-end fashion by gradient descent, named Gradient-based search using Differentiable Architecture Sampler (GDAS). In experiments, we can finish one searching procedure in four GPU hours on CIFAR-10, and the discovered model obtains a test error of 2.82% with only 2.5M parameters, which is on par with the state-of-the-art. Code is publicly available on GitHub: https://github.com/D-X-Y/NAS-Projects.
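The differentiable sampler can be realized with the Gumbel-softmax trick: one candidate operation is drawn per forward pass while gradients still flow to the architecture logits. A minimal sketch of that mechanism (the candidate set and names are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampledOp(nn.Module):
    """Samples one candidate op per forward pass via hard Gumbel-softmax."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture logits

    def forward(self, x, tau=1.0):
        # One-hot sample in the forward pass, soft gradients in the backward pass.
        mask = F.gumbel_softmax(self.alpha, tau=tau, hard=True)
        idx = int(mask.argmax())
        # Only the sampled op is evaluated, which keeps the search cheap.
        return mask[idx] * self.ops[idx](x)
```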
We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space. Direct comparison under the same search space shows that our method is up to 5 times more efficient than the RL method of Zoph et al. (2018) in terms of number of models evaluated, and 8 times faster in terms of total compute. The structures we discover in this way achieve state of the art classification accuracies on CIFAR-10 and ImageNet.
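SMBO alternates between expanding candidate structures by one block, scoring the expansions with a learned surrogate, and actually training only the predicted top performers. A schematic sketch of the loop (the `expand`, `surrogate`, and `train_and_evaluate` pieces are placeholders, not the paper's implementation):

```python
def smbo_search(initial_cells, expand, surrogate, train_and_evaluate,
                max_blocks=5, beam_size=10):
    """Progressive search: grow cells block by block, guided by a surrogate model."""
    candidates = list(initial_cells)
    for _ in range(max_blocks):
        # Train/evaluate the current beam and fit the surrogate on real scores.
        scores = [train_and_evaluate(c) for c in candidates]
        surrogate.fit(candidates, scores)
        # Expand every candidate by one block, then keep the predicted top-K.
        expanded = [c2 for c in candidates for c2 in expand(c)]
        expanded.sort(key=surrogate.predict, reverse=True)
        candidates = expanded[:beam_size]
    return max(candidates, key=surrogate.predict)
```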
Recently, self-attention operators have shown superior performance as a stand-alone building block for vision models. However, existing self-attention models are often hand-designed, modified from CNNs, and obtained by stacking only one type of operator. A wider architecture space that combines different self-attention operators and convolutions is rarely explored. In this paper, we explore this novel architecture space with weight-sharing Neural Architecture Search (NAS) algorithms. The resulting architecture is named TrioNet for combining convolution, local self-attention, and global (axial) self-attention operators. In order to search effectively in this huge architecture space, we propose Hierarchical Sampling for better training of the supernet. In addition, we propose a novel weight-sharing strategy, Multi-head Sharing, specifically for multi-head self-attention operators. Our searched TrioNet, which combines self-attention and convolution, outperforms all stand-alone models with fewer FLOPs on ImageNet classification, where self-attention performs better than convolution. Furthermore, on various small datasets, we observe inferior performance for self-attention models, but our TrioNet is still able to match the best operator, convolution, in this case. Our code is available at https://github.com/phj128/trionet.
Image segmentation is a key topic in image processing and computer vision with applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among many others. Various algorithms for image segmentation have been developed in the literature. Recently, due to the success of deep learning models in a wide range of vision applications, there has been a substantial amount of works aimed at developing image segmentation approaches using deep learning models. In this survey, we provide a comprehensive review of the literature at the time of this writing, covering a broad spectrum of pioneering works for semantic and instance-level segmentation, including fully convolutional pixel-labeling networks, encoder-decoder architectures, multi-scale and pyramid based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the similarity, strengths and challenges of these deep learning models, examine the most widely used datasets, report performances, and discuss promising future research directions in this area.
Semantic segmentation is a key technology for autonomous vehicles to understand the surrounding scenes. The appealing performance of contemporary models usually comes at the expense of heavy computation and lengthy inference time, which is intolerable for self-driving. Using light-weight architectures (encoder-decoder or two-pathway) or reasoning on low-resolution images, recent methods realize very fast scene parsing, even running at more than 100 FPS on a single 1080Ti GPU. However, there is still a significant performance gap between these real-time methods and models based on dilation backbones. To tackle this problem, we propose an efficient backbone specially designed for real-time semantic segmentation. The proposed Deep Dual-Resolution Network (DDRNet) is composed of two deep branches between which multiple bilateral fusions are performed. Additionally, we design a new contextual information extractor named Deep Aggregation Pyramid Pooling Module (DAPPM) to enlarge effective receptive fields and fuse multi-scale context based on low-resolution feature maps. Our method achieves a new state-of-the-art trade-off between accuracy and speed on both the Cityscapes and CamVid datasets. In particular, on a single 2080Ti GPU, DDRNet-23-slim runs at 102 FPS on the Cityscapes test set and achieves 74.7% mIoU on the CamVid test set. With widely used test augmentation, our method is superior to state-of-the-art models while requiring much less computation. Codes and trained models are available online.
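A single bilateral fusion step between the two branches can be sketched as a pair of cross-connections: low-resolution features are projected and upsampled into the high-resolution branch, and vice versa. An illustrative sketch (assuming a factor-of-two resolution gap per step; the real network uses repeated fusions and the DAPPM module, which are omitted here):

```python
import torch.nn as nn
import torch.nn.functional as F

class BilateralFusion(nn.Module):
    """Exchange information between a high-resolution and a low-resolution branch."""
    def __init__(self, high_ch, low_ch):
        super().__init__()
        self.low_to_high = nn.Conv2d(low_ch, high_ch, 1, bias=False)   # then upsample
        self.high_to_low = nn.Conv2d(high_ch, low_ch, 3, stride=2, padding=1,
                                     bias=False)                        # downsample

    def forward(self, high, low):
        up = F.interpolate(self.low_to_high(low), size=high.shape[-2:],
                           mode='bilinear', align_corners=False)
        new_high = high + up                       # inject context into the detail branch
        new_low = low + self.high_to_low(high)     # inject detail into the context branch
        return new_high, new_low
```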
Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part,
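The attention model produces a per-pixel softmax weight for each scale, and the final prediction is the weighted sum of the score maps computed from the resized inputs. A compact sketch of that fusion step (the attention branch's architecture is illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleAttentionFusion(nn.Module):
    """Per-pixel soft weighting of score maps computed at multiple scales."""
    def __init__(self, feat_channels, num_scales):
        super().__init__()
        # Small attention head predicting one weight map per scale.
        self.attn = nn.Conv2d(feat_channels * num_scales, num_scales, 3, padding=1)

    def forward(self, feats, scores):
        # feats/scores: lists of per-scale tensors, already resized to a common size.
        weights = F.softmax(self.attn(torch.cat(feats, dim=1)), dim=1)  # N x S x H x W
        return sum(weights[:, s:s + 1] * scores[s] for s in range(len(scores)))
```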
Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the "NASNet search space") which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters, to design a convolutional architecture, which we name a "NASNet architecture". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves a 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS, a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0%, achieving 43.1% mAP on the COCO dataset.