With deep neural networks (DNNs) emerging as the backbone of many computer vision tasks, their adoption in real-world consumer applications keeps growing. Given the abundance and omnipresence of smart devices, "smart ecosystems" are being formed in which sensing happens concurrently rather than in a standalone manner. This is shifting the on-device inference paradigm towards deploying centralized neural processing units (NPUs) at the edge, where multiple devices (e.g. in a smart home or an autonomous vehicle) can stream their data for processing at dynamic rates. While this provides enhanced potential for input batching, naive solutions can lead to subpar performance and quality of experience, especially under spiking loads. At the same time, the deployment of dynamic DNNs comprising stochastic computation graphs (e.g. early-exit (EE) models) introduces a new dimension of dynamic behaviour to such systems. In this work, we propose a novel early-exit-aware scheduling algorithm that allows sample preemption at run time, to account for the dynamicity introduced by both the arrival and the early-exiting processes. At the same time, we introduce two novel dimensions to the design space of the NPU hardware architecture, namely Fluid Batching and Stackable Processing Elements, which enable run-time adaptability to different batch sizes and significantly improve NPU utilization even at small batch sizes. Our evaluation shows that our system achieves an average improvement of 1.97x and 6.7x in average latency and tail-latency SLO satisfaction, respectively.
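The abstract does not include code; the following is a minimal, illustrative sketch of the exit-aware, preemptive batching idea it describes, under the assumption that each exit head emits class logits: after every early-exit stage, confident samples leave the batch and newly arrived requests backfill the freed slots. All names (`Request`, `fluid_batch_step`, the 0.9 threshold) are our own placeholders, not the paper's API.

```python
from collections import deque
from dataclasses import dataclass

import torch

# Hypothetical request record; the fields are placeholders for illustration.
@dataclass
class Request:
    sample_id: int
    logits: torch.Tensor = None   # exit-head logits produced at the current stage

def exit_confidence(logits: torch.Tensor) -> float:
    """Top-1 softmax confidence of the current exit head."""
    return torch.softmax(logits, dim=-1).max().item()

def emit_prediction(req: Request) -> None:
    print(f"sample {req.sample_id} exited early")

def fluid_batch_step(queue: deque, batch: list, max_batch: int,
                     threshold: float = 0.9) -> list:
    """One early-exit stage of exit-aware, preemptive batching.

    Confident samples exit early, and the freed batch slots are immediately
    backfilled from the arrival queue so the NPU batch stays as full as possible.
    """
    staying = []
    for req in batch:
        if exit_confidence(req.logits) >= threshold:
            emit_prediction(req)        # early exit: the result is returned here
        else:
            staying.append(req)         # the sample continues to the next stage
    while len(staying) < max_batch and queue:
        staying.append(queue.popleft())
    return staying
```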
Attention-based neural networks have become pervasive in many AI tasks. Despite their excellent algorithmic performance, the use of the attention mechanism and feed-forward networks (FFNs) demands excessive computational and memory resources, which often compromises their hardware performance. Although various sparse variants have been introduced, most approaches only focus on mitigating the quadratic scaling of attention at the algorithm level, without explicitly considering the efficiency of mapping their methods onto real hardware designs. Furthermore, most efforts only focus on either the attention mechanism or the FFNs without jointly optimizing both parts, causing most current designs to lack scalability when dealing with different input lengths. This paper systematically considers the sparsity patterns of different variants from a hardware perspective. At the algorithmic level, we propose FABNet, a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs. At the hardware level, a novel adaptable butterfly accelerator is proposed that can be configured at runtime via dedicated hardware control to accelerate different butterfly layers using a single unified hardware engine. On the Long Range Arena dataset, FABNet achieves the same accuracy as the vanilla Transformer while reducing the amount of computation by 10x to 66x and the number of parameters by 2x to 22x. By jointly optimizing the algorithm and hardware, our FPGA-based butterfly accelerator achieves a 14.2x to 23.2x speedup over state-of-the-art accelerators normalized to the same computational budget. Compared with optimized CPU and GPU designs on Raspberry Pi 4 and Jetson Nano, our system is up to 273.8x and 15.1x faster under the same power budget.
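To make the butterfly sparsity pattern more concrete, here is a rough sketch (not the paper's implementation) of a butterfly-factorized matrix multiply: log2(n) stages of learned 2x2 mixing blocks replace a dense n x n weight matrix, reducing the work from O(n^2) to O(n log n). The `twiddle` parameter layout is an assumption made for illustration.

```python
import math
import torch

def butterfly_multiply(x: torch.Tensor, twiddle: torch.Tensor) -> torch.Tensor:
    """Multiply x by a butterfly-factorized matrix in O(n log n) work.

    x:       (batch, n) with n a power of two.
    twiddle: (log2(n), n // 2, 2, 2) learned 2x2 mixing blocks, one per
             element pair and stage (shapes are illustrative assumptions).
    """
    batch, n = x.shape
    stages = int(math.log2(n))
    y = x
    for s in range(stages):
        stride = 1 << s
        # Pair element i with element i + stride inside blocks of size 2 * stride.
        y = y.reshape(batch, n // (2 * stride), 2, stride)
        a, b = y[:, :, 0, :], y[:, :, 1, :]                 # (batch, blocks, stride)
        w = twiddle[s].reshape(n // (2 * stride), stride, 2, 2)
        new_a = w[..., 0, 0] * a + w[..., 0, 1] * b
        new_b = w[..., 1, 0] * a + w[..., 1, 1] * b
        y = torch.stack((new_a, new_b), dim=2).reshape(batch, n)
    return y

# Example: a random butterfly layer acting on 8 features.
x = torch.randn(4, 8)
twiddle = torch.randn(3, 4, 2, 2)
print(butterfly_multiply(x, twiddle).shape)  # torch.Size([4, 8])
```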
Semantic segmentation arises as the backbone of many vision systems, spanning from self-driving cars and robot navigation to augmented reality and telecommunications. Frequently operating under stringent latency constraints within a limited resource envelope, optimizing for efficient execution becomes important. At the same time, the heterogeneous capabilities of target platforms and the varying constraints of different applications require the design and training of multiple target-specific segmentation models, leading to excessive maintenance costs. To this end, we propose a framework for converting state-of-the-art segmentation CNNs into Multi-Exit Semantic Segmentation (MESS) networks: specially trained models that employ parametrized early exits along their depth to i) dynamically save computation during inference on easier samples and ii) save training and maintenance costs by offering a customizable speed-accuracy trade-off. Designing and training such networks naively can hurt performance. We therefore propose a novel two-stage training scheme for multi-exit networks. Furthermore, the parametrization of MESS enables co-optimizing the number, placement, and architecture of the attached segmentation heads, along with the exit policy, at deployment time via exhaustive search in <1 GPUh. This allows MESS to rapidly adapt to the device capabilities and application requirements of each target use case, offering a train-once, deploy-everywhere solution. Compared to the original backbone network, MESS variants achieve latency gains of up to 2.83x at the same accuracy, or up to 5.33 pp higher accuracy for the same computational budget. Finally, MESS delivers orders-of-magnitude faster architecture selection compared to state-of-the-art techniques.
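As an illustration of how inference through such a multi-exit network might be gated, the sketch below runs the backbone stage by stage and returns the first segmentation whose mean per-pixel confidence clears a per-exit threshold, so easier samples exit early. The module and variable names are our own, and the exit policy shown is a simple confidence rule rather than the one searched for by MESS.

```python
import torch
import torch.nn.functional as F

def multi_exit_segment(stages, exit_heads, thresholds, image):
    """Confidence-gated inference through a multi-exit segmentation network.

    stages:     list of backbone sub-networks, applied sequentially.
    exit_heads: list of lightweight segmentation heads, one per stage.
    thresholds: per-exit confidence thresholds (tuned on a validation set).
    All module/variable names here are illustrative, not the paper's API.
    """
    feats = image
    for stage, head, tau in zip(stages, exit_heads, thresholds):
        feats = stage(feats)
        logits = head(feats)                               # (1, classes, H, W)
        conf = F.softmax(logits, dim=1).max(dim=1).values  # per-pixel confidence
        if conf.mean() >= tau:                             # easy sample: exit early
            return logits.argmax(dim=1)
    return logits.argmax(dim=1)                            # fall through to final exit
```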
Multimodal learning pipelines have benefited from the success of pretrained language models. However, this comes at the cost of increased model parameters. In this work, we propose Adapted Multimodal BERT (AMB), a BERT-based architecture for multimodal tasks that uses a combination of adapter modules and intermediate fusion layers. The adapter adjusts the pretrained language model for the task at hand, while the fusion layers perform task-specific, layer-wise fusion of audio-visual information with textual BERT representations. During the adaptation process the pre-trained language model parameters remain frozen, allowing for fast, parameter-efficient training. In our ablations we see that this approach leads to efficient models that can outperform their fine-tuned counterparts and are robust to input noise. Our experiments on sentiment analysis with CMU-MOSEI show that AMB outperforms the current state-of-the-art across metrics, with 3.4% relative reduction in the resulting error and 2.1% relative improvement in 7-class classification accuracy.
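For readers unfamiliar with adapters, the sketch below shows a generic bottleneck adapter attached to a frozen BERT, which captures the parameter-efficiency argument made above; it omits AMB's layer-wise audio-visual fusion and should not be read as the paper's exact architecture.

```python
import torch
import torch.nn as nn
from transformers import BertModel   # assumes HuggingFace transformers is installed

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""
    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

bert = BertModel.from_pretrained("bert-base-uncased")
for p in bert.parameters():
    p.requires_grad = False            # the pretrained language model stays frozen

# One adapter per encoder layer; in AMB, fusion layers would be trained as well.
adapters = nn.ModuleList(Adapter() for _ in bert.encoder.layer)

total = sum(p.numel() for p in bert.parameters())
trainable = sum(p.numel() for p in adapters.parameters())
print(f"trainable adapter parameters: {trainable:,} vs frozen BERT: {total:,}")
```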
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton. The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases. In the first it models the distribution of features associated with the nodes of the given graph, in the second it complements the edge features conditionally on the node features. We follow the strategy of implicit distribution modelling via a generative adversarial network (GAN) combined with a permutation-equivariant message passing architecture operating over the sets of nodes and edges. This enables generating the feature vectors of all the graph objects in one go (in two phases), as opposed to the much slower one-by-one generation of sequential models, avoids the need for expensive graph matching procedures usually required by likelihood-based generative models, and uses the network capacity efficiently by being insensitive to the particular node ordering in the graph representation. To the best of our knowledge, this is the first method that models the feature distribution along the graph skeleton, allowing for the generation of annotated graphs with user-specified structures. Our experiments demonstrate the ability of our model to learn complex structured distributions through quantitative evaluation over three annotated graph datasets.
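A minimal sketch of the two-phase generator structure described above is given below, with assumed module names: phase one produces node features for a fixed skeleton via a permutation-equivariant message-passing step, and phase two produces edge features conditioned symmetrically on the endpoint node features. The discriminators and the GAN training loop are omitted.

```python
import torch
import torch.nn as nn

class NodeGenerator(nn.Module):
    """Phase 1: sample node features for every node of a given skeleton."""
    def __init__(self, noise_dim, hidden, node_dim):
        super().__init__()
        self.msg = nn.Linear(noise_dim, hidden)
        self.upd = nn.Linear(noise_dim + hidden, node_dim)

    def forward(self, z, adj):
        # z: (N, noise_dim) per-node noise, adj: (N, N) skeleton adjacency
        m = adj @ self.msg(z) / adj.sum(-1, keepdim=True).clamp(min=1)  # mean aggregation
        return self.upd(torch.cat([z, m], dim=-1))

class EdgeGenerator(nn.Module):
    """Phase 2: edge features conditioned on the endpoint node features."""
    def __init__(self, node_dim, noise_dim, edge_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * node_dim + noise_dim, 64),
                                 nn.ReLU(), nn.Linear(64, edge_dim))

    def forward(self, h, z_e, edge_index):
        src, dst = edge_index
        # Symmetrize the endpoint representation so the output ignores edge direction.
        pair = torch.cat([h[src] + h[dst], (h[src] - h[dst]).abs(), z_e], dim=-1)
        return self.mlp(pair)

# Phase 1 then phase 2 over a random 5-node skeleton (shapes are illustrative).
N = 5
adj = (torch.rand(N, N) > 0.5).float()
edge_index = adj.nonzero().t()                       # (2, num_edges)
node_gen = NodeGenerator(noise_dim=16, hidden=32, node_dim=8)
edge_gen = EdgeGenerator(node_dim=8, noise_dim=16, edge_dim=4)
h = node_gen(torch.randn(N, 16), adj)                                # node features
e = edge_gen(h, torch.randn(edge_index.shape[1], 16), edge_index)    # edge features
```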
We extend Textual Inversion to learn pseudo-words that represent a concept at different resolutions. This allows us to generate images that use the concept with different levels of detail and also to manipulate different resolutions using language. Once learned, the user can generate images at different levels of agreement to the original concept; "A photo of $S^*(0)$" produces the exact object while the prompt "A photo of $S^*(0.8)$" only matches the rough outlines and colors. Our framework allows us to generate images that use different resolutions of an image (e.g. details, textures, styles) as separate pseudo-words that can be composed in various ways. We open-source our code at the following URL: https://github.com/giannisdaras/multires_textual_inversion
Modern Deep Learning (DL) models have grown to sizes requiring massive clusters of specialized, high-end nodes to train. Designing such clusters to maximize both performance and utilization to amortize their steep cost is a challenging task requiring careful balance of compute, memory, and network resources. Moreover, a plethora of each model's tuning knobs drastically affects the performance, with optimal values often depending on the underlying cluster's characteristics, which necessitates a complex cluster-workload co-design process. To facilitate the design space exploration of such massive DL training clusters, we introduce COMET, a holistic cluster design methodology and workflow to jointly study the impact of parallelization strategies and key cluster resource provisioning on the performance of distributed DL training. We develop a step-by-step process to establish a reusable and flexible methodology, and demonstrate its application with a case study of training a Transformer-1T model on a cluster of variable compute, memory, and network resources. Our case study demonstrates COMET's utility in identifying promising architectural optimization directions and guiding system designers in configuring key model and cluster parameters.
This paper aims to design robust Edge Intelligence using semantic communication for time-critical IoT applications. We systematically analyze the effect of image DCT coefficients on inference accuracy and propose channel-agnostic effectiveness encoding for offloading, which transmits the most meaningful task data first. This scheme makes good use of all available communication resources and strikes a balance between transmission latency and inference accuracy. Then, we design an effectiveness decoding by implementing a novel image augmentation process for convolutional neural network (CNN) training, through which an original CNN model is transformed into a Robust CNN model. We use the proposed training method to generate Robust MobileNet-v2 and Robust ResNet-50. The proposed Edge Intelligence framework consists of the proposed effectiveness encoding and effectiveness decoding. The experimental results show that effectiveness decoding using the Robust CNN models performs consistently better under various image distortions caused by channel errors or limited communication resources. The proposed Edge Intelligence framework using semantic communication significantly outperforms the conventional approach under latency and data rate constraints, in particular under ultra-stringent deadlines and low data rates.
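The sketch below illustrates the general idea of effectiveness encoding under a simplifying assumption: coefficient importance is approximated by a low-frequency-first ordering of the image's DCT, whereas the paper ranks coefficients by their measured effect on inference accuracy. Function names and the kept fraction are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def effectiveness_encode(image: np.ndarray, keep_fraction: float = 0.1):
    """Order DCT coefficients by (assumed) importance and keep the top fraction.

    Here importance is approximated by a low-frequency-first ordering; the
    paper instead derives the ordering from the coefficients' effect on
    inference accuracy.
    """
    coeffs = dctn(image, norm="ortho")
    h, w = coeffs.shape
    order = np.argsort([i + j for i in range(h) for j in range(w)])  # low freq first
    flat = coeffs.flatten()
    n_keep = int(keep_fraction * flat.size)
    kept_idx = order[:n_keep]                  # these would be transmitted first
    return kept_idx, flat[kept_idx], (h, w)

def effectiveness_decode(kept_idx, kept_vals, shape):
    """Reconstruct the image with untransmitted coefficients set to zero."""
    flat = np.zeros(shape[0] * shape[1])
    flat[kept_idx] = kept_vals
    return idctn(flat.reshape(shape), norm="ortho")

img = np.random.rand(224, 224)                 # stand-in for a grayscale input
idx, vals, shape = effectiveness_encode(img, keep_fraction=0.05)
approx = effectiveness_decode(idx, vals, shape)
```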
For time-critical IoT applications using deep learning, inference acceleration through distributed computing is a promising approach to meet a stringent deadline. In this paper, we implement a working prototype of a new distributed inference acceleration method, HALP, using three Raspberry Pi 4 devices. HALP accelerates inference by designing a seamless collaboration among edge devices (EDs) in Edge Computing. We maximize the parallelization between communication and computation among the collaborative EDs by optimizing the task partitioning ratio based on segment-based partitioning. Experimental results show that distributed inference with HALP achieves 1.7x inference acceleration for VGG-16. Then, we combine distributed inference with conventional neural network model compression by setting up different shrinking hyperparameters for MobileNet-V1. In this way, we can further accelerate inference, but at the cost of inference accuracy loss. To strike a balance between latency and accuracy, we propose dynamic model selection to select the model that provides the highest accuracy within the latency constraint. It is shown that model selection with distributed inference HALP can significantly improve service reliability compared to conventional stand-alone computation.
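Dynamic model selection as described above reduces to a simple rule once every candidate has been profiled; the sketch below picks the most accurate model whose expected latency fits the deadline. The candidate names and numbers are placeholders, not measurements from the paper.

```python
# Profiled candidates: (name, expected latency in ms, top-1 accuracy).
# The numbers below are placeholders, not measurements from the paper.
candidates = [
    ("mobilenet_v1_0.25", 18.0, 0.50),
    ("mobilenet_v1_0.50", 31.0, 0.64),
    ("mobilenet_v1_1.00_halp", 55.0, 0.71),
]

def select_model(candidates, deadline_ms: float):
    """Return the most accurate model whose expected latency meets the deadline."""
    feasible = [c for c in candidates if c[1] <= deadline_ms]
    if not feasible:
        return min(candidates, key=lambda c: c[1])   # fall back to the fastest model
    return max(feasible, key=lambda c: c[2])

print(select_model(candidates, deadline_ms=40.0))    # -> mobilenet_v1_0.50
```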
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps as well as a lack of sufficient data for cross-modality model training. To overcome this problem, we propose a novel method for paired NIR-VIS facial image generation. Specifically, we reconstruct 3D face shape and reflectance from a large 2D facial dataset and introduce a novel method of transforming the VIS reflectance to NIR reflectance. We then use a physically-based renderer to generate a vast, high-resolution and photorealistic dataset consisting of various poses and identities in the NIR and VIS spectra. Moreover, to facilitate identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss, which not only reduces the modality gap between NIR and VIS images at the domain level but also encourages the network to focus on identity features instead of facial details, such as poses and accessories. Extensive experiments conducted on four challenging NIR-VIS face recognition benchmarks demonstrate that the proposed method can achieve performance comparable to the state-of-the-art (SOTA) methods without requiring any existing NIR-VIS face recognition datasets. With slight fine-tuning on the target NIR-VIS face recognition datasets, our method can significantly surpass the SOTA performance. Code and pretrained models are released under the insightface repository (https://github.com/deepinsight/insightface/tree/master/recognition).
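To give a feel for the MMD part of the proposed loss, below is a minimal RBF-kernel MMD between batches of NIR and VIS embeddings; the paper's ID-MMD additionally operates on identity-level rather than raw sample-level statistics, a step omitted here for brevity.

```python
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared maximum mean discrepancy with an RBF kernel.

    x, y: (n, d) and (m, d) batches of identity embeddings from the NIR and
    VIS branches. ID-MMD would first aggregate embeddings per identity; that
    aggregation is not shown in this sketch.
    """
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))

    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

nir = torch.randn(32, 512)   # placeholder NIR embeddings
vis = torch.randn(32, 512)   # placeholder VIS embeddings
loss = rbf_mmd(nir, vis)
```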