Dataset Distillation (DD), a newly emerging field, aims to generate much smaller yet high-quality synthetic datasets from large ones. Existing DD methods based on gradient matching achieve leading performance; however, they are extremely computationally intensive, as they continuously optimize a dataset across thousands of randomly initialized models. In this paper, we hypothesize that training the synthetic data with diverse models leads to better generalization performance. We therefore propose two \textbf{model augmentation} techniques, \ie, using \textbf{early-stage models} and \textbf{weight perturbation}, to learn an informative synthetic set at significantly reduced training cost. Extensive experiments demonstrate that our method achieves up to 20$\times$ speedup with performance on par with state-of-the-art baseline methods.
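As a rough illustration of the two augmentations, the following sketch pairs a standard gradient-matching loss with models that are (a) trained for only a few epochs (early-stage) and (b) perturbed with Gaussian noise on their weights; the noise scale, the cosine-distance loss form, and all tensor names are illustrative assumptions rather than the paper's exact formulation.

```python
import copy
import torch
import torch.nn.functional as F

def perturbed_early_model(early_model, sigma=0.01):
    """Model augmentation: start from an early-stage checkpoint (a model
    trained only a few epochs) and add small Gaussian weight perturbations."""
    model = copy.deepcopy(early_model)
    with torch.no_grad():
        for p in model.parameters():
            p.add_(sigma * torch.randn_like(p))
    return model

def gradient_match_loss(model, syn_x, syn_y, real_x, real_y):
    """Match the gradients the model computes on the synthetic batch to
    those it computes on a real batch (cosine-distance form, one common choice)."""
    g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y),
                                model.parameters(), create_graph=True)
    g_real = torch.autograd.grad(F.cross_entropy(model(real_x), real_y),
                                 model.parameters())
    return sum(1 - F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
               for a, b in zip(g_syn, g_real))
```

Each optimization step would draw a fresh perturbed early-stage model, so the synthetic set is fitted against a diverse model population instead of thousands of models trained from scratch.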
During the deployment of deep neural networks (DNNs) on edge devices, many research efforts are devoted to coping with limited hardware resources, but little attention is paid to the influence of dynamic power management. Since edge devices typically run on a limited battery energy budget (rather than the nearly unlimited energy supply of servers or workstations), their dynamic power management often changes the execution frequency, as in the widely used dynamic voltage and frequency scaling (DVFS) technique. This leads to highly unstable inference speed, especially for computation-intensive DNN models, which can harm user experience and waste hardware resources. We first identify this problem and then propose All-in-One, a highly representative pruning framework designed to work with DVFS-based dynamic power management. The framework uses only one set of model weights and soft masks (together with other auxiliary parameters of negligible storage) to represent multiple models of various pruning ratios. By re-configuring the model to the pruning ratio corresponding to a specific execution frequency (and voltage), we achieve stable inference speed, i.e., we keep the difference in speed under various execution frequencies as small as possible. Our experiments demonstrate that our method not only achieves high accuracy for multiple models of different pruning ratios, but also reduces the variance of their inference latency across frequencies, with the memory consumption of only one model and one soft mask.
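A minimal sketch of the reconfiguration step, assuming the shared soft mask is binarized by keeping the top-scoring weights for each target ratio; the frequency-to-ratio table is a hypothetical device calibration, not part of the paper.

```python
import torch

# Hypothetical calibration: DVFS frequency (MHz) -> pruning ratio.
FREQ_TO_RATIO = {1800: 0.0, 1200: 0.5, 600: 0.75}

def reconfigure(weight, soft_mask, freq_mhz):
    """Derive a binary mask from the single shared soft mask: keep the
    (1 - ratio) fraction of weights with the largest mask scores, so one
    weight set serves every frequency level."""
    ratio = FREQ_TO_RATIO[freq_mhz]
    k = int(soft_mask.numel() * (1.0 - ratio))
    thresh = torch.topk(soft_mask.flatten(), k).values.min()
    return weight * (soft_mask >= thresh).float()
```

At run time, a frequency change only swaps the threshold, so switching between pruning ratios costs no extra model storage.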
Over-parameterization of deep neural networks (DNNs) has shown high prediction accuracy for many applications. Although effective, the large number of parameters hinders deployment on resource-limited devices and carries an outsized environmental impact. Sparse training (using a fixed number of nonzero weights in each iteration) can significantly mitigate training costs by reducing model size. However, existing sparse training methods mainly use random- or greedy-based drop-and-grow strategies, which lead to local minima and low accuracy. In this work, toward explainable sparse training, we propose important-weight Exploitation and coverage Exploration to characterize Dynamic Sparse Training (DST-EE), and provide a quantitative analysis of these two metrics. We further design an acquisition function, provide theoretical guarantees for the proposed method, and clarify its convergence property. Experimental results show that sparse models (up to 98\% sparsity) obtained by our method outperform SOTA sparse training methods on a wide variety of deep learning tasks. On VGG-19 / CIFAR-100, ResNet-50 / CIFAR-10, and ResNet-50 / CIFAR-100, our method attains even higher accuracy than dense models. On ResNet-50 / ImageNet, it achieves up to 8.2\% accuracy improvement over SOTA sparse training methods.
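The sketch below shows one plausible form of the exploitation/exploration trade-off in a drop-and-grow step: exploitation scores dormant weights by gradient magnitude, and a UCB-style bonus rewards rarely grown weights. The concrete acquisition function here is an assumption for illustration, not the paper's exact definition.

```python
import torch

@torch.no_grad()
def drop_and_grow(weight, grad, visit_count, step, c=1.0, frac=0.1):
    """One sparse-training update (sketch): drop the weakest active weights,
    then grow dormant ones by an exploitation + exploration score."""
    active = weight != 0
    dormant = ~active
    n_move = int(frac * active.sum())

    # Drop: zero out the n_move active weights with the smallest magnitude.
    mag = torch.where(active, weight.abs(), torch.full_like(weight, float("inf")))
    weight.view(-1)[torch.topk(mag.view(-1), n_move, largest=False).indices] = 0.0

    # Grow: exploitation (|grad|) plus a UCB-style exploration bonus that
    # favors weights grown rarely so far (coverage exploration).
    bonus = c * torch.sqrt(torch.log(torch.tensor(float(step))) / (visit_count + 1))
    score = torch.where(dormant, grad.abs() + bonus,
                        torch.full_like(grad, -float("inf")))
    grow = torch.topk(score.view(-1), n_move).indices
    weight.view(-1)[grow] = 1e-3  # re-initialize grown weights near zero
    visit_count.view(-1)[grow] += 1
```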
Traffic state prediction in a transportation network is paramount for effective traffic operations and management, as well as for informed user- and system-level decision-making. However, long-term traffic prediction (beyond 30 minutes into the future) remains challenging in current research. In this work, we integrate the spatio-temporal dependencies of the transportation network, derived from network modeling, with the graph convolutional network (GCN) and the graph attention network (GAT). To further tackle the substantial computation and memory costs incurred by the large model size (i.e., number of weights) that results from multiple cascaded layers, we propose sparse training to mitigate the training cost while preserving prediction accuracy: a process of training with a fixed number of nonzero weights in each layer in each iteration. We consider the problem of long-term traffic speed forecasting for real, large-scale transportation network data from the California Department of Transportation (Caltrans) Performance Measurement System (PeMS). Experimental results show that the proposed GCN-STGT and GAT-STGT models achieve low prediction errors on short-, mid- and long-term horizons of 15, 30 and 45 minutes, respectively. Using our sparse training, we can train from scratch at high sparsity (e.g., up to 90%), equivalent to a 10× reduction in floating-point operations (FLOPs), using the same number of epochs as dense training, and arrive at a model with very small accuracy loss compared with the original dense training.
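To make the per-layer constraint concrete, here is a sketch of a graph-convolution layer whose weight matrix carries a fixed-size binary mask, so every iteration touches the same number of nonzeros; the dimensions, initialization, and random initial support are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SparseGCNLayer(nn.Module):
    """Graph convolution with a fixed number of nonzero weights:
    out = A_hat @ X @ (W * mask), where the mask support size is constant."""
    def __init__(self, in_dim, out_dim, sparsity=0.9):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)
        n_keep = int(self.weight.numel() * (1 - sparsity))
        mask = torch.zeros(self.weight.numel())
        mask[torch.randperm(self.weight.numel())[:n_keep]] = 1.0
        # A drop-and-grow schedule would move this support between
        # iterations while keeping n_keep constant.
        self.register_buffer("mask", mask.view_as(self.weight))

    def forward(self, a_hat, x):
        # a_hat: normalized adjacency (N, N); x: node features (N, in_dim)
        return a_hat @ x @ (self.weight * self.mask)
```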
The rapid growth and deployment of deep learning (DL) has witnessed emerging privacy and security concerns. To mitigate these issues, secure multi-party computation (MPC) has been discussed as a way to enable privacy-preserving DL computation. In practice, MPC typically comes with very high computation and communication overhead, which can prohibit its popularity in large-scale systems. Two orthogonal research trends have attracted enormous interest in addressing the energy efficiency of secure deep learning: overhead reduction of MPC comparison protocols, and hardware acceleration. However, existing works either achieve a low reduction ratio, and thus suffer from high latency due to limited computation and communication savings, or are power-hungry, since they mainly target general computing platforms such as CPUs and GPUs. In this work, as a first attempt, we develop PolyMPCNet, a systematic framework for the joint overhead reduction of MPC comparison protocols and hardware acceleration, which integrates the hardware latency of cryptographic building blocks into the DNN loss function to achieve high energy efficiency with accuracy and security guarantees. Our key design principle is, instead of checking model sensitivity after a DNN is well trained (i.e., removing or dropping certain non-essential operators), to enforce the desired assumptions exactly in DNN design and training, so that the trained DNN is both hardware-efficient and secure while escaping local minima and saddle points and maintaining high accuracy. More specifically, we propose a crypto-hardware-friendly trainable polynomial activation function, initialized directly through a polynomial activation initialization method, to replace the expensive 2P-ReLU operator. We also develop a cryptographic hardware scheduler and the corresponding performance model for field-programmable gate array (FPGA) platforms.
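A minimal sketch of the trainable polynomial activation idea: a second-degree polynomial whose coefficients are learned with the network, avoiding the comparison protocol that makes ReLU expensive under MPC. The degree and the initial coefficients below are assumptions, not the paper's initialization method.

```python
import torch
import torch.nn as nn

class PolyAct(nn.Module):
    """Trainable polynomial activation a*x^2 + b*x + c: polynomial-only
    arithmetic is cheap for both MPC protocols and FPGA datapaths."""
    def __init__(self, a=0.125, b=0.5, c=0.25):
        super().__init__()
        # Starting point loosely approximates ReLU near the origin; all
        # three coefficients are then trained jointly with the weights.
        self.a = nn.Parameter(torch.tensor(a))
        self.b = nn.Parameter(torch.tensor(b))
        self.c = nn.Parameter(torch.tensor(c))

    def forward(self, x):
        return self.a * x * x + self.b * x + self.c

# Usage: drop-in replacement at a position that would need 2P-ReLU.
layer = nn.Sequential(nn.Linear(128, 128), PolyAct())
```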
Sharing information between connected and autonomous vehicles (CAVs) fundamentally improves the performance of collaborative object detection for self-driving. However, due to practical challenges, CAVs still suffer from uncertainty in object detection, which affects downstream modules in autonomous driving such as planning and control. Uncertainty quantification is therefore crucial for safety-critical systems such as CAVs. Ours is the first work to estimate the uncertainty of collaborative object detection. We propose a novel uncertainty quantification method, called Double-M Quantification, which tailors a Moving Block Bootstrap (MBB) algorithm by directly modeling the multivariate Gaussian distribution of each corner of the bounding box. Based on an offline Double-M training process, our method captures both epistemic and aleatoric uncertainty in a single inference pass, and it can be used with different collaborative object detectors. Through experiments on a comprehensive collaborative perception dataset, we show that our Double-M method improves uncertainty scores by more than 4x and accuracy by 3% compared with state-of-the-art uncertainty quantification methods. Our code is public at https://coperception.github.io/double-m-quantification.
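The resampling core of a moving block bootstrap, sketched for a sequence of per-frame corner regression residuals; the block length, replicate count, and array shapes are illustrative, and the full Double-M procedure around it is not reproduced here.

```python
import numpy as np

def mbb_corner_covariance(residuals, block_len=10, n_boot=100, seed=0):
    """Moving Block Bootstrap (sketch): resample overlapping blocks of corner
    residuals (shape (T, 2) for one box corner) and average the covariance
    across replicates, giving the corner's multivariate Gaussian spread."""
    rng = np.random.default_rng(seed)
    T = len(residuals)
    n_blocks = int(np.ceil(T / block_len))
    covs = []
    for _ in range(n_boot):
        # Draw random block start positions, stitch blocks, trim to length T.
        starts = rng.integers(0, T - block_len + 1, size=n_blocks)
        sample = np.concatenate([residuals[s:s + block_len] for s in starts])[:T]
        covs.append(np.cov(sample, rowvar=False))
    return np.mean(covs, axis=0)
```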
As real-world graphs grow larger, GNN models with billions of parameters are being deployed. The high parameter count in such models makes training and inference on graphs expensive and challenging. To reduce the computation and memory costs of GNNs, optimization methods such as pruning redundant nodes and edges in the input graph are commonly adopted. However, model compression that directly targets sparsifying the model layers has been mostly limited to traditional deep neural networks (DNNs) for tasks such as image classification and object detection. In this paper, we apply two state-of-the-art model compression methods to the weight layers of GNNs: (1) train-and-prune and (2) sparse training. We evaluate and compare the efficiency of both methods in terms of accuracy, training sparsity, and training FLOPs on real-world graphs. Our experimental results show that on the IA-Email, Wiki-Talk, and StackOverflow datasets for link prediction, sparse training achieves accuracy comparable to the train-and-prune method at lower training FLOPs. On the Brain dataset for node classification, sparse training uses fewer FLOPs (less than 1/7 of those of train-and-prune) and preserves much better accuracy under extreme model sparsity.
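For contrast with the sparse-training sketch above, here is the train-and-prune baseline applied to one GNN weight matrix: the model is trained dense, then the smallest-magnitude weights are zeroed and the mask is kept for fine-tuning. The target sparsity is illustrative.

```python
import torch

def magnitude_prune(weight, sparsity=0.9):
    """Train-and-prune (sketch): after dense training, remove the
    smallest-magnitude weights to hit the target sparsity."""
    k = max(1, int(weight.numel() * sparsity))
    thresh = torch.kthvalue(weight.abs().flatten(), k).values
    mask = (weight.abs() > thresh).float()
    return weight * mask, mask  # reuse the mask during fine-tuning
```

Sparse training never instantiates the dense model at all, which is where its training-FLOPs advantage in the comparison comes from.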
Spiking neural networks (SNNs) have attracted much attention due to their high energy efficiency and recent advances in classification performance. However, unlike traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remain relatively underdeveloped. In this work, we advance the field of adversarial machine learning through experimentation and analysis of three important SNN security attributes. First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique. Second, we analyze the transferability of adversarial examples generated by SNNs and other state-of-the-art architectures, such as Vision Transformers and Big Transfer CNNs. We demonstrate that SNNs are not frequently deceived by adversarial examples generated by Vision Transformers and certain types of CNNs. Finally, we develop a novel white-box attack that generates adversarial examples capable of fooling both SNN models and non-SNN models simultaneously. Our experiments and analyses are broad and rigorous, covering two datasets (CIFAR-10 and CIFAR-100), five different white-box attacks, and twelve different classifier models.
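The surrogate-gradient dependence is easiest to see in code: the forward pass is a hard threshold, and the backward pass substitutes a smooth surrogate, which is exactly the function a white-box attacker must differentiate through. The fast-sigmoid surrogate and slope below are one common choice, not the paper's only setting.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient; swapping the
    surrogate changes the gradients an attacker gets, and hence attack success."""
    @staticmethod
    def forward(ctx, v, slope=25.0):
        ctx.save_for_backward(v)
        ctx.slope = slope
        return (v > 0).float()  # spike when the membrane potential crosses 0

    @staticmethod
    def backward(ctx, grad_out):
        v, = ctx.saved_tensors
        # Replace the non-differentiable d(spike)/dv with 1/(1 + slope*|v|)^2.
        return grad_out / (1.0 + ctx.slope * v.abs()) ** 2, None
```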
Transformers have been considered among the most important deep learning models since 2018, in part because they have established state-of-the-art (SOTA) records and have the potential to replace existing deep neural networks (DNNs). Despite these remarkable triumphs, the prolonged turnaround time of Transformer models is a widely recognized roadblock. The diversity of sequence lengths imposes additional computation overhead, since inputs must be zero-padded to the maximum sentence length in the batch to accommodate parallel computing platforms. This paper targets field-programmable gate arrays (FPGAs) and proposes a coherent sequence-length-adaptive algorithm-hardware co-design for Transformer acceleration. In particular, we develop a hardware-friendly sparse attention operator and a length-aware hardware resource scheduling algorithm. The proposed sparse attention operator brings the complexity of attention-based models down to linear and alleviates off-chip memory traffic. The proposed length-aware hardware resource scheduling algorithm dynamically allocates hardware resources to fill pipeline slots and eliminates bubbles in NLP tasks. Experiments show that, compared with CPU and GPU implementations, our design incurs very small accuracy loss and achieves 80.2$\times$ and 2.6$\times$ speedups, respectively, and that it is 4$\times$ more energy-efficient than a state-of-the-art GPU accelerator optimized via cuBLAS GEMM.
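As a stand-in for the hardware operator, here is a sketch of attention restricted to a fixed local window, one way the quadratic score matrix collapses to linear cost; the window size and the per-token loop (which the FPGA pipeline would replace) are illustrative, not the paper's operator.

```python
import torch
import torch.nn.functional as F

def windowed_attention(q, k, v, window=8):
    """Sparse attention (sketch): each query attends only to keys within a
    local window, so compute and memory grow linearly in sequence length."""
    T, d = q.shape
    out = torch.zeros_like(v)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        scores = (q[t] @ k[lo:hi].T) / d ** 0.5
        out[t] = F.softmax(scores, dim=-1) @ v[lo:hi]
    return out
```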
IoT devices are increasingly being implemented with neural network models to enable smart applications. Energy harvesting (EH) technology, which collects energy from the ambient environment, is a promising alternative to batteries for powering these devices, thanks to lower maintenance costs and the wide availability of energy sources. However, the power provided by an energy harvester is low and intrinsically unstable, since it varies with the ambient environment. This paper proposes EVE, an automated machine learning (AutoML) co-exploration framework that searches for desired multi-models with shared weights for energy-harvesting IoT devices. The shared models significantly reduce the memory footprint while offering different levels of model sparsity, latency, and accuracy to adapt to environmental changes. An efficient on-device implementation architecture is further developed to execute each model efficiently. A run-time model extraction algorithm is proposed that retrieves an individual model with negligible overhead when a specific model mode is triggered. Experimental results show that the neural network models generated by EVE are on average 2.5$\times$ faster than baseline models without pruning and weight sharing.
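A minimal sketch of the run-time extraction step, assuming the shared weights are stored once and each power mode keeps only a binary mask; the mode names and mask layout are hypothetical.

```python
import torch

def extract_model(shared_state, mode_masks, mode):
    """Run-time model extraction (sketch): slice one sub-model out of the
    shared weights by applying the mask stored for the triggered mode."""
    masks = mode_masks[mode]  # e.g. mode in {"high_power", "low_power"}
    return {name: w * masks[name] if name in masks else w
            for name, w in shared_state.items()}

# When harvested power drops, switch to the sparser sub-model:
# model.load_state_dict(extract_model(shared_state, mode_masks, "low_power"))
```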