We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. When executing SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, we can reach 60% sparsity with negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches.
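To illustrate the 2:4 semi-structured pattern mentioned above, here is a minimal PyTorch sketch that imposes 2:4 sparsity by magnitude. Note this shows only the mask format: SparseGPT itself selects and compensates weights using approximate second-order information, not raw magnitude, and the function name here is illustrative.

```python
import torch

def mask_2_4(weight: torch.Tensor) -> torch.Tensor:
    """Keep the 2 largest-magnitude weights in every group of 4 along the input dim."""
    out_features, in_features = weight.shape
    assert in_features % 4 == 0
    groups = weight.abs().reshape(out_features, in_features // 4, 4)
    # Indices of the 2 smallest entries in each group of 4 -> pruned.
    pruned = groups.topk(2, dim=-1, largest=False).indices
    mask = torch.ones_like(groups, dtype=torch.bool)
    mask.scatter_(-1, pruned, False)
    return mask.reshape(out_features, in_features)

w = torch.randn(8, 16)
w_sparse = w * mask_2_4(w)  # exactly 2 nonzeros in every group of 4
```

This pattern is what NVIDIA sparse tensor cores accelerate: any group of 4 consecutive weights contains at most 2 nonzeros.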
We consider the problem of model compression for deep neural networks (DNNs) in the challenging post-training setting, in which we are given an accurate trained model and must compress it without any retraining, based only on a small amount of calibration input data. This problem has become popular given the emerging software and hardware support for executing models compressed via pruning and/or quantization with speedup, and well-performing solutions have been proposed independently for each compression approach. In this paper, we introduce a new compression framework that covers both weight pruning and quantization in a unified setting, is time- and space-efficient, and considerably improves upon the practical performance of existing post-training methods. At the technical level, our approach is based on the first exact realization of the classical Optimal Brain Surgeon (OBS) framework [LeCun, Denker, and Solla, 1990] at the scale of modern DNNs, which we further extend to cover weight quantization. This is enabled by a series of algorithmic developments that may be of independent interest. From the practical perspective, our experimental results show that the framework significantly improves upon the compression-accuracy trade-offs of existing post-training methods, and that it even enables the accurate joint application of pruning and quantization in a post-training setting.
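For reference, the OBS step at the heart of this framework has a compact closed form: remove the weight $w_q$ with the smallest saliency $w_q^2 / [H^{-1}]_{qq}$ and update the remaining weights by $\delta w = -(w_q / [H^{-1}]_{qq})\, H^{-1} e_q$. Below is a toy NumPy sketch of a single step under these standard formulas; the paper's contribution is an exact, efficient realization at DNN scale, which this dense toy version does not attempt.

```python
import numpy as np

def obs_prune_step(w: np.ndarray, H_inv: np.ndarray):
    """One Optimal Brain Surgeon step: remove the weight with the smallest
    saliency and compensate the remaining weights in closed form.

    w     : current weight vector, shape (d,)
    H_inv : inverse Hessian of the local quadratic loss, shape (d, d)
    """
    saliency = w ** 2 / np.diag(H_inv)   # approximate loss increase per weight
    q = int(np.argmin(saliency))         # cheapest weight to remove
    # Compensation: delta_w = -(w_q / [H^-1]_qq) * H^-1 e_q
    w = w - (w[q] / H_inv[q, q]) * H_inv[:, q]
    w[q] = 0.0                           # already ~0 after the update; pin exactly
    return w, q
```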
The recent focus on the efficiency of deep neural networks (DNNs) has led to significant work on model compression approaches, among which weight pruning is one of the most popular. At the same time, there is rapidly growing computational support for efficiently executing the unstructured, sparse models obtained via pruning. Yet, most existing pruning methods minimize only the number of remaining weights, i.e., the size of the model, rather than optimizing for inference time. We address this gap by introducing SPDY, a new compression method that automatically determines layer-wise sparsity targets achieving a desired inference speedup on a given system, while minimizing accuracy loss. SPDY consists of two new techniques: the first is an efficient dynamic programming algorithm for solving the speedup-constrained layer-wise compression problem given a set of layer sensitivity scores; the second is a local search procedure for determining accurate layer sensitivity scores. Experiments across popular vision and language models show that SPDY guarantees speedups while recovering higher accuracy relative to existing strategies, both in one-shot and gradual pruning scenarios, and that it is compatible with most existing pruning approaches. We also extend our approach to the recently proposed task of pruning with very little data, where we achieve the best known accuracy recovery when pruning to the GPU-supported 2:4 sparsity pattern.
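The speedup-constrained selection problem that the first technique solves can be sketched as a knapsack-style dynamic program. The version below uses illustrative names (`choose_sparsity_profile`, `errors`, `times`) and a discretized time budget; it is a sketch of the problem structure, not SPDY's actual implementation.

```python
import numpy as np

def choose_sparsity_profile(errors, times, time_budget, buckets=2000):
    """Pick one sparsity level per layer so that the summed (discretized)
    runtime fits the budget while the summed sensitivity scores are minimized.

    errors[l][s] : sensitivity score of layer l at its s-th sparsity choice
    times[l][s]  : measured runtime of layer l at its s-th sparsity choice
    """
    L, INF = len(errors), float("inf")
    cost = [[int(np.ceil(t * buckets / time_budget)) for t in times[l]]
            for l in range(L)]
    best = [0.0] + [INF] * buckets                # best[b]: min error at exactly b units
    choice = [[-1] * (buckets + 1) for _ in range(L)]
    for l in range(L):
        new = [INF] * (buckets + 1)
        for s, e in enumerate(errors[l]):
            for b in range(cost[l][s], buckets + 1):
                if best[b - cost[l][s]] + e < new[b]:
                    new[b] = best[b - cost[l][s]] + e
                    choice[l][b] = s
        best = new
    assert min(best) != INF, "time budget infeasible"
    b = int(np.argmin(best))                      # budget bucket with least error
    profile = []
    for l in reversed(range(L)):                  # backtrack the chosen levels
        s = choice[l][b]
        profile.append(s)
        b -= cost[l][s]
    return profile[::-1], min(best)
```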
Efficiently approximating local curvature information of the loss function is a key tool for the optimization and compression of deep neural networks. Yet, most existing methods for approximating second-order information have high computational or storage costs, which limits their practicality. In this work, we investigate matrix-free, linear-time approaches for estimating inverse-Hessian vector products (IHVPs) for the case when the Hessian can be approximated as a sum of rank-one matrices, as in the classical approximation of the Hessian by the empirical Fisher matrix. We propose two new algorithms as part of a framework called M-FAC: the first algorithm is tailored towards network compression and can compute IHVPs for dimension $d$, given the Hessian as a sum of $m$ rank-one matrices, using $O(dm^2)$ precomputation, $O(dm)$ cost per IHVP, and query cost $O(m)$ for any single element of the inverse Hessian. The second algorithm targets an optimization setting, where we wish to compute the product between the inverse Hessian, estimated over a sliding window of optimization steps, and a given gradient direction, as required for preconditioned SGD. We give an algorithm with cost $O(dm + m^2)$ for computing the IHVP and $O(dm + m^3)$ for adding or removing any gradient from the sliding window. Both algorithms yield state-of-the-art results for network pruning and optimization, with lower computational overhead relative to existing second-order methods. Implementations are available at [9] and [17].
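When the Hessian is approximated by a damped empirical Fisher, $H = \lambda I + \frac{1}{m}\sum_i g_i g_i^\top$, an IHVP can also be obtained via the Woodbury identity. The sketch below shows this baseline route for intuition; M-FAC's own recursive algorithms achieve the stated complexities differently.

```python
import numpy as np

def ihvp_empirical_fisher(G: np.ndarray, x: np.ndarray, lam: float) -> np.ndarray:
    """Compute (lam*I + (1/m) * G.T @ G)^{-1} @ x via the Woodbury identity.

    G   : (m, d) matrix of m stacked gradients (empirical Fisher factors)
    x   : (d,) vector to precondition
    lam : damping term

    The (m, m) system K can be formed and factorized once in O(d m^2) and
    reused, leaving O(d m) work per IHVP query.
    """
    m = G.shape[0]
    K = m * lam * np.eye(m) + G @ G.T   # (m, m), precomputable
    y = np.linalg.solve(K, G @ x)
    return (x - G.T @ y) / lam
```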
Neuromorphic systems require user-friendly software to support the design and optimization of experiments. In this work, we address this need by presenting our development of a machine learning-based modeling framework for the BrainScaleS-2 neuromorphic system. This work represents an improvement over previous efforts, which either focused on the matrix-multiplication mode of BrainScaleS-2 or lacked full automation. Our framework, called hxtorch.snn, enables hardware-in-the-loop training of spiking neural networks within PyTorch, including support for automatic differentiation in a fully automated hardware-experiment workflow. In addition, hxtorch.snn facilitates seamless transitions between emulating on hardware and simulating in software. We demonstrate the capabilities of hxtorch.snn on a classification task using the Yin-Yang dataset, employing a gradient-based approach with surrogate gradients and densely sampled membrane observations from the BrainScaleS-2 hardware system.
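The surrogate-gradient approach referred to above is commonly implemented as a custom autograd function. The sketch below shows a SuperSpike-style fast-sigmoid surrogate in plain PyTorch, purely to illustrate the mechanism; it is not the hxtorch.snn API, and the class name and `beta` parameter are illustrative.

```python
import torch

class SuperSpike(torch.autograd.Function):
    """Heaviside spike nonlinearity with a fast-sigmoid surrogate gradient."""

    @staticmethod
    def forward(ctx, v, beta):
        ctx.save_for_backward(v)
        ctx.beta = beta
        return (v > 0.0).float()  # spike where membrane exceeds threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Smooth stand-in for the Heaviside's zero-almost-everywhere derivative.
        surrogate = 1.0 / (ctx.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate, None  # no gradient w.r.t. beta

v = torch.randn(32, requires_grad=True)  # membrane potentials minus threshold
spikes = SuperSpike.apply(v, 10.0)
```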
A digital twin is defined as a virtual representation of a physical asset, enabled through data and simulators, for real-time prediction, optimization, monitoring, control, and improved decision-making. Unfortunately, the term remains vague and says little about a twin's capability. Recently, the concept of capability levels has been introduced to address this issue. It states that, based on its capability, a digital twin can be categorized on a scale from zero to five: standalone, descriptive, diagnostic, predictive, prescriptive, and autonomous, respectively. The current work introduces this concept in the context of the built environment and demonstrates it using a modern house as a use case. The house is equipped with an array of sensors that collect time-series data regarding its internal state. Together with physics-based and data-driven models, these data are used to develop digital twins at different capability levels, demonstrated in virtual reality. In addition to presenting a blueprint for developing digital twins, the work also provides future research directions to enhance the technology.
The concept of walkable urban development has gained increased attention due to its public health, economic, and environmental sustainability benefits. Unfortunately, land zoning and historic under-investment have resulted in spatial inequality in walkability and social inequality among residents. We tackle the problem of walkability optimization through the lens of combinatorial optimization. The task is to select locations at which additional amenities (e.g., grocery stores, schools, restaurants) can be allocated to improve residents' access via walking, while taking into account existing amenities and providing multiple options (e.g., for restaurants). To this end, we derive Mixed-Integer Linear Programming (MILP) and Constraint Programming (CP) models. Moreover, we show that the problem's objective function is submodular in special cases, which motivates an efficient greedy heuristic. We conduct a case study on 31 underserved neighbourhoods in the City of Toronto, Canada. MILP finds the best solutions in most scenarios but does not scale well with network size; the greedy algorithm scales well and finds near-optimal solutions. Our empirical evaluation shows that neighbourhoods with low walkability have great potential for transformation into pedestrian-friendly neighbourhoods through the strategic placement of new amenities. Allocating 3 additional grocery stores, schools, and restaurants can improve the "WalkScore" by more than 50 points (on a scale of 100) for 4 neighbourhoods, and can reduce walking distances to all amenity types to 10 minutes for 75% of all residential locations. Our code and paper appendix are available at https://github.com/khalil-research/walkability.
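The greedy heuristic motivated by submodularity can be sketched as follows. Names (`greedy_allocate`, `dist`, `walk_cost`) and the distance cap are illustrative; the paper's MILP/CP formulations encode the full objective, including multiple amenity types and options.

```python
def greedy_allocate(candidates, residents, dist, k, walk_cost=lambda d: min(d, 2400)):
    """Greedy amenity placement: repeatedly open the candidate location that
    most reduces total walking cost, summed over residents.

    dist[r][c]   : walking distance from residential location r to candidate c
    walk_cost(d) : cost of a walk of distance d (capped here, for illustration)
    """
    opened = set()
    nearest = {r: float("inf") for r in residents}  # distance to closest open amenity
    for _ in range(k):
        def gain(c):  # total cost reduction if candidate c were opened
            return sum(walk_cost(nearest[r]) - walk_cost(min(nearest[r], dist[r][c]))
                       for r in residents)
        best = max((c for c in candidates if c not in opened), key=gain)
        opened.add(best)
        for r in residents:
            nearest[r] = min(nearest[r], dist[r][best])
    return opened
```

For monotone submodular objectives like this coverage-style cost reduction, the greedy rule carries the classical (1 - 1/e) approximation guarantee, which is consistent with the near-optimal solutions reported above.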
As machine learning (ML) systems get adopted in more critical areas, it has become increasingly crucial to address the bias that could occur in these systems. Several fairness pre-processing algorithms are available to alleviate implicit biases during model training. These algorithms employ different concepts of fairness, often leading to conflicting strategies with consequential trade-offs between fairness and accuracy. In this work, we evaluate three popular fairness pre-processing algorithms and investigate the potential for combining all algorithms into a more robust pre-processing ensemble. We report on lessons learned that can help practitioners better select fairness algorithms for their models.
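As one concrete example of this family of methods, here is a minimal sketch of Kamiran-Calders reweighing, a popular fairness pre-processing algorithm (shown for illustration; not necessarily one of the three evaluated in this work).

```python
import numpy as np

def reweighing_weights(group: np.ndarray, label: np.ndarray) -> np.ndarray:
    """Kamiran-Calders reweighing: weight each example by
    P(group) * P(label) / P(group, label), so that group membership and
    outcome become statistically independent in the weighted training data.
    """
    w = np.empty(len(label), dtype=float)
    for a in np.unique(group):
        for y in np.unique(label):
            idx = (group == a) & (label == y)
            if idx.any():
                w[idx] = ((group == a).mean() * (label == y).mean()) / idx.mean()
    return w
```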
Federated Deep Learning frameworks can be used strategically to monitor land use locally and infer environmental impacts globally. Building a global model for land-use classification requires distributed data from across the world; a federated approach in this application domain avoids transferring data from distributed locations, saving network bandwidth and reducing communication cost. We use a federated UNet model for semantic segmentation of satellite and street-view images. The novelty of the proposed architecture is the integration of knowledge distillation to reduce communication cost and response time. We obtained accuracy above 95% while also achieving significant model compression: over 17x for street-view images and over 62x for satellite images. Our proposed framework has the potential to be a game-changer in real-time tracking of climate change across the planet.
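The distillation component typically trains a compact student model against temperature-softened teacher outputs. A standard soft-target loss for segmentation logits is sketched below; the paper's exact distillation setup may differ, and the temperature value is illustrative.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target distillation loss (Hinton et al.): KL divergence between
    temperature-softened teacher and student class distributions, computed
    per pixel for segmentation logits of shape (N, C, H, W).
    """
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```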
We study the algorithm configuration (AC) problem, in which one seeks to find an optimal parameter configuration of a given target algorithm in an automated way. Recently, there has been significant progress in designing AC approaches that satisfy strong theoretical guarantees. However, a significant gap remains between the practical performance of these approaches and that of state-of-the-art heuristic methods. To address this, we introduce AC-Band, a general approach for the AC problem based on multi-armed bandits that provides theoretical guarantees while exhibiting strong practical performance. We show that AC-Band requires significantly less computation time than other AC approaches that provide theoretical guarantees, while still yielding high-quality configurations.
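Bandit-based AC methods typically race candidate configurations and eliminate weak ones as evidence accumulates. Below is a minimal successive-halving-style sketch of this idea, purely illustrative; it is not AC-Band's actual procedure and carries none of its guarantees.

```python
def successive_halving(configs, evaluate, initial_budget=8):
    """Race candidate configurations: score survivors on a growing budget of
    problem instances, keeping the better half each round.

    evaluate(config) -> cost of one target-algorithm run (lower is better)
    """
    survivors, budget = list(configs), initial_budget
    while len(survivors) > 1:
        scores = {c: sum(evaluate(c) for _ in range(budget)) / budget
                  for c in survivors}
        survivors.sort(key=scores.get)                 # ascending mean cost
        survivors = survivors[: max(1, len(survivors) // 2)]
        budget *= 2                                    # spend more on fewer arms
    return survivors[0]
```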