With the advent of deep learning applications on edge devices, researchers are actively trying to optimize their deployment on low-power devices with restricted memory. There are established compression methods, such as quantization, pruning, and architecture search, that leverage commodity hardware. Apart from conventional compression algorithms, one may also redesign the operations of deep learning models so that they admit a more efficient implementation. To this end, we propose EuclidNet, a compression method designed for hardware implementation that replaces the multiplication $xw$ with the Euclidean distance $(x-w)^2$. We show that EuclidNet is aligned with matrix multiplication and can be used as a measure of similarity in the case of convolutional layers. Furthermore, we show that under various transformation and noise scenarios, EuclidNet matches the performance of deep learning models built with multiplication operations.
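To make the core substitution concrete, the following minimal numpy sketch (not the authors' implementation; shapes and names are illustrative) contrasts the usual dot-product response with a squared-Euclidean-distance response for a batch of flattened patches; the expansion $-\|x-w\|^2 = 2xw - \|x\|^2 - \|w\|^2$ is what ties the two operations together:

```python
import numpy as np

def dot_response(x, w):
    # Standard layer response: inner product between inputs and filters.
    return x @ w.T

def euclid_response(x, w):
    # EuclidNet-style response: negative squared Euclidean distance.  Expanding
    # -||x - w||^2 = 2*x.w - ||x||^2 - ||w||^2 shows it differs from the dot
    # product only by per-sample and per-filter additive terms.
    x_sq = np.sum(x ** 2, axis=1, keepdims=True)      # (batch, 1)
    w_sq = np.sum(w ** 2, axis=1, keepdims=True).T    # (1, filters)
    return 2.0 * (x @ w.T) - x_sq - w_sq

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))   # batch of 4 flattened patches
w = rng.standard_normal((8, 16))   # 8 filters
print(dot_response(x, w).shape, euclid_response(x, w).shape)   # (4, 8) (4, 8)
```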
Recurrent neural networks (RNNs) are the backbone of many text and speech applications. These architectures are typically made up of several computationally complex components such as non-linear activation functions, normalization, bi-directional dependence, and attention. In order to maintain good accuracy, these components are frequently run using full-precision floating-point computation, making them slow, inefficient, and difficult to deploy on edge devices. In addition, the complex nature of these operations makes them challenging to quantize using standard quantization methods without a significant performance drop. We present a quantization-aware training method for obtaining a highly accurate integer-only recurrent neural network (iRNN). Our approach supports layer normalization, attention, and an adaptive piecewise linear (PWL) approximation of activation functions, and thus serves a wide range of state-of-the-art RNNs. The proposed method enables RNN-based language models to run on edge devices with a $2\times$ improvement in runtime and a $4\times$ reduction in model size while maintaining accuracy similar to their full-precision counterparts.
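As a rough illustration of the PWL idea (not the paper's adaptive scheme, which learns the breakpoints during quantization-aware training), the sketch below approximates the sigmoid with a fixed set of knots and linear interpolation between them:

```python
import numpy as np

def pwl_sigmoid(x, knots=np.linspace(-6.0, 6.0, 9)):
    # Piecewise linear approximation of the sigmoid: sample the function at a
    # few knots and interpolate linearly; inputs outside the knot range are
    # clamped to the boundary values (close to 0 and 1).
    table = 1.0 / (1.0 + np.exp(-knots))
    return np.interp(x, knots, table)

x = np.linspace(-8.0, 8.0, 5)
print(pwl_sigmoid(x))                 # PWL approximation
print(1.0 / (1.0 + np.exp(-x)))       # full-precision reference
```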
Transformer-based models are used to achieve state-of-the-art performance on a variety of deep learning tasks. Because transformer-based models have a large number of parameters, fine-tuning them on downstream tasks is compute- and energy-intensive. Automatic mixed-precision FP32/FP16 fine-tuning of such models has previously been used to lower the computational resource requirements. However, with recent advances in low-bit integer back-propagation, it is possible to reduce the computation and memory footprint further. In this work, we explore a novel integer training method that uses integer arithmetic for the forward pass and gradient computation of the linear, convolutional, layer-normalization, and embedding layers of transformer-based models. Furthermore, we study the effect of various integer bit widths to find the minimum bit width required for integer fine-tuning of transformer-based models. We fine-tune transformer-based models, including ViT, on popular downstream tasks using integer layers. We show that 16-bit integer models match the floating-point baseline performance. Reducing the bit width to 10, we observe an average score drop of 0.5 points. Finally, reducing the bit width further to 8 results in an average score drop of 1.7 points.
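A minimal sketch of the kind of integer matrix multiplication such a method relies on is shown below (an illustration of symmetric per-tensor quantization, not the paper's pipeline; the backward pass, which the paper also keeps in integers, is omitted):

```python
import numpy as np

def quantize(t, bits=8):
    # Symmetric per-tensor quantization: map floats to signed integers plus a scale.
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(t)) / qmax + 1e-12
    return np.clip(np.round(t / scale), -qmax, qmax).astype(np.int32), scale

def int_linear(x, w, bits=8):
    # Linear-layer forward pass with an integer matmul; only the final rescale
    # back to floating point uses real arithmetic.
    qx, sx = quantize(x, bits)
    qw, sw = quantize(w, bits)
    return (qx @ qw.T) * (sx * sw)     # int32 accumulation, then dequantize

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 64)).astype(np.float32)
w = rng.standard_normal((32, 64)).astype(np.float32)
print(np.max(np.abs(int_linear(x, w) - x @ w.T)))   # small quantization error
```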
The ever-increasing computational complexity of deep learning models makes their training and deployment difficult on various cloud and edge platforms. Replacing floating-point arithmetic with low-bit integer arithmetic is a promising way to save energy, memory footprint, and latency of deep learning models. As such, quantization has attracted researchers' attention in recent years. However, using integer numbers to form a fully functional integer training pipeline, including the forward pass, back-propagation, and stochastic gradient descent, has not been studied in detail. Our empirical and mathematical results show that integer arithmetic is enough to train deep learning models. Unlike recent proposals, we directly switch the number representation of the computations. Our novel training method forms a fully integer training pipeline that, compared to floating point, does not change the trajectory of the loss or accuracy and requires no special hyperparameter tuning, distribution adjustment, or gradient clipping. Our experimental results show that the proposed method is effective on a wide range of tasks, including vision transformers, object detection, and semantic segmentation.
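For intuition about what switching the number representation can look like, here is a hypothetical fixed-point SGD update in which the weights, gradients, and learning rate are all stored as integers (a toy sketch, not the paper's actual pipeline):

```python
import numpy as np

FRAC_BITS = 8                      # fixed point: value = integer / 2**FRAC_BITS
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return np.round(np.asarray(x) * SCALE).astype(np.int32)

def from_fixed(q):
    return q.astype(np.float32) / SCALE

def sgd_step_fixed(w_q, g_q, lr=0.1):
    # Integer-only SGD step: the learning rate is also a fixed-point integer and
    # the product is shifted back down so the result stays in the same format.
    lr_q = to_fixed(lr)
    return w_q - ((lr_q * g_q) >> FRAC_BITS)

w_q = to_fixed([0.5, -1.25, 2.0])
g_q = to_fixed([0.1, 0.2, -0.4])
print(from_fixed(sgd_step_fixed(w_q, g_q)))   # close to [0.49, -1.27, 2.04]
```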
Nesterov's accelerated gradient (AG) is a popular technique for optimizing objective functions composed of two components: a convex loss and a penalty function. While the AG method performs well for convex penalties such as the lasso, convergence issues may arise when it is applied to nonconvex penalties such as SCAD. A recent proposal generalizes Nesterov's AG method to the nonconvex setting, but it has never been applied to sparse statistical learning problems. Several hyperparameters must be set before running the proposed algorithm, yet there is currently no explicit rule for how they should be chosen. In this paper, we consider applying this nonconvex AG algorithm to high-dimensional linear and logistic sparse learning problems, and propose hyperparameter settings based on a complexity upper bound to accelerate convergence. We further establish the rate of convergence and present a simple and useful bound for the damping sequence. Simulation studies show that convergence can be, on average, much faster than with the conventional ISTA algorithm. Our experiments also show that the proposed method generally outperforms the current state-of-the-art methods in terms of signal recovery.
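The method builds on the accelerated proximal gradient iteration; the sketch below shows that iteration for the convex (lasso) case, where the proximal map is soft-thresholding (a minimal illustration, not the generalized nonconvex algorithm studied in the paper, whose SCAD proximal map, damping sequence, and hyperparameter rules differ):

```python
import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of the l1 penalty; a nonconvex penalty such as SCAD
    # would substitute its own proximal map here.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def accelerated_prox_gradient(X, y, lam, n_iter=200):
    # Nesterov-accelerated proximal gradient for 0.5*||y - X b||^2 + lam*||b||_1.
    p = X.shape[1]
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth part
    b, z, t = np.zeros(p), np.zeros(p), 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y)
        b_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = b_new + ((t - 1.0) / t_new) * (b_new - b)   # extrapolation step
        b, t = b_new, t_new
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
beta = np.zeros(50); beta[:5] = 2.0
y = X @ beta + 0.1 * rng.standard_normal(100)
print(np.round(accelerated_prox_gradient(X, y, lam=5.0)[:8], 2))   # first five near 2
```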
Compared to regular cameras, Dynamic Vision Sensors, or Event Cameras, asynchronously output compact visual data based on changes in intensity at each pixel location. In this paper, we study the application of current image-based SLAM techniques to these novel sensors. To this end, the information in adaptively selected event windows is processed to form motion-compensated images. These images are then used to reconstruct the scene and estimate the 6-DOF pose of the camera. We also propose an inertial version of the event-only pipeline to assess its capabilities. We compare the results of different configurations of the proposed algorithm against the ground truth for sequences from two publicly available event datasets. We also compare the results of the proposed event-inertial pipeline with the state of the art and show that it can produce comparable or more accurate results provided the map estimate is reliable.
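As a rough sketch of the event-to-image step (heavily simplified: it assumes a single constant optical flow over the window, whereas the actual pipeline selects event windows adaptively and estimates the motion), warped events can be accumulated like this:

```python
import numpy as np

def motion_compensated_image(events, flow, t_ref, shape=(180, 240)):
    # events: array of rows (x, y, t, polarity); flow: assumed constant
    # (vx, vy) in pixels per second.  Each event is warped back to the
    # reference time and accumulated; a good flow estimate yields sharp edges.
    x, y, t, p = events.T
    xw = np.clip(np.round(x - flow[0] * (t - t_ref)), 0, shape[1] - 1).astype(int)
    yw = np.clip(np.round(y - flow[1] * (t - t_ref)), 0, shape[0] - 1).astype(int)
    img = np.zeros(shape)
    np.add.at(img, (yw, xw), p)
    return img

events = np.array([[10.0, 20.0, 0.00, 1.0],
                   [12.0, 20.0, 0.01, 1.0]])
# With the correct flow both events warp to the same pixel and reinforce each other.
print(motion_compensated_image(events, flow=(200.0, 0.0), t_ref=0.0)[20, 8:14])
```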
GTFLAT, as a game-theory-based add-on, addresses an important research question: how can a federated learning algorithm achieve better performance and training efficiency by setting more effective adaptive weights for averaging in the model aggregation phase? The main objectives for an ideal answer to this question are: (1) empowering federated learning algorithms to reach better performance in fewer communication rounds, notably in the face of heterogeneous scenarios, and, last but not least, (2) being easy to use alongside state-of-the-art federated learning algorithms as a new module. To this end, GTFLAT models the averaging task as a strategic game among the active users. It then proposes a systematic solution, based on population games and evolutionary dynamics, to find the equilibrium. In contrast with existing approaches that impose the weights on the participants, GTFLAT reaches a self-enforcing agreement among clients such that none of them is motivated to deviate from it unilaterally. The results reveal that, on average, using GTFLAT increases the top-1 test accuracy by 1.38%, while requiring 21.06% fewer communication rounds to reach that accuracy.
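The aggregation step GTFLAT plugs into is ordinary weighted model averaging; the sketch below shows that step with placeholder weights (how GTFLAT actually computes the weights, via the population-game equilibrium, is not reproduced here):

```python
import numpy as np

def aggregate(client_models, weights):
    # Weighted aggregation of client parameter dictionaries.  In plain FedAvg the
    # weights are proportional to local dataset sizes; GTFLAT instead derives
    # them from the equilibrium of a population game among active clients.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return {k: sum(wi * m[k] for wi, m in zip(w, client_models))
            for k in client_models[0]}

clients = [{"layer1": np.full((2, 2), float(i))} for i in (1, 2, 3)]
print(aggregate(clients, weights=[0.2, 0.3, 0.5])["layer1"])   # 2.3 everywhere
```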
DeepAngle is a machine-learning-based method for determining the contact angles of different phases in tomography images of porous materials. Measuring angles in 3D needs to be done within the surface perpendicular to the angle planes, and it can become inaccurate when dealing with the discretized space of image voxels. A computationally intensive solution is to correlate and vectorize all surfaces using an adaptable grid and then measure the angles within the desired planes. In contrast, the present study provides a rapid and low-cost technique powered by deep learning to estimate the interfacial angles directly from images. DeepAngle is tested on both synthetic and realistic images against the direct measurement technique and is found to improve the r-squared by 5 to 16% while lowering the computational cost by a factor of 20. This rapid method is especially applicable to processing large tomography data and time-resolved images, which would otherwise be computationally demanding. The developed code and the dataset are available in an open repository on GitHub (https://www.github.com/ArashRabbani/DeepAngle).
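For reference, the geometric core of the direct measurement (once suitable in-plane direction vectors have been extracted, which is the expensive part) is simply the angle between two vectors; a minimal sketch:

```python
import numpy as np

def angle_between(v1, v2):
    # Angle, in degrees, between two direction vectors, e.g. the tangents of the
    # fluid/fluid and fluid/solid interfaces within a plane perpendicular to the
    # contact line.
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

print(angle_between(np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])))   # 45.0
```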
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
The availability of Martian atmospheric data provided by several Martian missions has broadened the opportunity to investigate and study the conditions of the Martian ionosphere. As such, ionospheric models play a crucial part in improving our understanding of ionospheric behavior in response to different spatial, temporal, and space weather conditions. This work represents an initial attempt to construct an electron density prediction model of the Martian ionosphere using machine learning. The model targets the ionosphere at solar zenith angles ranging from 70 to 90 degrees and, as such, only utilizes observations from the Mars Global Surveyor mission. The performance of different machine learning methods was compared in terms of root mean square error, coefficient of determination, and mean absolute error. The bagged regression trees method performed best out of all the evaluated methods. Furthermore, the optimized bagged regression trees model outperformed other Martian ionosphere models from the literature (MIRI and NeMars) in estimating the peak electron density value and the peak density height, in terms of root mean square error and mean absolute error.
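For readers who want to reproduce the modelling choice, bagged regression trees are readily available in scikit-learn; the sketch below uses synthetic data in place of the Mars Global Surveyor observations and reports the same three metrics (a minimal illustration, not the paper's tuned model or feature set):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor        # default base learner is a decision tree
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real predictors (e.g. altitude, solar zenith angle,
# solar flux proxies) and the observed electron density.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = BaggingRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", np.sqrt(mean_squared_error(y_te, pred)),
      "MAE:", mean_absolute_error(y_te, pred),
      "R2:", r2_score(y_te, pred))
```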