Transfer learning on edge devices is challenging due to limited on-device resources. Existing work addresses this issue by training a subset of parameters or by adding model patches. Developed with inference in mind, Inverted Residual Blocks (IRBs) split a convolutional layer into depthwise and pointwise convolutions, leading to more stacked layers, e.g., convolution, normalization, and activation layers. Although they are efficient for inference, IRBs require additional activation maps to be stored in memory to train the weights of the convolution layers and the scales of the normalization layers. As a result, their high memory cost prohibits training IRBs on resource-limited edge devices and makes them unsuitable in the context of transfer learning. To address this issue, we present MobileTL, a memory- and computationally efficient on-device transfer learning method for models built with IRBs. MobileTL trains the shifts of the internal normalization layers to avoid storing activation maps for the backward pass. In addition, MobileTL approximates the backward computation of the activation layers (e.g., Hard-Swish and ReLU6) as a signed function, which enables storing a binary mask instead of activation maps for the backward pass. MobileTL fine-tunes only a few top blocks (close to the output) rather than propagating the gradient through the whole network, which reduces the computation cost. Our method reduces memory usage by 46% and 53% for MobileNetV2 and V3 IRBs, respectively. For MobileNetV3, we observe a 36% reduction in floating-point operations (FLOPs) when fine-tuning 5 blocks, while incurring only a 0.6% accuracy drop on CIFAR10. Extensive experiments on multiple datasets demonstrate that our method is Pareto-optimal (best accuracy under given hardware constraints) compared to prior work in transfer learning for edge devices.
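As a rough illustration of the binary-mask idea, the PyTorch sketch below implements a Hard-Swish activation whose backward pass is approximated by a step function, so only one bit per element is saved for the backward pass instead of the full activation map. The zero threshold (where the exact derivative equals 0.5) and the interface are illustrative assumptions, not necessarily the exact approximation used in MobileTL.

```python
import torch

class HardSwishBinaryBackward(torch.autograd.Function):
    """Hard-Swish whose backward pass is approximated by a step function.

    Only a 1-bit mask per element is stored for the backward pass instead of
    the full-precision activation map. The zero threshold is illustrative.
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x > 0)  # binary mask only, not the activation map
        return x * torch.clamp(x + 3.0, 0.0, 6.0) / 6.0  # exact Hard-Swish forward

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        return grad_output * mask.to(grad_output.dtype)


x = torch.randn(4, 8, requires_grad=True)
y = HardSwishBinaryBackward.apply(x)
y.sum().backward()  # x.grad is grad_output gated by the stored binary mask
```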
As the machine learning and systems communities strive for higher energy efficiency through custom deep neural network (DNN) accelerators, diverse precision or quantization levels, and model compression techniques, there is a need for design space exploration frameworks that incorporate quantization-aware processing elements into the accelerator design space while providing accurate and fast power, performance, and area models. In this work, we present QUIDAM, a highly parameterized quantization-aware DNN accelerator and model co-exploration framework. Our framework can facilitate future research on design space exploration of DNN accelerators across various design choices such as bit precision, processing element type, scratchpad size of processing elements, global buffer size, total number of processing elements, and DNN configurations. Our results show that different precisions and processing element types lead to significant differences in performance per area and energy. Specifically, our framework identifies a wide range of design points where performance per area and energy differ by more than 5x and 35x, respectively. With the proposed framework, we show that lightweight processing elements achieve on-par accuracy results and up to 5.7x improvement in performance per area and energy compared to the best INT16-based implementation. Finally, thanks to the efficiency of the pre-characterized power, performance, and area models, QUIDAM can speed up the design exploration process by 3-4 orders of magnitude, as it eliminates the need for expensive synthesis and characterization of each design.
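As a loose illustration of how such a pre-characterized design space can be swept, the Python sketch below enumerates a few hypothetical design points (bit precision, processing element type, number of processing elements) and ranks them by performance per area. All numbers, names, and the cost model are invented placeholders, not QUIDAM's actual characterization data or API.

```python
from itertools import product

# Hypothetical pre-characterized per-PE numbers; real values would come from
# synthesis-based characterization of each processing element design.
PE_LIBRARY = {
    ("int4",  "lightweight"): {"area_mm2": 0.002, "power_mw": 0.8, "ops_per_cycle": 2},
    ("int8",  "lightweight"): {"area_mm2": 0.004, "power_mw": 1.5, "ops_per_cycle": 2},
    ("int16", "baseline"):    {"area_mm2": 0.012, "power_mw": 4.0, "ops_per_cycle": 2},
}

def evaluate(precision, pe_type, num_pes, freq_mhz, workload_macs):
    """Toy analytical model: throughput, area, and energy for one design point."""
    pe = PE_LIBRARY[(precision, pe_type)]
    throughput = num_pes * pe["ops_per_cycle"] * freq_mhz * 1e6   # MACs per second
    latency_s = workload_macs / throughput
    area_mm2 = num_pes * pe["area_mm2"]
    energy_j = num_pes * pe["power_mw"] * 1e-3 * latency_s
    return {"perf_per_area": throughput / area_mm2, "energy_j": energy_j}

best = None
for (prec, pe_type), num_pes in product(PE_LIBRARY, [64, 256, 1024]):
    result = evaluate(prec, pe_type, num_pes, freq_mhz=500, workload_macs=300e6)
    if best is None or result["perf_per_area"] > best[1]["perf_per_area"]:
        best = ((prec, pe_type, num_pes), result)
print("best design point:", best)
```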
Embedded machine learning (ML) systems have now become the dominant platform for deploying ML serving tasks and are expected to be equally important for training ML models. With this comes the challenge of overall efficient deployment, in particular low-power and high-throughput implementations, under stringent memory constraints. In this context, non-volatile memory (NVM) technologies such as STT-MRAM and SOT-MRAM have significant advantages compared to conventional SRAM due to their non-volatility, higher cell density, and scalability features. While prior work has investigated several architectural implications of NVM for generic applications, in this work we present DeepNVM++, a comprehensive framework to characterize, model, and analyze NVM-based caches in GPU architectures for deep learning (DL) applications by combining technology-specific circuit-level models with the actual memory behavior of various DL workloads. DeepNVM++ relies on iso-capacity and iso-area performance and energy models for last-level caches implemented with conventional SRAM and the emerging STT-MRAM and SOT-MRAM technologies. In the iso-capacity case, STT-MRAM and SOT-MRAM provide up to 3.8x and 4.7x energy-delay product (EDP) reduction and 2.4x and 2.8x area reduction compared to conventional SRAM, respectively. Under iso-area assumptions, STT-MRAM and SOT-MRAM provide up to 2.2x and 2.4x EDP reduction and accommodate 2.3x and 3.3x cache capacity compared to SRAM, respectively. We also perform a scalability analysis and show that STT-MRAM and SOT-MRAM achieve EDP reductions compared to SRAM for large cache capacities. DeepNVM++ is demonstrated on STT-/SOT-MRAM technologies and can be used for the characterization, modeling, and analysis of any NVM technology for last-level caches in GPUs for DL applications.
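The snippet below sketches, under invented per-access numbers, how an iso-capacity energy-delay product (EDP) comparison of the kind reported above can be computed from per-technology read/write energy and latency. The parameter values and the serialized-access delay model are illustrative placeholders rather than DeepNVM++'s circuit-level models.

```python
# Hypothetical per-technology cache parameters (illustrative only; a real
# framework would derive these from circuit-level characterization tools).
TECH = {
    "SRAM":     {"read_nj": 0.50, "write_nj": 0.55, "latency_ns": 2.0},
    "STT-MRAM": {"read_nj": 0.30, "write_nj": 1.00, "latency_ns": 3.0},
    "SOT-MRAM": {"read_nj": 0.25, "write_nj": 0.60, "latency_ns": 2.5},
}

def edp(tech, reads, writes):
    """Energy-delay product for a given access trace, serializing accesses for simplicity."""
    p = TECH[tech]
    energy_j = (reads * p["read_nj"] + writes * p["write_nj"]) * 1e-9
    delay_s = (reads + writes) * p["latency_ns"] * 1e-9
    return energy_j * delay_s

# Iso-capacity comparison for a DL workload's last-level-cache access counts.
reads, writes = 8e6, 2e6
baseline = edp("SRAM", reads, writes)
for tech in ("STT-MRAM", "SOT-MRAM"):
    print(f"{tech}: {baseline / edp(tech, reads, writes):.2f}x EDP reduction vs SRAM")
```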
Machine learning (ML) has entered the mobile era, in which large numbers of ML models are deployed on edge devices. However, running common ML models on an edge device can generate excessive heat from computation, forcing the device to "slow down" to prevent overheating, a phenomenon known as thermal throttling. This paper studies the impact of thermal throttling on mobile phones: when it occurs, the CPU clock frequency is reduced and model inference latency can increase dramatically. This unpleasant, inconsistent behavior has a significant negative impact on user experience but has long been overlooked. To counter thermal throttling, we propose to exploit dynamic networks with shared weights and to switch seamlessly between large and small ML models, based on the device's thermal profile, just before the system is about to throttle. With the proposed dynamic switching, the application runs consistently without CPU clock frequency degradation or latency increases. In addition, we study the accuracy incurred when deploying dynamic switching and show that our approach provides a reasonable trade-off between model latency and model accuracy.
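A minimal sketch of the switching idea is given below: a convolution layer that can run at a reduced width over a shared weight tensor, with the width chosen from the current CPU temperature read from a common Linux sysfs path. The 65 C guard value, the sysfs path, and the slimmable-layer interface are assumptions for illustration, not the paper's exact mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableConv2d(nn.Conv2d):
    """Conv layer that can run at reduced width over a shared weight tensor.

    The 'small' model is simply the first fraction of the channels of the
    'large' one, so no extra weights are stored.
    """
    def forward(self, x, width_mult=1.0):
        out_ch = max(1, int(self.out_channels * width_mult))
        w = self.weight[:out_ch, : x.shape[1]]
        b = self.bias[:out_ch] if self.bias is not None else None
        return F.conv2d(x, w, b, self.stride, self.padding)

def cpu_temp_celsius(path="/sys/class/thermal/thermal_zone0/temp"):
    # Common Linux sysfs location; the exact thermal zone differs per SoC.
    try:
        with open(path) as f:
            return int(f.read().strip()) / 1000.0
    except OSError:
        return 40.0  # fall back to a cool reading when the sysfs node is absent

conv = SlimmableConv2d(16, 32, kernel_size=3, padding=1)
x = torch.randn(1, 16, 56, 56)
width = 0.5 if cpu_temp_celsius() >= 65.0 else 1.0  # 65 C guard is illustrative
y = conv(x, width_mult=width)
```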
Recently, self-supervised Masked Autoencoders (MAE) have attracted unprecedented attention for their impressive representation learning ability. However, the pretext task, masked image modeling (MIM), reconstructs the missing local patches and lacks a global understanding of the image. This paper extends MAE to a fully supervised setting by adding a supervised classification branch, thereby enabling MAE to learn global features effectively from golden labels. The proposed Supervised MAE (SupMAE) exploits only a visible subset of image patches for classification, unlike standard supervised pre-training, which uses all image patches. Through experiments, we demonstrate that SupMAE is not only more training-efficient but also learns more robust and transferable features. Specifically, SupMAE achieves performance comparable to MAE using only 30% of the compute when evaluated on ImageNet with the ViT-B/16 model. SupMAE's robustness on ImageNet variants and its transfer learning performance outperform MAE and standard supervised pre-training counterparts. Code will be made publicly available.
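The following sketch shows one way the supervised branch could be combined with the MAE objective: classify from pooled features of the visible patches and add a weighted cross-entropy term to the reconstruction loss. The `mae_model` interface, the pooling, and the loss weight are assumptions, not the paper's exact recipe.

```python
import torch.nn as nn
import torch.nn.functional as F

def supmae_loss(mae_model, classifier, images, labels, cls_weight=1.0):
    """One training step's loss for a SupMAE-style objective (sketch).

    Assumes `mae_model(images)` returns the usual MAE reconstruction loss plus
    the encoder features of the *visible* patches; both the interface and the
    global-average-pooled linear classifier are illustrative choices.
    """
    recon_loss, visible_features = mae_model(images)      # assumed API
    logits = classifier(visible_features.mean(dim=1))     # pool visible tokens only
    cls_loss = F.cross_entropy(logits, labels)
    return recon_loss + cls_weight * cls_loss

# A plain linear head can serve as the classifier, e.g. for ViT-B/16 on ImageNet:
classifier = nn.Linear(768, 1000)
```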
The monograph summarizes and analyzes the current state of development of computer and mathematical simulation and modeling, the automation of management processes, the use of information technologies in education, the design of information systems and software complexes, and the development of computer telecommunication networks and technologies, covering most of the areas united by the term Industry 4.0.
Despite its importance for federated learning, continuous learning and many other applications, on-device training remains an open problem for EdgeAI. The problem stems from the large number of operations (e.g., floating point multiplications and additions) and memory consumption required during training by the back-propagation algorithm. Consequently, in this paper, we propose a new gradient filtering approach which enables on-device DNN model training. More precisely, our approach creates a special structure with fewer unique elements in the gradient map, thus significantly reducing the computational complexity and memory consumption of back-propagation during training. Extensive experiments on image classification and semantic segmentation with multiple DNN models (e.g., MobileNet, DeepLabV3, UPerNet) and devices (e.g., Raspberry Pi and Jetson Nano) demonstrate the effectiveness and wide applicability of our approach. For example, compared to SOTA, we achieve up to 19$\times$ speedup and 77.1% memory savings on ImageNet classification with only 0.1% accuracy loss. Finally, our method is easy to implement and deploy; over 20$\times$ speedup and 90% energy savings have been observed compared to highly optimized baselines in MKLDNN and CUDNN on NVIDIA Jetson Nano. Consequently, our approach opens up a new direction of research with a huge potential for on-device training.
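A minimal sketch of the core idea, applied post hoc to a gradient map, is shown below: each patch of the gradient is replaced by its average, leaving far fewer unique elements. The paper integrates this structure into the backward pass of convolutions rather than filtering gradients after the fact, and the patch size here is illustrative.

```python
import torch
import torch.nn.functional as F

def filter_gradient(grad_map, patch_size=4):
    """Replace each patch of the gradient map by its patch average (sketch)."""
    n, c, h, w = grad_map.shape
    pooled = F.avg_pool2d(grad_map, patch_size)                # one value per patch
    return F.interpolate(pooled, size=(h, w), mode="nearest")  # broadcast it back

g = torch.randn(1, 8, 32, 32)
g_filtered = filter_gradient(g)
print(g_filtered.unique().numel(), "unique values vs", g.unique().numel())
```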
This short report reviews the current state of the research and methodology on theoretical and practical aspects of Artificial Neural Networks (ANN). It was prepared to gather the state-of-the-art knowledge needed to construct complex, hypercomplex and fuzzy neural networks. The report reflects the individual interests of the authors and by no means can be treated as a comprehensive review of the ANN discipline. Considering the fast development of this field, it is currently impossible to do a detailed review within a reasonable number of pages. The report is an outcome of the Project 'The Strategic Research Partnership for the mathematical aspects of complex, hypercomplex and fuzzy neural networks' meeting at the University of Warmia and Mazury in Olsztyn, Poland, organized in September 2022.
Using robots in educational contexts has already been shown to be beneficial for a student's learning and social behaviour. To elevate them to the next level of providing more effective and human-like tutoring, the ability to adapt to the user and to express proactivity is fundamental. By acting proactively, intelligent robotic tutors anticipate situations where problems may arise for the student and act in advance to prevent negative outcomes. Still, the decisions of when and how to behave proactively are open questions. Therefore, this paper investigates how a student's cognitive-affective states can be used by a robotic tutor to trigger proactive tutoring dialogue, with the aim of improving the learning experience. For this reason, we observed a concept-learning task scenario in which a robotic assistant proactively helped when negative user states were detected. In a learning task, the user states of frustration and confusion were deemed to have negative effects on the outcome of the task and were used to trigger proactive behaviour. In an empirical user study with 40 undergraduate and doctoral students, we examined whether initiating proactive behaviour after detecting signs of confusion and frustration improves the student's concentration and trust in the agent. Additionally, we investigated which level of proactive dialogue is useful for promoting the student's concentration and trust. The results show that highly proactive behaviour harms trust, especially when triggered during negative cognitive-affective states, but contributes to keeping the student focused on the task when triggered in these states. Based on our study results, we further discuss future steps for improving the proactive assistance of robotic tutoring systems.
Sunquakes are seismic emissions visible on the solar surface, associated with some solar flares. Although discovered in 1998, they have only recently become a more commonly detected phenomenon. Despite the availability of several manual detection guidelines, to our knowledge the astrophysical data produced for sunquakes are new to the field of Machine Learning. Detecting sunquakes is a daunting task for human operators, and this work aims to ease and, if possible, improve their detection. Thus, we introduce a dataset constructed from acoustic egression-power maps of solar active regions obtained for Solar Cycles 23 and 24 using the holography method. We then present a pedagogical approach to the application of machine learning representation methods for sunquake detection using AutoEncoders, Contrastive Learning, Object Detection and recurrent techniques, which we enhance by introducing several custom domain-specific data augmentation transformations. We address the main challenges of the automated sunquake detection task, namely the very high noise patterns in and outside the active region shadow and the extreme class imbalance caused by the limited number of frames that present sunquake signatures. With our trained models, we find temporal and spatial locations of peculiar acoustic emission and qualitatively associate them with eruptive and high-energy emission. While noting that these models are still at a prototype stage and there is much room for improvement in metrics and bias levels, we hypothesize that their agreement on example use cases has the potential to enable the detection of weak solar acoustic manifestations.
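As a purely illustrative sketch of one of the listed ingredients, the snippet below shows a tiny convolutional autoencoder that could score egression-power map frames by reconstruction error, flagging high-error frames as sunquake candidates. The architecture, input size, and scoring rule are assumptions and not the models described above.

```python
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    """Tiny convolutional autoencoder for egression-power map frames (sketch).

    Trained on quiet (no-sunquake) frames only; a high reconstruction error at
    inference time then flags candidate sunquake frames.
    """
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model, frame):
    """Mean squared reconstruction error of a single frame."""
    with torch.no_grad():
        return torch.mean((model(frame) - frame) ** 2).item()

model = ConvAutoEncoder()
frame = torch.randn(1, 1, 64, 64)  # a cropped/rescaled egression-power map patch
print(anomaly_score(model, frame))
```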