Recent advances in state-of-the-art ultra-low-power embedded devices for machine learning (ML) have enabled a new class of products whose key feature is running ML on microcontrollers with less than 1 mW of power consumption (TinyML). TinyML offers a unique solution by aggregating and analyzing data at the edge on low-power embedded devices. However, we have only recently been able to run ML on microcontrollers, and the field is still in its infancy, which means that hardware, software, and research are changing extremely rapidly. Consequently, many TinyML frameworks have been developed for different platforms to facilitate the deployment of ML models and to standardize the process. Therefore, in this paper we focus on benchmarking two popular frameworks, TensorFlow Lite Micro (TFLM) on the Arduino Nano BLE and Cube AI on the STM32-NucleoF401, to provide a standardized framework-selection criterion for specific applications.
translated by Google Translate
The proliferation of unmanned aerial vehicles (UAVs) has increased dramatically in recent years. UAVs can accomplish complex or dangerous tasks in a reliable and cost-effective way, yet they are still limited by power-consumption problems, which pose serious constraints on flight duration and the completion of energy-demanding tasks. The ability to provide UAVs with advanced decision-making capabilities in an energy-efficient way would therefore be extremely beneficial. In this paper, we present a practical solution to this problem based on deep learning. The developed system integrates an OpenMV microcontroller into a DJI Tello micro aerial vehicle (MAV). The microcontroller hosts a set of machine-learning-enabled inference tools that cooperate to control the drone's navigation and complete a given mission objective. The goal of this approach is to exploit the new opportunities that TinyML offers through OpenMV, including offline inference, low latency, energy efficiency, and data security. The approach is successfully validated on a practical application consisting of the onboard detection of people wearing protective masks in a crowded environment.
System identification, also known as learning forward models, transfer functions, system dynamics, etc., has a long tradition in both science and engineering across different fields. In particular, it is a recurring theme in Reinforcement Learning research, where forward models approximate the state transition function of a Markov Decision Process by learning a mapping from the current state and action to the next state. This problem is commonly defined as a Supervised Learning problem in a direct way. This common approach faces several difficulties due to the inherent complexities of the dynamics to learn, for example, delayed effects, high non-linearity, non-stationarity, partial observability and, more importantly, error accumulation when using bootstrapped predictions (predictions based on past predictions) over large time horizons. Here we explore the use of Reinforcement Learning for this problem. We elaborate on why and how this problem fits naturally and soundly as a Reinforcement Learning problem, and present some experimental results demonstrating that RL is a promising technique to solve this kind of problem.
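The error-accumulation problem with bootstrapped predictions can be illustrated with a toy sketch (the dynamics and coefficients below are hypothetical, not from the paper): a learned forward model with a small one-step error is rolled out on its own predictions, and the error compounds over the horizon.

```python
# Toy illustration of error accumulation in bootstrapped multi-step
# prediction: a forward model with a small one-step bias feeds its own
# output back as input, so the prediction error grows with the horizon.

def true_step(x):
    """Ground-truth dynamics of a hypothetical 1-D linear system."""
    return 0.9 * x + 1.0

def learned_step(x):
    """Imperfect forward model: a slight bias in the learned coefficient."""
    return 0.92 * x + 1.0

def rollout(step, x0, horizon):
    xs = [x0]
    for _ in range(horizon):
        xs.append(step(xs[-1]))  # bootstrapped: prediction feeds prediction
    return xs

true_traj = rollout(true_step, 10.0, 50)
pred_traj = rollout(learned_step, 10.0, 50)
errors = [abs(a - b) for a, b in zip(true_traj, pred_traj)]
# the one-step error is small, but it compounds over the horizon
print(errors[1], errors[-1])
```

A supervised model fit on one-step transitions sees only the small `errors[1]`-scale error; the much larger long-horizon error only appears when predictions are chained, which is the regime the abstract argues RL handles more naturally.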
Wireless Sensor Network (WSN) applications reshape the trend of warehouse monitoring systems, allowing them to track and locate massive numbers of logistic entities in real time. To support these tasks, classic Radio Frequency (RF)-based localization approaches (e.g., triangulation and trilateration) face challenges due to multi-path fading and signal loss in noisy warehouse environments. In this paper, we investigate machine learning methods using a new grid-based WSN platform called Sensor Floor that can overcome these issues. Sensor Floor consists of 345 nodes installed across the floor of our logistics research hall, with dual-band RF and Inertial Measurement Unit (IMU) sensors. Our goal is to localize all logistic entities; for this study, we use a mobile robot. We record distributed sensing measurements of Received Signal Strength Indicator (RSSI) and IMU values as the dataset, and position tracking from a Vicon system as the ground truth. The asynchronously collected data is pre-processed and trained using Random Forest and Convolutional Neural Network (CNN) models. The CNN model with regularization outperforms the Random Forest in terms of localization accuracy, with approximately 15 cm error. Moreover, the CNN architecture can be configured flexibly depending on the scenario in the warehouse. The hardware, software, and CNN architecture of the Sensor Floor are open-source at https://github.com/FLW-TUDO/sensorfloor.
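To make the RSSI-based localization idea concrete, here is a minimal fingerprinting sketch on made-up data; a plain k-nearest-neighbors average stands in for the Random Forest and CNN regressors evaluated in the paper, and all RSSI values and positions below are hypothetical.

```python
import math

# Minimal RSSI-fingerprinting localization sketch (hypothetical data).
# Reference fingerprints map RSSI vectors (one value per sensor node)
# to known (x, y) positions; a query is localized by averaging the
# positions of its k nearest fingerprints in RSSI space.

fingerprints = [
    # (RSSI from 3 nodes in dBm, position in metres)
    ((-40, -70, -80), (0.0, 0.0)),
    ((-70, -40, -80), (3.0, 0.0)),
    ((-80, -70, -40), (1.5, 3.0)),
    ((-55, -55, -75), (1.5, 0.0)),
]

def localize(rssi, k=2):
    ranked = sorted(
        fingerprints,
        key=lambda fp: math.dist(fp[0], rssi),  # Euclidean distance in RSSI space
    )[:k]
    xs = [pos[0] for _, pos in ranked]
    ys = [pos[1] for _, pos in ranked]
    return (sum(xs) / k, sum(ys) / k)

print(localize((-50, -60, -78)))
```

A learned regressor replaces the nearest-neighbor lookup in practice, which is what lets the paper's CNN absorb multi-path effects that distort simple distance models.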
Graph neural networks have been shown to learn effective node representations, enabling node-, link-, and graph-level inference. Conventional graph networks assume static relations between nodes, while relations between entities in a video often evolve over time, with nodes entering and exiting dynamically. In such temporally-dynamic graphs, a core problem is inferring the future state of spatio-temporal edges, which can constitute multiple types of relations. To address this problem, we propose MTD-GNN, a graph network for predicting temporally-dynamic edges for multiple types of relations. We propose a factorized spatio-temporal graph attention layer to learn dynamic node representations and present a multi-task edge prediction loss that models multiple relations simultaneously. The proposed architecture operates on top of scene graphs that we obtain from videos through object detection and spatio-temporal linking. Experimental evaluations on ActionGenome and CLEVRER show that modeling multiple relations in our temporally-dynamic graph network can be mutually beneficial, outperforming existing static and spatio-temporal graph neural networks, as well as state-of-the-art predicate classification methods.
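The multi-task edge prediction loss can be sketched as one binary cross-entropy term per relation type over the same edges, combined with per-task weights. The relation names, weights, and scores below are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Sketch of a multi-task edge-prediction loss: each relation type
# contributes a mean binary cross-entropy over edge predictions, and
# the per-relation losses are summed with task weights.

def bce(pred, target, eps=1e-7):
    p = min(max(pred, eps), 1 - eps)  # clamp for numerical stability
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

def multi_task_edge_loss(preds_per_relation, targets_per_relation, weights):
    total = 0.0
    for rel, preds in preds_per_relation.items():
        targets = targets_per_relation[rel]
        rel_loss = sum(bce(p, t) for p, t in zip(preds, targets)) / len(preds)
        total += weights[rel] * rel_loss
    return total

# Hypothetical predicted edge probabilities for two relation types
preds = {"contact": [0.9, 0.2], "spatial": [0.6, 0.7]}
targets = {"contact": [1, 0], "spatial": [1, 1]}
loss = multi_task_edge_loss(preds, targets, {"contact": 1.0, "spatial": 1.0})
print(loss)
```

Training the relation heads jointly against such a combined loss is what allows the tasks to share node representations and, as the paper reports, benefit each other.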
Generative models have been very successful over the years and have received significant attention for synthetic data generation. As deep learning models become more and more complex, they require large amounts of data to perform accurately. In medical image analysis, such generative models play a crucial role, as the available data is limited due to challenges related to data privacy, lack of data diversity, and uneven data distributions. In this paper, we present a method to generate brain tumor MRI images using generative adversarial networks. We utilize StyleGAN2 with the ADA methodology to generate high-quality brain MRI with tumors while using a significantly smaller amount of training data than existing approaches. We use three pre-trained models for transfer learning. Results demonstrate that the proposed method can learn the distributions of brain tumors. Furthermore, the model can generate high-quality synthetic brain MRI with tumors, which can mitigate small-sample-size issues. The approach addresses limited data availability by generating realistic-looking brain MRI with tumors. The code is available at: https://github.com/rizwanqureshi123/Brain-Tumor-Synthetic-Data.
This paper presents an accurate, highly efficient, and learning-free method for large-scale odometry estimation using spinning radar, empirically found to generalize well across very diverse environments -- outdoors, from urban to woodland, and indoors in warehouses and mines -- without changing parameters. Our method integrates motion compensation within a sweep with one-to-many scan registration that minimizes distances between nearby oriented surface points and mitigates outliers with a robust loss function. Extending our previous approach CFEAR, we present an in-depth investigation on a wider range of data sets, quantifying the importance of filtering, resolution, registration cost and loss functions, keyframe history, and motion compensation. We present a new solving strategy and configuration that overcomes previous issues with sparsity and bias, and improves on our state of the art by 38%, thus, surprisingly, outperforming radar SLAM and approaching lidar SLAM. The most accurate configuration achieves 1.09% error at 5 Hz on the Oxford benchmark, and the fastest achieves 1.79% error at 160 Hz.
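The effect of a robust loss on outlier residuals can be shown with a small sketch. A Cauchy loss is used here as one common robust choice (the paper's exact loss function may differ), and the residual values are hypothetical: a single large outlier dominates a squared-error objective but contributes only logarithmically under the robust loss.

```python
import math

# Robust loss sketch for scan registration: compare the total cost of
# a residual set containing one outlier under squared loss vs. a
# Cauchy robust loss, which grows only logarithmically in the residual.

def cauchy_loss(r, c=1.0):
    """Cauchy (Lorentzian) robust loss with scale parameter c."""
    return 0.5 * c * c * math.log(1.0 + (r / c) ** 2)

residuals = [0.05, 0.02, 0.04, 5.0]  # metres; the last one is an outlier

squared_total = sum(r * r for r in residuals)
robust_total = sum(cauchy_loss(r) for r in residuals)
print(squared_total, robust_total)
```

Because the outlier no longer dominates the objective, the registration optimum is driven by the many small point-to-surface residuals, which is the behavior the abstract attributes to the robust loss.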
Since its inception in 2016, the Alexa Prize program has enabled hundreds of university students to explore and compete in developing conversational agents through the SocialBot Grand Challenge. The goal of the challenge is to build agents capable of conversing coherently and engagingly with humans on popular topics for 20 minutes, while achieving an average rating of at least 4.0/5.0. However, as conversational agents attempt to help users complete increasingly complex tasks, new conversational-AI techniques and evaluation platforms are needed. The Alexa Prize TaskBot Challenge, established in 2021, builds on the success of the SocialBot Challenge by introducing the requirement of interactively assisting humans with real-world cooking and do-it-yourself tasks, while using both voice and visual modalities. This challenge requires TaskBots to identify and understand users' needs, identify and integrate task and domain knowledge, and develop new ways of engaging users without distracting them from the task at hand, among other challenges. This paper provides an overview of the TaskBot Challenge, describes the infrastructure support provided to the teams with the CoBot Toolkit, and summarizes the approaches the participating teams took to overcome the research challenges. Finally, it analyzes the performance of the competing TaskBots during the first year of the competition.
Objective: Steady-state visual evoked potentials (SSVEPs), measured with electroencephalography (EEG), yield decent information transfer rates (ITRs) in brain-computer interface (BCI) spellers. However, the current high-performing SSVEP BCI spellers in the literature require an initially lengthy and tiring user-specific training for each new user to adapt the system, including data collection with EEG experiments, algorithm training, and calibration (all before actual use of the system). This hinders the widespread adoption of BCIs. To ensure practicality, we propose a novel target-identification method based on an ensemble of deep neural networks (DNNs) that does not require any user-specific training. Method: We exploit already-existing literature datasets from participants of previously conducted EEG experiments to first train a global target-identifier DNN, which is then fine-tuned to each participant. We transfer this ensemble of fine-tuned DNNs to the new user instance, determine the k most representative DNNs according to the participants' statistical similarity to the new user, and predict the target character through a weighted combination of the ensemble's predictions. Results: On two large-scale datasets, Benchmark and BETA, our method achieves impressive ITRs of 155.51 bits/min and 114.64 bits/min. Code is available for reproducibility: https://github.com/osmanberke/ensemble-fnns. Conclusion: The proposed method significantly outperforms all state-of-the-art alternatives for all stimulation durations in [0.2-1.0] seconds on both datasets. Significance: Our ensemble-DNN method has the potential to promote the practical, widespread deployment of BCI spellers in daily life, as it provides the highest performance while enabling immediate use without any user-specific training.
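The weighted-combination step of the ensemble can be sketched as follows. The model identifiers, similarity values, and per-character scores below are illustrative placeholders, not values from the paper: the k models whose source participants are most similar to the new user are selected, and their per-character scores are combined by a similarity-weighted sum before taking the argmax.

```python
# Sketch of similarity-weighted ensemble prediction: select the k most
# representative fine-tuned models for a new user, combine their
# per-character scores weighted by similarity, and predict the argmax.

def ensemble_predict(model_scores, similarities, k=2):
    # model_scores: {model_id: [score per target character]}
    # similarities: {model_id: similarity of that model's source user
    #                to the new user}
    top_k = sorted(similarities, key=similarities.get, reverse=True)[:k]
    weight_sum = sum(similarities[m] for m in top_k)
    n_targets = len(next(iter(model_scores.values())))
    combined = [
        sum(similarities[m] * model_scores[m][i] for m in top_k) / weight_sum
        for i in range(n_targets)
    ]
    return max(range(n_targets), key=combined.__getitem__)

scores = {"dnn_a": [0.1, 0.8, 0.1], "dnn_b": [0.2, 0.6, 0.2], "dnn_c": [0.7, 0.2, 0.1]}
sims = {"dnn_a": 0.9, "dnn_b": 0.8, "dnn_c": 0.1}
print(ensemble_predict(scores, sims, k=2))  # index of the predicted target
```

With k=2 the dissimilar model is excluded, so its conflicting vote does not sway the prediction; this selection step is what makes the ensemble adapt to a new user without retraining.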
This study is the second phase in a series of investigations on the optical character recognition (OCR) of Arabic historical documents, and examines how different modeling procedures interact with the problem. The first study investigated the effect of transformers on our custom Arabic dataset. One of the drawbacks of the first study was the size of the training data: only 15,000 images out of our 30 million, due to a lack of resources. In addition, we add an image-enhancement layer, time and space optimizations, and a post-correction layer to help the model predict the correct context. Notably, we propose an end-to-end text-recognition approach that uses a vision transformer, namely BEiT, as the encoder and a vanilla transformer as the decoder, eliminating CNNs for feature extraction and reducing the model's complexity. Experiments show that our end-to-end model outperforms the convolutional backbone. The model achieves a CER of 4.46%.
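The reported metric, character error rate (CER), is conventionally computed as the Levenshtein edit distance between the predicted and reference strings divided by the reference length. A minimal dynamic-programming sketch on toy strings (the example words are illustrative, not from the paper's dataset):

```python
# Character error rate (CER) = Levenshtein edit distance / reference
# length. Single-row dynamic-programming implementation.

def levenshtein(ref, hyp):
    d = list(range(len(hyp) + 1))
    for i, rc in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, hc in enumerate(hyp, 1):
            prev, d[j] = d[j], min(
                d[j] + 1,            # deletion
                d[j - 1] + 1,        # insertion
                prev + (rc != hc),   # substitution (or match, cost 0)
            )
    return d[len(hyp)]

def cer(ref, hyp):
    return levenshtein(ref, hyp) / len(ref)

print(cer("recognition", "recogniton"))  # one dropped character
```

A CER of 4.46% thus means roughly 4 or 5 character-level edits per 100 reference characters.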