Spectrum sensing allows cognitive radio systems to detect relevant signals in spite of severe interference. Most existing spectrum sensing techniques use a particular signal-noise model with certain assumptions and derive the corresponding detection performance. To deal with this uncertainty, learning-based approaches are being adopted, and more recently deep-learning-based tools have become popular. Here, we propose an approach to spectrum sensing based on long short-term memory (LSTM), a critical element of deep learning networks (DLN). The use of LSTM facilitates implicit feature learning from spectrum data. The DLN is trained using several features, and the performance of the proposed sensing technique is validated with the help of an empirical testbed setup using an Adalm Pluto. The testbed is trained to acquire the primary signal of a real-world radio broadcast taking place over FM. Experimental data show that, compared to current spectrum sensing methods, our approach performs well in terms of detection and classification accuracy even at low signal-to-noise ratios.
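A minimal sketch of the kind of LSTM detector described, not the authors' implementation: the window length, layer sizes, and the synthetic tone-versus-noise data are all illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's): classify windows of
# (I, Q) baseband samples as vacant (0) or occupied (1) with an LSTM.
import numpy as np
import tensorflow as tf

WINDOW = 128  # samples per sensing window (assumption)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 2)),               # (I, Q) per time step
    tf.keras.layers.LSTM(64),                        # implicit feature learning
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(primary user present)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Synthetic low-SNR stand-in for captured spectrum data: noise vs. tone+noise.
rng = np.random.default_rng(0)
noise = rng.normal(size=(500, WINDOW, 2))
t = np.arange(WINDOW) / WINDOW
tone = np.stack([np.cos(2 * np.pi * 8 * t), np.sin(2 * np.pi * 8 * t)], axis=-1)
signal = tone + rng.normal(size=(500, WINDOW, 2))
X = np.concatenate([noise, signal])
y = np.concatenate([np.zeros(500), np.ones(500)])
idx = rng.permutation(len(X))  # shuffle so validation_split sees both classes
model.fit(X[idx], y[idx], epochs=3, batch_size=32, validation_split=0.2)
```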
t-SNE remains one of the most popular embedding techniques for visualizing high-dimensional data. Most standard packages of t-SNE, such as scikit-learn, use the Barnes-Hut t-SNE (BH t-SNE) algorithm for large datasets. However, existing CPU implementations of this algorithm are inefficient. In this work, we accelerate the BH t-SNE on CPUs via cache optimizations, SIMD, parallelizing sequential steps, and improving parallelization of multithreaded steps. Our implementation (Acc-t-SNE) is up to 261x and 4x faster than scikit-learn and the state-of-the-art BH t-SNE implementation from daal4py, respectively, on a 32-core Intel(R) Icelake cloud instance.
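For context, a typical invocation of the BH t-SNE path in scikit-learn looks like the following; `method="barnes_hut"` and `angle` are real scikit-learn parameters, while the data shape is arbitrary. An accelerated drop-in such as Acc-t-SNE would target this same workflow.

```python
# Usage sketch: scikit-learn's TSNE runs the Barnes-Hut approximation for
# 2D/3D embeddings; `angle` is the BH accuracy/speed trade-off parameter.
import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(10_000, 50)  # high-dimensional input data
emb = TSNE(
    n_components=2,
    method="barnes_hut",  # the BH t-SNE algorithm discussed above
    angle=0.5,            # Barnes-Hut trade-off parameter
    n_jobs=-1,            # use all CPU cores
    init="pca",
    random_state=0,
).fit_transform(X)
print(emb.shape)  # (10000, 2)
```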
Targeted syntactic evaluations of language models ask whether models show stable preferences for syntactically acceptable content over minimal-pair unacceptable inputs. Most targeted syntactic evaluation datasets ask models to make these judgements with just a single context-free sentence as input. This does not match language models' training regime, in which input sentences are always highly contextualized by the surrounding corpus. This mismatch raises an important question: how robust are models' syntactic judgements in different contexts? In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality. We find that model judgements are generally robust when placed in randomly sampled linguistic contexts. However, they are substantially unstable for contexts containing syntactic structures matching those in the critical test content. Among all tested models (GPT-2 and five variants of OPT), we significantly improve models' judgements by providing contexts with matching syntactic structures, and conversely significantly worsen them using unacceptable contexts with matching but violated syntactic structures. This effect is amplified by the length of the context, except for unrelated inputs. We show that these changes in model performance are not explainable by simple features matching the context and the test inputs, such as lexical overlap and dependency overlap. This sensitivity to highly specific syntactic features of the context can only be explained by the models' implicit in-context learning abilities.
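A minimal-pair evaluation of this kind can be reproduced with a few lines of `transformers` code: score both sentences with GPT-2 and check the sign of the log-probability difference, with and without a prepended context. The example sentences and context below are illustrative, not items from the paper's dataset.

```python
# Sketch of a targeted syntactic evaluation: the model "passes" an item if it
# assigns higher log probability to the acceptable member of a minimal pair.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def logprob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=input_ids yields the mean token NLL; rescale to a sum.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

context = "The pictures that the artist painted were sold quickly. "  # assumed
good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."
for prefix in ("", context):  # same prefix for both, so sums are comparable
    g, b = logprob(prefix + good), logprob(prefix + bad)
    print(f"with_context={bool(prefix)}: good={g:.2f} bad={b:.2f} correct={g > b}")
```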
We introduce LaViLa, a new approach to learning video-language representations by leveraging Large Language Models (LLMs). We repurpose pre-trained LLMs to be conditioned on visual input, and finetune them to create automatic video narrators. Our auto-generated narrations offer a number of advantages, including dense coverage of long videos, better temporal synchronization of the visual information and text, and much higher diversity of text. The video-text embedding learned contrastively with these additional auto-generated narrations outperforms the previous state-of-the-art on multiple first-person and third-person video tasks, both in zero-shot and finetuned setups. Most notably, LaViLa obtains an absolute gain of 10.1% on EGTEA classification and 5.9% on Epic-Kitchens-100 multi-instance retrieval benchmarks. Furthermore, LaViLa trained with only half the narrations from the Ego4D dataset outperforms baseline models trained on the full set, and shows positive scaling behavior on increasing pre-training data and model size.
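The contrastive objective referred to here is typically a symmetric InfoNCE loss over a batch of paired clip and narration embeddings; the sketch below shows that generic form, with dimensions and temperature chosen for illustration rather than taken from LaViLa.

```python
# Schematic symmetric InfoNCE loss over (video, narration) embedding pairs.
import torch
import torch.nn.functional as F

def info_nce(video_emb, text_emb, temperature=0.07):
    """Matched pairs sit on the diagonal of the similarity matrix."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature          # (B, B) cosine similarities
    targets = torch.arange(v.shape[0])      # i-th clip matches i-th narration
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

loss = info_nce(torch.randn(32, 256), torch.randn(32, 256))
print(loss.item())
```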
Training Graph Neural Networks at scale, on graphs containing billions of vertices and edges, using minibatch sampling poses a key challenge: strong-scaling graphs and training examples results in lower compute and higher communication volume and potential performance loss. DistGNN-MB employs a novel Historical Embedding Cache combined with compute-communication overlap to address this challenge. On a 32-node (64-socket) cluster of $3^{rd}$ generation Intel Xeon Scalable Processors with 36 cores per socket, DistGNN-MB trains 3-layer GraphSAGE and GAT models on OGBN-Papers100M to convergence with epoch times of 2 seconds and 4.9 seconds, respectively, on 32 compute nodes. At this scale, DistGNN-MB trains GraphSAGE 5.2x faster than the widely-used DistDGL. DistGNN-MB trains GraphSAGE and GAT 10x and 17.2x faster, respectively, as compute nodes scale from 2 to 32.
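The following sketch illustrates the general idea of a historical embedding cache, reusing slightly stale embeddings for remote neighbors instead of recomputing and communicating them each step; the staleness bound and node-id keying are assumptions for illustration, not DistGNN-MB's actual data structure.

```python
# Conceptual historical embedding cache for minibatch GNN training.
import torch

class HistoricalEmbeddingCache:
    def __init__(self, num_nodes: int, dim: int, max_staleness: int = 3):
        self.emb = torch.zeros(num_nodes, dim)
        self.age = torch.full((num_nodes,), 10**9, dtype=torch.long)  # "never"
        self.max_staleness = max_staleness

    def lookup(self, node_ids: torch.Tensor, step: int):
        """Return cached embeddings plus a mask of entries fresh enough to use."""
        fresh = (step - self.age[node_ids]) <= self.max_staleness
        return self.emb[node_ids], fresh

    def update(self, node_ids: torch.Tensor, new_emb: torch.Tensor, step: int):
        self.emb[node_ids] = new_emb.detach()
        self.age[node_ids] = step

cache = HistoricalEmbeddingCache(num_nodes=1000, dim=64)
ids = torch.tensor([1, 2, 3])
cache.update(ids, torch.randn(3, 64), step=0)
embs, fresh = cache.lookup(ids, step=2)  # all fresh: staleness 2 <= 3
```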
Damage to the inferior frontal gyrus (Broca's area) can cause agrammatic aphasia wherein patients, although able to comprehend, lack the ability to form complete sentences. This inability leads to communication gaps which cause difficulties in their daily lives. The usage of assistive devices can help in mitigating these issues and enable the patients to communicate effectively. However, due to lack of large scale studies of linguistic deficits in aphasia, research on such assistive technology is relatively limited. In this work, we present two contributions that aim to re-initiate research and development in this field. Firstly, we propose a model that uses linguistic features from small scale studies on aphasia patients and generates large scale datasets of synthetic aphasic utterances from grammatically correct datasets. We show that the mean length of utterance, the noun/verb ratio, and the simple/complex sentence ratio of our synthetic datasets correspond to the reported features of aphasic speech. Further, we demonstrate how the synthetic datasets may be utilized to develop assistive devices for aphasia patients. The pre-trained T5 transformer is fine-tuned using the generated dataset to suggest 5 corrected sentences given an aphasic utterance as input. We evaluate the efficacy of the T5 model using the BLEU and cosine semantic similarity scores. Affirming results with BLEU score of 0.827/1.00 and semantic similarity of 0.904/1.00 were obtained. These results provide a strong foundation for the concept that a synthetic dataset based on small scale studies on aphasia can be used to develop effective assistive technology.
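A sketch of the described pipeline under assumed conventions (the `correct:` task prefix and the toy training pair are invented for illustration): fine-tune T5 on aphasic-to-corrected pairs, then beam-search 5 candidate corrections and score one with BLEU.

```python
# Sketch: one fine-tuning step and 5-candidate generation with T5, evaluated
# with sentence-level BLEU as in the described setup.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast
from nltk.translate.bleu_score import sentence_bleu

tok = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# One synthetic (aphasic -> corrected) pair; the prompt format is assumed.
src = tok("correct: want water drink", return_tensors="pt")
tgt = tok("I want to drink water.", return_tensors="pt").input_ids
loss = model(input_ids=src.input_ids,
             attention_mask=src.attention_mask, labels=tgt).loss
loss.backward()  # one illustrative training step (optimizer omitted)

# Suggest 5 corrected sentences for an aphasic utterance.
outs = model.generate(**src, num_beams=5, num_return_sequences=5,
                      max_length=32)
candidates = [tok.decode(o, skip_special_tokens=True) for o in outs]
ref = "I want to drink water .".split()
print(sentence_bleu([ref], candidates[0].split()))
```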
People rely on news to learn about what is happening around the world and to inform their daily lives. In today's world, where the spread of fake news is rampant, having a large-scale, high-quality source of authentic news articles, together with the categories they were published under, is valuable for learning the natural-language syntax and semantics of real news. As part of this work, we present a News Category Dataset containing around 200K news headlines from 2012 to 2018 obtained from HuffPost, along with useful metadata to enable various NLP tasks. In this paper, we also produce some novel insights from the dataset and describe various existing and potential applications of the dataset.
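A loading sketch for such a dataset; the file name and field names (`category`, `headline`, `date`) follow the dataset's common JSON-lines distribution and should be checked against the actual release.

```python
# Sketch: load the headlines and set up a headline-classification task.
import pandas as pd

df = pd.read_json("News_Category_Dataset_v2.json", lines=True)  # assumed name
print(df.shape)                          # ~200K rows, 2012-2018
print(df["category"].value_counts().head())

# e.g., restrict to a date range, then pair headlines with category labels
sub = df[(df["date"] >= "2012-01-01") & (df["date"] <= "2018-12-31")]
X, y = sub["headline"], sub["category"]
```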
We present a reinforcement-learning-based framework for automatically discovering patterns available from any initial configuration of a fat-robot swarm. In particular, we model the problems of collision-free gathering and mutual visibility in fat-robot swarms and discover patterns that solve them using our framework. We show that by shaping the reward signal based on certain constraints, such as mutual visibility and safe proximity, the robots can discover collision-free trajectories leading to well-formed gathering and visibility patterns.
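The reward-shaping idea can be made concrete with a toy single-robot reward combining the constraints mentioned; all weights, the robot radius, and the visibility test below are assumptions for illustration.

```python
# Toy shaped reward for one disc-shaped ("fat") robot: progress toward the
# gathering point, a safe-proximity penalty, and a mutual-visibility penalty.
import numpy as np

def shaped_reward(pos, others, goal, radius=0.5,
                  w_goal=1.0, w_safe=5.0, w_vis=2.0):
    r = -w_goal * np.linalg.norm(pos - goal)       # progress term
    for o in others:
        d = np.linalg.norm(pos - o)
        if d < 2 * radius:                         # bodies would overlap
            r -= w_safe * (2 * radius - d)         # safe-proximity penalty
    # Crude visibility term: the segment pos->o must clear every other robot.
    for i, o in enumerate(others):
        for j, blocker in enumerate(others):
            if j == i:
                continue
            seg = o - pos
            t = np.clip((blocker - pos) @ seg / (seg @ seg + 1e-9), 0, 1)
            if np.linalg.norm(pos + t * seg - blocker) < radius:
                r -= w_vis                         # line of sight blocked
    return r

print(shaped_reward(np.zeros(2),
                    [np.array([0.7, 0.0]), np.array([2.0, 0.0])],
                    goal=np.array([3.0, 0.0])))
```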
In this demo paper, we design and prototype RhythmEdge, a low-cost, deep-learning-based, contact-less system for regular HR (heart rate) monitoring applications. RhythmEdge improves on existing approaches by offering contact-less operation, real-time/offline capability, and inexpensive, readily available sensing components and computing devices. The RhythmEdge system is portable and can easily be deployed for reliable HR estimation in moderately controlled indoor or outdoor environments. RhythmEdge measures HR by detecting changes in blood volume from facial videos (remote photoplethysmography; rPPG) and provides instant assessment using off-the-shelf, commercially available, resource-constrained edge platforms and video cameras. We demonstrate the scalability, flexibility, and compatibility of RhythmEdge by deploying it on three resource-constrained platforms with different architectures (NVIDIA Jetson Nano, Google Coral Development Board, Raspberry Pi) and three heterogeneous cameras (web camera, action camera, and DSLR). RhythmEdge further stores longitudinal cardiovascular information and provides instant notifications to the user. We thoroughly test the prototype's stability, latency, and feasibility on the three edge computing platforms by profiling its runtime, memory, and power usage.
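The signal-processing core that rPPG systems like this build on can be sketched in a few lines: average the green channel over a face region per frame and read the heart rate off the dominant frequency in the plausible pulse band. A deployed system adds face tracking, detrending, and a learned model; the band limits here are conventional assumptions.

```python
# Bare-bones rPPG: heart rate from the dominant frequency of the per-frame
# mean green intensity over a face region of interest.
import numpy as np

def estimate_hr(green_means: np.ndarray, fps: float) -> float:
    """green_means: per-frame mean green intensity over the face ROI."""
    x = green_means - green_means.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)  # 42-240 beats per minute
    return 60.0 * freqs[band][np.argmax(power[band])]

# Synthetic 10 s clip at 30 fps with a 1.2 Hz (72 bpm) pulse plus noise.
fps = 30.0
t = np.arange(300) / fps
sig = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(300)
print(estimate_hr(sig, fps))  # ~72.0
```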
Mobile manipulator throwing is a promising approach for increasing the flexibility and efficiency of dynamic manipulation in factories. Its main challenge is to efficiently plan feasible throws under a wide range of task specifications. We analyze the throwing problem and show that it can be reduced to a simpler planar problem, greatly lowering the computational cost. Using data analysis and machine learning, we build models of the object's inverted flight dynamics and the robot's kinematic feasibility that can answer throwing-motion queries within 1 ms for a given target position. Owing to the computational efficiency of our method, we show that the system adapts when disturbed during task execution, replanning on the fly to find an alternative throw rather than sticking to the original plan. Code is available at: https://github.com/liuuyangdh/mobile-throwing
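The payoff of the planar reduction can be seen with a drag-free ballistic model: once the throw is restricted to the vertical plane through the target, the release speed for a chosen release angle follows in closed form. This is a textbook simplification for illustration, not the paper's learned flight model.

```python
# Planar, drag-free ballistic release-speed computation.
import numpy as np

def release_speed(dx: float, dz: float, angle: float, g: float = 9.81):
    """Speed needed to hit a target dx ahead and dz above the release point,
    throwing at `angle` radians above horizontal (planar, drag-free)."""
    denom = 2 * np.cos(angle) ** 2 * (dx * np.tan(angle) - dz)
    if denom <= 0:
        return None  # target unreachable at this release angle
    return np.sqrt(g * dx**2 / denom)

# Target 2 m ahead and 0.5 m below the release point, 45-degree release.
v = release_speed(2.0, -0.5, np.deg2rad(45))
print(f"release speed: {v:.2f} m/s")  # ~3.96 m/s
```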