Estimating the divergence between two high-dimensional distributions from a finite number of samples is an important problem in various fields such as machine learning. Although previous methods perform well on data of moderate dimensionality, their accuracy starts to degrade in settings with more than 100 binary variables. We therefore propose using decomposable models for estimating divergences in high-dimensional data. These allow us to factorize the estimated density of a high-dimensional distribution into a product of lower-dimensional functions. We conduct formal and experimental analyses exploring the properties of using decomposable models in the context of divergence estimation. To this end, we show empirically that estimating the Kullback-Leibler divergence using decomposable models obtained from a maximum likelihood estimator outperforms existing divergence estimation methods in situations where a useful decomposable model can be learned from the available data.
translated by Google Translate
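As a minimal illustration of the factorization idea above, the sketch below uses the simplest possible decomposable model, fully independent Bernoulli factors, so that the joint KL divergence decomposes into a sum of per-variable terms, each estimated by maximum likelihood (the sample mean). This is only the degenerate case; the abstract's approach covers richer decomposable structures.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-9):
    """KL divergence between two Bernoulli distributions with means p and q."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def kl_independent_model(samples_p, samples_q):
    """Plug-in KL estimate under a fully factorized (independent) model.

    samples_p, samples_q: (n, d) arrays of binary samples.
    Under independence, the joint KL decomposes into a sum of
    per-variable Bernoulli KLs, each estimated by its MLE (the mean).
    """
    mu_p = samples_p.mean(axis=0)
    mu_q = samples_q.mean(axis=0)
    return float(np.sum(bernoulli_kl(mu_p, mu_q)))

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.7, size=(5000, 100))  # samples from p
y = rng.binomial(1, 0.5, size=(5000, 100))  # samples from q
est = kl_independent_model(x, y)
# Analytic value: 100 * KL(Bern(0.7) || Bern(0.5)) ≈ 8.23 nats
```

The same plug-in idea extends to tree- or junction-tree-structured models, where the density factorizes over cliques instead of single variables.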
Speech distortion is a long-standing problem that degrades the performance of supervisedly trained speech processing models. It is high time to enhance the robustness of speech processing models so that they perform well on distorted speech without harming their original performance on clean speech. In this work, we propose to improve the robustness of speech processing models via domain adversarial training (DAT). We conduct experiments on five different speech processing tasks based on the SUPERB framework. Since we do not always know the distortion types present in the speech data, we analyze both binary-domain and multi-domain settings, where the former treats all distorted speech as a single domain, while the latter treats different distortions as different domains. In contrast to supervised training methods, we obtain promising results in target domains where the speech data are corrupted by different distortions, including new, unseen distortions introduced during testing.
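The core building block of domain adversarial training is the gradient reversal layer (GRL): identity in the forward pass, gradient negation (scaled by a trade-off weight, here called `lam`) in the backward pass, so the feature extractor learns to confuse the domain classifier. A minimal, framework-free sketch, not the abstract's actual implementation (which would typically be a `torch.autograd.Function` or equivalent):

```python
import numpy as np

def grl_forward(x):
    """Forward pass: identity; features flow unchanged to the domain classifier."""
    return x

def grl_backward(grad_output, lam=1.0):
    """Backward pass: flip and scale the gradient so the feature extractor
    is pushed to make features domain-invariant."""
    return -lam * grad_output

features = np.array([0.5, -1.2, 3.0])
assert np.allclose(grl_forward(features), features)  # unchanged going forward
grad = np.ones(3)
reversed_grad = grl_backward(grad, lam=0.5)  # → [-0.5, -0.5, -0.5]
```

In the binary-domain setting described above, the domain classifier behind the GRL distinguishes clean vs. distorted speech; in the multi-domain setting it predicts the specific distortion type.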
Building systems that achieve a deeper understanding of language is one of the central goals of natural language processing (NLP). Towards this goal, recent works have begun to train language models on narrative datasets which require extracting the most critical information by integrating across long contexts. However, it is still an open question whether these models are learning a deeper understanding of the text, or if the models are simply learning a heuristic to complete the task. This work investigates this further by turning to the one language processing system that truly understands complex language: the human brain. We show that training language models for deeper narrative understanding results in richer representations that have improved alignment to human brain activity. We further find that the improvements in brain alignment are larger for character names than for other discourse features, which indicates that these models are learning important narrative elements. Taken together, these results suggest that this type of training can indeed lead to deeper language understanding. These findings have consequences both for cognitive neuroscience by revealing some of the significant factors behind brain-NLP alignment, and for NLP by highlighting that understanding of long-range context can be improved beyond language modeling.
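Brain alignment in this literature is commonly measured with a linear encoding model: fit a ridge regression from model representations to recorded brain activity, then score held-out predictions by per-voxel Pearson correlation. The sketch below illustrates that standard recipe on synthetic data; it is an assumption-laden stand-in, not the abstract's exact pipeline.

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def brain_alignment(feat_tr, brain_tr, feat_te, brain_te, alpha=1.0):
    """Mean per-voxel Pearson correlation between predicted and held-out activity."""
    W = ridge_fit(feat_tr, brain_tr, alpha)
    pred = feat_te @ W
    pred_c = pred - pred.mean(axis=0)
    true_c = brain_te - brain_te.mean(axis=0)
    r = (pred_c * true_c).sum(axis=0) / (
        np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0) + 1e-9)
    return float(r.mean())

# Synthetic "model features" and "brain responses" linked by a linear map.
rng = np.random.default_rng(1)
W_true = rng.normal(size=(16, 8))
X_tr, X_te = rng.normal(size=(200, 16)), rng.normal(size=(50, 16))
Y_tr = X_tr @ W_true + 0.1 * rng.normal(size=(200, 8))
Y_te = X_te @ W_true + 0.1 * rng.normal(size=(50, 8))
score = brain_alignment(X_tr, Y_tr, X_te, Y_te)  # close to 1 here by construction
```

Comparing such scores for two language models (e.g., before and after narrative training) is one way to quantify the "improved alignment" the abstract reports.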
Pre-trained language models have achieved promising success in code retrieval tasks, where a natural language documentation query is given to find the most relevant existing code snippet. However, existing models focus only on optimizing documentation-code pairs by embedding them into a latent space, without incorporating external knowledge. In this paper, we propose a generation-augmented query expansion framework. Inspired by the human retrieval process of sketching an answer before searching, we utilize a powerful code generation model to benefit the code retrieval task. Specifically, we demonstrate that rather than merely retrieving the target code snippet according to the documentation query, it is helpful to augment the documentation query with its generation counterpart: code snippets generated by the code generation model. To the best of our knowledge, this is the first attempt to leverage a code generation model to enhance the code retrieval task. We achieve new state-of-the-art results on the CodeSearchNet benchmark and surpass the baselines significantly.
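The expansion step above can be sketched in miniature: concatenate the query with its generated counterpart, then rank the corpus against the expanded query. For self-containment, the "generated" snippet is hard-coded (a real system would call a code generation model) and the retriever is a plain bag-of-words cosine similarity rather than a learned embedding; both substitutions are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def bow_vector(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(text.split())
    return np.array([counts[w] for w in vocab], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def retrieve(query, generated, corpus):
    """Rank corpus snippets against the query concatenated with its
    generated counterpart (the expansion idea, in miniature)."""
    expanded = query + " " + generated
    vocab = sorted({w for t in [expanded] + corpus for w in t.split()})
    q = bow_vector(expanded, vocab)
    scores = [cosine(q, bow_vector(s, vocab)) for s in corpus]
    return int(np.argmax(scores))

corpus = ["def add ( a , b ) : return a + b",
          "def read_file ( path ) : return open ( path ) . read ( )"]
generated = "def add ( x , y ) : return x + y"  # stand-in for model output
best = retrieve("sum two numbers", generated, corpus)  # → 0
```

The natural-language query alone shares no tokens with either snippet; the generated code bridges the vocabulary gap, which is exactly the benefit the framework exploits.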
Traffic accident prediction in driving videos aims to provide an early warning of accident occurrence and to support the decision making of safe driving systems. Previous works usually concentrate on the spatial-temporal correlation of object-level context, but they do not fit the inherent long-tailed data distribution well and are vulnerable to severe environmental changes. In this work, we propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition of text descriptions of the visual observation and of driver attention to facilitate model training. In particular, the text description provides dense semantic guidance for the primary context of the traffic scene, while the driver attention provides traction for focusing on the critical region that correlates closely with safe driving. CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and a driver attention guided accident prediction module. We leverage the attention mechanism in these modules to explore the core semantic cues for accident prediction. In order to train CAP, we extend an existing self-collected DADA-2000 dataset (with annotated driver attention for each frame) with further factual text descriptions of the visual observations before the accidents. Besides, we construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames (named CAP-DATA) together with labeled fact-effect-reason-introspection descriptions and temporal accident frame labels. Extensive experiments validate the superiority of CAP over state-of-the-art approaches. The code, CAP-DATA, and all results will be released in \url{https://github.com/JWFanggit/LOTVS-CAP}.
Visual odometry is crucial for many robotic tasks such as autonomous exploration and path planning. Despite much progress, existing methods are still not robust enough in environments with dynamic illumination. In this paper, we present AirVO, an illumination-robust and accurate stereo visual odometry system based on point and line features. To be robust to illumination variation, we introduce a learning-based feature extraction and matching method and design a novel VO pipeline, including feature tracking, triangulation, key-frame selection, and graph optimization. We also employ long line features in the environment to improve the accuracy of the system. Different from the traditional line processing pipelines in visual odometry systems, we propose an illumination-robust line tracking method, where point feature tracking and the distribution of point and line features are utilized to match lines. In the experiments, the proposed system is extensively evaluated in environments with dynamic illumination, and the results show that it achieves superior performance to state-of-the-art algorithms.
Solving real-world sequential manipulation tasks requires robots to have a repertoire of skills applicable to a wide range of circumstances. To acquire such skills using data-driven approaches, we need massive and diverse training data which is often labor-intensive and non-trivial to collect and curate. In this work, we introduce Active Task Randomization (ATR), an approach that learns visuomotor skills for sequential manipulation by automatically creating feasible and novel tasks in simulation. During training, our approach procedurally generates tasks using a graph-based task parameterization. To adaptively estimate the feasibility and novelty of sampled tasks, we develop a relational neural network that maps each task parameter into a compact embedding. We demonstrate that our approach can automatically create suitable tasks for efficiently training the skill policies to handle diverse scenarios with a variety of objects. We evaluate our method on simulated and real-world sequential manipulation tasks by composing the learned skills using a task planner. Compared to baseline methods, the skills learned using our approach consistently achieve better success rates.
Conventional approaches to identifying depression cannot scale, and public awareness of mental health is limited, especially in developing countries. Recent studies make it evident that social media has the potential to complement mental health screening. A massive amount of first-person narrative posts, arranged in chronological order, can provide insight into people's thoughts, feelings, behaviors, and moods over time, enabling a better understanding of depression symptoms reflected in the online space. In this paper, we propose SERCNN, which improves the user representation by (1) stacking two pretrained embeddings from different domains and (2) reintroducing the embedding context into the MLP classifier. Our SERCNN outperforms state-of-the-art methods and other baselines, achieving 93.7% accuracy in a 5-fold cross-validation setting. Since not all users share the same level of online activity, we introduce the concept of a fixed observation window, which quantifies the observation period as a predefined number of posts. SERCNN performs exceptionally well, with accuracy comparable to that of the BERT model while having 98% fewer parameters. Our findings open up a promising direction for detecting depression on social media with fewer inferred posts, toward a cost-effective and timely intervention solution. We hope that our work can bring this research area closer to real-world adoption in existing clinical practice.
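The fixed observation window above is straightforward to state precisely: every user is evaluated on the same budget of posts, regardless of their activity level. A minimal sketch (function name and post format are hypothetical, chosen for illustration):

```python
def fixed_observation_window(posts, window_size):
    """Keep only the earliest `window_size` posts of a user's chronologically
    ordered history, so all users share the same observation budget."""
    return posts[:window_size]

# A hypothetical highly active user with 500 posts, in chronological order.
user_posts = [f"post_{i}" for i in range(500)]
window = fixed_observation_window(user_posts, 10)  # → 10 earliest posts
```

Capping the input this way is what allows the abstract's claim about detection "with fewer inferred posts" to be tested under a controlled, comparable setting.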
Through a series of federal initiatives and orders, the U.S. Government has been making a concerted effort to ensure American leadership in AI. These broad strategy documents have influenced organizations such as the United States Department of the Air Force (DAF). The DAF-MIT AI Accelerator is an initiative between the DAF and MIT to bridge the gap between AI researchers and DAF mission requirements. Several projects supported by the DAF-MIT AI Accelerator are developing public challenge problems that address numerous federal AI research priorities. These challenges target priorities by making large AI-ready datasets publicly available, incentivizing open-source solutions, and creating a demand signal for dual-use technologies that can stimulate further research. In this paper, we describe these public challenges being developed and how their application contributes to scientific advances.
We study simple methods for out-of-distribution (OOD) image detection that are compatible with any already trained classifier, relying only on its predictions or learned representations. Evaluating the OOD detection performance of various methods when used with ResNet-50 and Swin Transformer models, we find that methods that only consider the learned representations can easily outperform methods that consider the model's predictions. Based on our analysis, we advocate a dead-simple approach that has been overlooked in other studies: simply flag as OOD those images whose average distance to their nearest neighbors is large (in the representation space of an image classifier trained on the in-distribution data).
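The nearest-neighbor scoring rule above is simple enough to sketch end to end. The snippet below computes, for each test representation, the average Euclidean distance to its k nearest neighbors among in-distribution training representations; a large score flags the image as OOD. Synthetic Gaussian features stand in for real classifier representations, and the brute-force distance computation is for clarity only (a real system would use an approximate nearest-neighbor index).

```python
import numpy as np

def knn_ood_scores(train_feats, test_feats, k=5):
    """OOD score = average Euclidean distance to the k nearest neighbors
    among in-distribution training features. Larger score => more likely OOD."""
    # Brute-force pairwise distances, shape (n_test, n_train).
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, :k]  # k smallest distances per test point
    return knn.mean(axis=1)

rng = np.random.default_rng(0)
in_dist = rng.normal(0.0, 1.0, size=(500, 32))  # ID training representations
id_test = rng.normal(0.0, 1.0, size=(20, 32))   # held-out ID samples
ood_test = rng.normal(6.0, 1.0, size=(20, 32))  # a shifted (OOD) cluster
scores_id = knn_ood_scores(in_dist, id_test)
scores_ood = knn_ood_scores(in_dist, ood_test)
# OOD samples sit far from every training point, so their scores are larger.
```

Thresholding these scores (e.g., at a percentile of the ID validation scores) turns them into a binary OOD detector.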