Modern autonomous driving systems are characterized by modular tasks performed in sequential order, i.e., perception, prediction, and planning. As sensors and hardware improve, there is a growing trend toward devising systems that can perform a wide diversity of tasks to achieve higher-level intelligence. Contemporary approaches resort either to deploying standalone models for individual tasks or to designing a multi-task paradigm with separate heads. These may suffer from accumulated errors or negative transfer effects. Instead, we argue that a favorable algorithm framework should be devised and optimized in pursuit of the ultimate goal, i.e., planning for the self-driving car. Oriented toward this goal, we revisit the key components within perception and prediction. We analyze each module and prioritize the tasks hierarchically, such that all of them contribute to planning (the goal). To this end, we introduce Unified Autonomous Driving (UniAD), the first comprehensive framework to date that incorporates full-stack driving tasks in one network. It is carefully devised to leverage the advantages of each module and to provide complementary feature abstractions for agent interaction from a global perspective. Tasks communicate through a unified query design that lets them facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. Extensive ablations prove that this philosophy surpasses previous state-of-the-art methods by a large margin in all aspects. The full suite of code and models will be made available to facilitate future research in the community.
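The goal-oriented module ordering can be illustrated with a deliberately tiny sketch (hypothetical module and variable names; UniAD's actual stages are Transformer networks and far richer): each stage consumes the previous stage's queries and refines them toward the final planning output.

```python
import numpy as np

def perception(image_feats, queries):
    """Refine agent queries against image features (toy soft attention)."""
    attn = queries @ image_feats.T
    w = np.exp(attn - attn.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return queries + w @ image_feats

def prediction(agent_queries):
    """Toy motion module: one 2-D future offset per refined agent query."""
    return np.tanh(agent_queries[:, :2])

def planning(futures):
    """Toy planner: an ego waypoint that reacts to all predicted futures."""
    return -futures.mean(axis=0)

rng = np.random.default_rng(2)
image_feats = rng.normal(size=(32, 4))   # stand-in for BEV features
agent_queries = rng.normal(size=(5, 4))  # one query per tracked agent
refined = perception(image_feats, agent_queries)
futures = prediction(refined)
ego_waypoint = planning(futures)
```

The point of the sketch is the data flow: every upstream output exists only to feed the planner at the end of the chain.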
End-to-end Speech Translation (E2E ST) aims to translate source speech into the target language without generating an intermediate transcript. However, existing approaches to E2E ST degrade considerably when only limited ST data are available. We observe that an ST model's performance strongly correlates with the similarity between its speech and transcript embeddings. In this paper, we propose Word-Aligned COntrastive learning (WACO), a novel method for few-shot speech-to-text translation. Our key idea is to bridge word-level representations of both modalities via contrastive learning. We evaluate WACO and other methods on the MuST-C dataset, a widely used ST benchmark. Our experiments demonstrate that WACO outperforms the best baseline methods by 0.7-8.5 BLEU points with only 1 hour of parallel data. Code is available at https://anonymous.4open.science/r/WACO .
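The word-level bridging idea can be sketched with an InfoNCE-style objective (hypothetical shapes and names; the paper's actual model and loss details may differ): paired speech and text word embeddings are pulled together, all other pairs in the batch are pushed apart.

```python
import numpy as np

def info_nce_loss(speech_emb: np.ndarray, text_emb: np.ndarray,
                  temperature: float = 0.1) -> float:
    """Contrastive loss: each speech word embedding should be closest to
    its paired text word embedding among all words in the batch."""
    # L2-normalize so dot products are cosine similarities.
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = s @ t.T / temperature          # (N, N) similarity matrix
    # Softmax cross-entropy with the diagonal as the positive pairs.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

# Perfectly aligned pairs should give a lower loss than shuffled ones.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
aligned = info_nce_loss(emb, emb + 0.01 * rng.normal(size=(8, 16)))
shuffled = info_nce_loss(emb, np.roll(emb, 1, axis=0))
```

Minimizing such a loss is what drives the speech-transcript embedding similarity that the abstract correlates with ST performance.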
How can we extend a pre-trained model to many language understanding tasks without labeled or additional unlabeled data? Pre-trained language models (PLMs) have been effective for a wide range of NLP tasks. However, existing approaches either require fine-tuning on downstream labeled datasets or manually constructing proper prompts. In this paper, we propose nonparametric prompting PLM (NPPrompt) for fully zero-shot language understanding. Unlike previous methods, NPPrompt uses only pre-trained language models and requires neither labeled data nor an additional raw corpus for further fine-tuning, nor does it rely on humans to construct a comprehensive set of prompt label words. We evaluate NPPrompt against previous major few-shot and zero-shot learning methods on diverse NLP tasks, including text classification, text entailment, similar text retrieval, and paraphrasing. Experimental results demonstrate that NPPrompt outperforms the previous best fully zero-shot method by large margins, with absolute gains of 12.8% in accuracy on text classification and 18.9% on the GLUE benchmark.
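The nonparametric idea can be sketched as follows (toy embeddings and names; the real method uses the PLM's own embedding matrix and aggregates its mask-filling scores): instead of hand-picked label words, the verbalizer for each category is the set of vocabulary words nearest to the category name in embedding space.

```python
import numpy as np

def nearest_label_words(category_vec, vocab_vecs, vocab, k=2):
    """Top-k vocabulary words by cosine similarity to the category name."""
    v = category_vec / np.linalg.norm(category_vec)
    m = vocab_vecs / np.linalg.norm(vocab_vecs, axis=1, keepdims=True)
    sims = m @ v
    top = np.argsort(-sims)[:k]
    return [vocab[i] for i in top]

# Toy 2-D embedding table: the first axis loosely encodes "sports",
# the second "finance" (purely illustrative values).
vocab = ["football", "goal", "stocks", "market"]
vocab_vecs = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
sports_words = nearest_label_words(np.array([1.0, 0.0]), vocab_vecs, vocab)
```

Because the verbalizer is derived from the embedding space itself, no human needs to enumerate label words per task.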
Understanding objects is a central building block of artificial intelligence, especially for embodied AI. Even though object recognition excels with deep learning, current machines still struggle to learn higher-level knowledge, e.g., what attributes an object has and what we can do with an object. In this work, we propose a challenging Object Concept Learning (OCL) task to push the envelope of object understanding. It requires machines to reason out object affordances and simultaneously give the reason: what attributes make an object possess these affordances. To support OCL, we build a densely annotated knowledge base including extensive labels for three levels of object concept (category, attribute, affordance), along with the causal relations among the three levels. By analyzing the causal structure of OCL, we present a baseline, Object Concept Reasoning Network (OCRN). It leverages causal intervention and concept instantiation to infer the three levels following their causal relations. In experiments, OCRN effectively infers object knowledge while following the causalities well. Our data and code are available at https://mvig-rhos.com/ocl.
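A toy sketch of the attribute-to-affordance reasoning that OCL asks for (hypothetical attributes and rules; OCRN learns these relations rather than hard-coding them): an object's affordances follow causally from the attributes it possesses.

```python
def infer_affordances(attributes: set, rules: dict) -> set:
    """An affordance holds if all attributes it causally depends on hold."""
    return {aff for aff, needed in rules.items() if needed <= attributes}

# Illustrative causal rules: affordance -> required attributes.
rules = {
    "sit_on": {"flat_surface", "sturdy"},
    "drink_from": {"hollow", "holds_liquid"},
}
chair = {"flat_surface", "sturdy", "has_legs"}
cup = {"hollow", "holds_liquid", "small"}
chair_affordances = infer_affordances(chair, rules)
cup_affordances = infer_affordances(cup, rules)
```

The answer to "what can we do with this object" then comes with its reason attached: the attribute set that licensed each affordance.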
Modern supervised neural network models require a large amount of manually labeled data, which makes the construction of domain-specific knowledge graphs time-consuming and labor-intensive. In parallel, although there has been much research on named entity recognition and relation extraction based on distantly supervised learning, constructing a domain-specific knowledge graph from large collections of textual data without manual annotation remains an urgent problem to be solved. In response, we propose an integrated framework for adapting and re-learning knowledge graphs from a coarse domain (biomedical) to a finer-grained domain (oncology). In this framework, we apply distant supervision to cross-domain knowledge graph adaptation. Consequently, no manual data annotation is required to train the model. We introduce a novel iterative training strategy to facilitate the discovery of domain-specific named entities and triples. Experimental results indicate that the proposed framework can perform domain adaptation and knowledge graph construction efficiently.
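The iterative distant-supervision loop can be sketched with a toy example (hypothetical data and function names; the framework's actual components are not specified at this level of detail): a seed dictionary from the coarse domain labels the corpus, newly discovered entities are added back, and the corpus is relabeled in the next round.

```python
def distant_label(corpus, dictionary):
    """Label every sentence with the dictionary entities it mentions."""
    return [{e for e in dictionary if e in sent} for sent in corpus]

def discover(labeled, corpus):
    """Stand-in for the trained model: here, any capitalized token that
    co-occurs with a known entity is proposed as a new domain entity."""
    found = set()
    for sent, ents in zip(corpus, labeled):
        if ents:
            found |= {tok for tok in sent.split() if tok[:1].isupper()}
    return found

def iterate(corpus, seed, rounds=2):
    dictionary = set(seed)
    for _ in range(rounds):
        labeled = distant_label(corpus, dictionary)
        dictionary |= discover(labeled, corpus)
    return dictionary

corpus = ["EGFR mutations drive NSCLC", "Osimertinib targets EGFR",
          "weather is nice today"]
entities = iterate(corpus, seed={"EGFR"})
```

In the real framework the `discover` step is a trained extractor rather than a capitalization heuristic, but the expand-and-relabel loop is the same shape.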
Searching in denied environments is challenging for swarm robots, since no assistance from GNSS, mapping, data sharing, or central processing is available. However, cooperating with olfactory and auditory cues, as animals do, may be an important way to improve swarm cooperation. In this paper, an Olfactory-Auditory-based Bug algorithm (OA-BUG) is proposed for a swarm of autonomous robots to explore denied environments. A simulation environment was built to measure the performance of OA-BUG. The coverage of search tasks using OA-BUG can reach 96.93%, a maximum improvement of 40.55% compared with the similar algorithm SGBA. In addition, experiments were conducted on real swarm robots to prove the validity of OA-BUG. The results show that OA-BUG can improve the performance of swarm robots in denied environments.
RNA structure determination and prediction can promote RNA-targeted drug development and the design of engineerable synthetic elements. However, due to the inherent structural flexibility of RNA, all three mainstream structure determination methods (X-ray crystallography, NMR, and cryo-EM) encounter challenges when resolving RNA structures, which leads to a scarcity of solved RNA structures. Computational prediction methods serve as a complement to experimental techniques. However, none of the de novo methods are based on deep learning, because too few solved structures are available. Instead, most of them adopt time-consuming sampling strategies, and their performance seems to have reached a plateau. In this work, we develop the first end-to-end deep learning approach, E2Efold-3D, to accurately perform de novo RNA structure prediction. Several novel components are proposed to overcome the data scarcity, such as a fully differentiable end-to-end pipeline, secondary-structure-assisted self-distillation, and a parameter-efficient backbone formulation. Such designs are validated on an independent, non-overlapping RNA puzzle test dataset and achieve an average sub-4 Å root-mean-square deviation, demonstrating superior performance compared to state-of-the-art methods. Interestingly, it also achieves promising results when predicting RNA complex structures, a feat that no previous systems could accomplish. When E2Efold-3D is coupled with experimental techniques, the RNA structure prediction field can be greatly advanced.
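The sub-4 Å figure above is a root-mean-square deviation (RMSD) between predicted and solved 3D coordinates. A minimal sketch of RMSD with optimal rigid superposition via the Kabsch algorithm (toy coordinates; E2Efold-3D itself is a learned predictor and is not reproduced here):

```python
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between point sets P and Q (N x 3) after optimal rigid alignment."""
    P = P - P.mean(axis=0)                   # remove translation
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)        # covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))       # avoid an improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

rng = np.random.default_rng(1)
coords = rng.normal(size=(20, 3))
# A rotated copy of the same structure should have near-zero RMSD.
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
rmsd_same = kabsch_rmsd(coords @ rot.T, coords)
rmsd_diff = kabsch_rmsd(coords, rng.normal(size=(20, 3)))
```

Superposition matters: without the Kabsch alignment, a perfectly predicted structure in a different global orientation would report a spuriously large RMSD.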
Training speech translation (ST) models requires large and high-quality datasets. MuST-C is one of the most widely used ST benchmark datasets. It contains around 400 hours of speech-transcript-translation data for each of eight translation directions. The dataset passed several quality-control filters during its creation. However, we find that MuST-C has three major quality issues: audio-text misalignment, inaccurate translations, and unnecessary speaker names. What are the impacts of these data quality issues on model development and evaluation? In this paper, we propose an automatic method to fix or filter the above quality issues, taking English-German (En-De) translation as an example. Our experiments show that ST models perform better on clean test sets, and that the ranking of the evaluated models remains consistent across different test sets. In addition, simply removing misaligned data points from the training set does not lead to a better ST model.
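One automatic filter of the kind described above can be sketched as follows (hypothetical thresholds and field names; the paper's actual detection method is not reproduced here): utterances whose speaking rate falls far outside a plausible range are flagged as likely audio-text misalignments.

```python
def flag_misaligned(segments, min_cps=4.0, max_cps=30.0):
    """Return indices of segments whose characters-per-second ratio is
    implausible for natural speech."""
    flagged = []
    for i, seg in enumerate(segments):
        cps = len(seg["transcript"]) / seg["duration"]
        if not (min_cps <= cps <= max_cps):
            flagged.append(i)
    return flagged

segments = [
    {"transcript": "Hello, how are you today?", "duration": 1.8},  # plausible
    {"transcript": "Hi", "duration": 9.0},                         # too slow
    {"transcript": "A very long sentence read impossibly fast " * 4,
     "duration": 1.0},                                             # too fast
]
bad = flag_misaligned(segments)
```

A filter like this only finds gross misalignments; subtler issues such as inaccurate translations need model-based checks.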
Parotid tumors account for approximately 2% to 10% of head and neck tumors. Preoperative tumor localization, differential diagnosis, and the subsequent selection of an appropriate treatment for parotid tumors are therefore important. However, the relative rarity of these tumors and their highly dispersed tissue types leave an unmet need for fine-grained differential diagnosis of such tumor lesions based on preoperative radiomics. Recently, deep learning methods have developed rapidly, and Transformers in particular have beaten traditional convolutional neural networks in computer vision, with many new Transformer-based networks proposed for computer vision tasks. In this study, multi-center, multimodal MRI images were collected. The Transformer-based Swin-Unet was used. MRI images of the STIR, T1, and T2 modalities were combined into three-channel data to train the network. We achieved segmentation of the regions of interest of the parotid gland and tumors. On the test set, the model's DSC was 88.63%, MPA was 99.31%, MIoU was 83.99%, and HD was 3.04. A series of comparison experiments was then designed in this paper to further validate the segmentation performance of the algorithm.
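The DSC and MIoU figures above are standard overlap metrics for segmentation. A minimal sketch on binary masks (toy data; the study's Swin-Unet model itself is not reproduced here):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True                  # 16-pixel square ground truth
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 3:7] = True                # same square shifted right by one
d = dice(pred, gt)                   # 2*12 / (16+16) = 0.75
j = iou(pred, gt)                    # 12 / 20 = 0.6
```

Note that Dice is always at least as large as IoU on the same masks, which is why the reported DSC (88.63%) exceeds the MIoU (83.99%).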
This paper investigates end-user acceptance of last-mile deliveries carried out by autonomous vehicles within the United States. A total of 296 participants were presented with information about the technology and were then asked to complete a questionnaire about their perceptions in order to gauge their behavioral intention concerning acceptance. Structural equation modeling of the partial least squares flavor (PLS-SEM) was employed to analyze the collected data. The results indicate that the perceived usefulness of the technology plays the largest role in end-user acceptance decisions, followed by the influence of others, and then by the enjoyment gained from interacting with the technology. In addition, perceived risk in using autonomous delivery vehicles for last-mile delivery led to reduced acceptance. However, most participants did not consider using the technology to be risky. The paper concludes by summarizing the implications of our findings for various stakeholders and proposing next steps in this research area.