Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
This study focuses on embodied agents that can follow natural language instructions to complete complex tasks in a visually-perceived environment. Existing methods rely on a large amount of (instruction, gold trajectory) pairs to learn a good policy. The high data cost and poor sample efficiency prevent the development of versatile agents that are capable of many tasks and can learn new tasks quickly. In this work, we propose a novel method, LLM-Planner, that harnesses the power of large language models (LLMs) such as GPT-3 to do few-shot planning for embodied agents. We further propose a simple but effective way to enhance LLMs with physical grounding to generate plans that are grounded in the current environment. Experiments on the ALFRED dataset show that our method can achieve very competitive few-shot performance, even outperforming several recent baselines trained with the full training data, despite using less than 0.5% of paired training data. Existing methods can barely complete any task successfully under the same few-shot setting. Our work opens the door for developing versatile and sample-efficient embodied agents that can quickly learn many tasks.
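As a rough illustration of the few-shot planning recipe described above, the sketch below builds a prompt from (instruction, plan) exemplars plus the agent's currently visible objects as physical grounding, then parses the completion into high-level plan steps. The exemplar, the `call_llm` stub, and all names are illustrative assumptions, not LLM-Planner's actual implementation (which queries GPT-3):

```python
# Illustrative sketch of few-shot, grounded LLM planning.
# `call_llm` is a stub standing in for a real LLM API call.

FEW_SHOT_EXEMPLARS = [
    ("Put a washed apple in the fridge.",
     ["Navigate sink", "Pickup apple", "ToggleOn faucet", "ToggleOff faucet",
      "Navigate fridge", "Open fridge", "Put apple fridge"]),
]

def build_prompt(instruction, visible_objects):
    """Compose a prompt from exemplars plus the current, physically
    grounded context (the objects the agent currently sees)."""
    lines = []
    for inst, plan_steps in FEW_SHOT_EXEMPLARS:
        lines.append(f"Task: {inst}\nPlan: {', '.join(plan_steps)}")
    lines.append(f"Task: {instruction}")
    lines.append(f"Visible objects: {', '.join(visible_objects)}")  # grounding
    lines.append("Plan:")
    return "\n".join(lines)

def call_llm(prompt):
    # Stub: a real system would send `prompt` to an LLM completion API.
    return "Navigate fridge, Open fridge"

def plan(instruction, visible_objects):
    """Return a high-level plan as a list of step strings."""
    completion = call_llm(build_prompt(instruction, visible_objects))
    return [step.strip() for step in completion.split(",")]
```

The plan steps would then be handed to a low-level controller that executes each subgoal in the environment.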
Accurately segmenting teeth and identifying the corresponding anatomical landmarks on dental mesh models is essential in computer-aided orthodontic treatment. Performing these two tasks manually is time-consuming, tedious, and, more importantly, highly dependent on the orthodontist's experience due to the abnormality and large-scale variance of patients' teeth. Some machine-learning-based methods have been designed and applied in the orthodontic field to automatically segment dental meshes (e.g., intraoral scans). In contrast, the number of studies on tooth landmark localization is still limited. This paper proposes a two-stage framework based on mesh deep learning (called TS-MDL) for joint tooth labeling and landmark identification on raw intraoral scans. Our TS-MDL first adopts an end-to-end iMeshSegNet method (i.e., a variant of the existing MeshSegNet with improved accuracy and efficiency) to label each tooth on the downsampled scan. Guided by the segmentation outputs, our TS-MDL further selects each tooth's region of interest (ROI) on the original mesh, on which a lightweight variant of the pioneering PointNet (i.e., PointNet-Reg) is constructed to regress the corresponding landmark heatmaps. Our TS-MDL was evaluated on a real-world dataset, showing promising segmentation and localization performance. Specifically, iMeshSegNet in the first stage of TS-MDL reached an average Dice similarity coefficient (DSC) of 0.964±0.054, significantly outperforming the original MeshSegNet. In the second stage, PointNet-Reg achieved a mean absolute error (MAE) of 0.597±0.761 mm between predicted and ground-truth landmarks across 66 landmarks, which is superior compared with other networks for landmark detection. All these results suggest the potential usage of our TS-MDL in clinical practice.
Intermediate features of a pre-trained model have been shown informative for making accurate predictions on downstream tasks, even if the model backbone is kept frozen. The key challenge is how to utilize these intermediate features given their gigantic amount. We propose visual query tuning (VQT), a simple yet effective approach to aggregate intermediate features of Vision Transformers. By introducing a handful of learnable ``query'' tokens to each layer, VQT leverages the inner workings of Transformers to ``summarize'' rich intermediate features of each layer, which can then be used to train the prediction heads of downstream tasks. As VQT keeps the intermediate features intact and only learns to combine them, it enjoys memory efficiency in training, compared to many other parameter-efficient fine-tuning approaches that learn to adapt features and need back-propagation through the entire backbone. This also suggests a complementary role between VQT and those approaches in transfer learning. Empirically, VQT consistently surpasses the state-of-the-art approach that utilizes intermediate features for transfer learning and outperforms full fine-tuning in many cases. Compared to parameter-efficient approaches that adapt features, VQT achieves much higher accuracy under memory constraints. Most importantly, VQT is compatible with these approaches to attain even higher accuracy, making it a simple add-on to further boost transfer learning.
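The summarization step can be pictured as one learnable query per layer attending over that layer's frozen token features, with the per-layer summaries concatenated and fed to a downstream head. Below is a minimal pure-Python sketch of this idea under that assumption; it is not the paper's actual multi-head Transformer implementation, and all names are illustrative:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention for a single query.
    query: [d]; keys/values: [n][d]. Returns a [d] summary vector."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]

def vqt_summaries(layer_tokens, queries):
    """One (learnable) query per layer 'summarizes' that layer's
    intermediate token features; the concatenated summaries would feed a
    downstream linear head. The backbone features stay frozen and intact."""
    summary = []
    for tokens, q in zip(layer_tokens, queries):
        summary.extend(attend(q, tokens, tokens))
    return summary
```

Because the queries only read the frozen features, training needs no back-propagation through the backbone, which is the source of the memory savings the abstract mentions.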
Autonomous vehicles must be able to reliably handle adverse weather conditions (e.g., snow) to operate safely. In this paper, we investigate the idea of turning sensor inputs (i.e., images) captured in an adverse condition into a benign one, upon which downstream tasks (e.g., semantic segmentation) can attain high accuracy. Prior work primarily formulates this as an unpaired image-to-image translation problem, due to the lack of paired images captured under the exact same camera poses and semantic layouts. While perfectly-aligned images are not available, one can easily obtain coarsely-paired images. For instance, many people drive the same routes daily in both good and adverse weather; thus, images captured at close-by GPS locations can form a pair. Though data from repeated traversals are unlikely to capture the same foreground objects, we argue that they provide rich contextual information to supervise the image translation model. To this end, we propose a novel training objective leveraging coarsely-aligned image pairs. We show that our coarsely-aligned training scheme leads to better image translation quality and improved downstream tasks, such as semantic segmentation, monocular depth estimation, and visual localization.
The ability to read text in images is often lacking in vision-and-language (V&L) models. How can we learn V&L models that exhibit strong scene-text understanding (STU)? In this paper, we propose PreSTU, a simple pre-training recipe specifically designed for scene-text understanding. PreSTU combines a simple OCR-aware pre-training objective with a large image-text dataset carrying off-the-shelf OCR signals. We empirically demonstrate the superiority of this pre-training objective on TextVQA, TextCaps, ST-VQA, and VizWiz-VQA. We also study which factors affect STU performance, where we highlight the importance of image resolution and dataset scale during pre-training.
Advances in perception for self-driving cars have accelerated in recent years due to the availability of large-scale datasets, which are typically collected at specific locations and under nice weather conditions. Yet, to achieve the high safety requirements, these perception systems must operate robustly under a wide variety of weather conditions, including snow and rain. In this paper, we present a new dataset to enable robust autonomous driving via a novel data collection process: data is repeatedly recorded along a 15 km route under diverse scenes (urban, highway, rural, campus), weather (snow, rain, sun), times (day/night), and traffic conditions (pedestrians, cyclists, and cars). The dataset includes images and point clouds from camera and LiDAR sensors, along with high-precision GPS/INS to establish correspondences across routes. The dataset includes road and object annotations using amodal masks to capture partial occlusions, as well as 3D bounding boxes. We demonstrate the uniqueness of this dataset by analyzing the performance of baselines on amodal segmentation of road and objects, depth estimation, and 3D object detection. The repeated routes open new research directions in object discovery, continual learning, and anomaly detection. Link to Ithaca365: https://ithaca365.mae.cornell.edu/
The effectiveness of unsupervised domain adaptation degrades when there is a large discrepancy between the source and target domains. Gradual domain adaptation (GDA) is one promising way to mitigate this issue, by leveraging additional unlabeled data that gradually shift from the source to the target. By adapting the model sequentially along the "indexed" intermediate domains, GDA substantially improves the overall adaptation performance. In practice, however, the extra unlabeled data may not be separated into intermediate domains and indexed properly, limiting the applicability of GDA. In this paper, we investigate how to discover the sequence of intermediate domains when it is not already available. Concretely, we propose a coarse-to-fine framework, which starts with a coarse domain discovery step via progressive training of a domain discriminator. This coarse domain sequence then undergoes a fine indexing step via a novel cycle-consistency loss, which encourages the next intermediate domain to preserve sufficient discriminative knowledge of the current intermediate domain. The resulting domain sequence can then be used by a GDA algorithm. On benchmark datasets of GDA, we show that our approach, which we name Intermediate DOmain Labeler (IDOL), can lead to comparable or even better adaptation performance than the pre-defined domain sequence, making GDA more applicable and robust to the quality of domain sequences. Code is available at https://github.com/hongyouc/idol.
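The coarse discovery step can be pictured as ranking the extra unlabeled examples by a source-vs-target discriminator score and chunking the ranked order into intermediate domains, closest-to-source first. The toy sketch below makes that assumption explicit; the actual IDOL method trains the discriminator progressively and then refines the indexing with the cycle-consistency loss:

```python
def coarse_domain_discovery(target_probs, num_domains):
    """Chunk unlabeled examples into coarse intermediate domains.
    target_probs: per-example discriminator probability of belonging to
    the target domain (an assumed, illustrative signal). Returns a list
    of `num_domains` index lists, ordered from source-like to
    target-like; any remainder examples are dropped for simplicity."""
    order = sorted(range(len(target_probs)), key=lambda i: target_probs[i])
    chunk = len(order) // num_domains
    return [order[k * chunk:(k + 1) * chunk] for k in range(num_domains)]
```

A GDA algorithm would then self-train along the returned sequence, adapting to each chunk in turn.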
In most of the literature on federated learning (FL), neural networks are initialized with random weights. In this paper, we present an empirical study on the effect of pre-training on FL. Specifically, we aim to investigate whether pre-training can alleviate the drastic accuracy drop when clients' decentralized data are non-IID. We focus on FedAvg, the fundamental and most widely used FL algorithm. We found that pre-training does largely close the gap between FedAvg and centralized learning under non-IID data, but this does not come from alleviating the well-known model drift problem in FedAvg's local training. Instead, pre-training helps FedAvg by making its global aggregation more stable. When pre-training with real data is not feasible for FL, we propose a novel approach to pre-train with synthetic data. On various image datasets (including one for segmentation), our approach with synthetic pre-training leads to notable gains, essentially a critical step toward scaling up federated learning for real-world applications.
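For concreteness, FedAvg's global aggregation is a size-weighted average of the clients' locally trained weights, and pre-training enters only through the round-0 initialization. A toy sketch with weights flattened to plain lists (the values and names are illustrative, not the paper's setup):

```python
def fedavg(client_weights, client_sizes):
    """One round of FedAvg global aggregation: a weighted average of
    client weight vectors, weighted by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n / total) * weights[i]
    return agg

# Pre-training changes only the starting point of round 0:
pretrained_init = [0.3, -0.1]            # from pre-training (real or synthetic)
global_weights = list(pretrained_init)   # instead of a random initialization
```

Each round, clients would train locally from `global_weights` and send back their updated weights for the next `fedavg` call.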
We study the problem of developing autonomous agents that can follow human instructions to infer and perform a sequence of actions to complete the underlying task. Significant progress has been made in recent years, especially for tasks with short horizons. However, when it comes to long-horizon tasks with extended sequences of actions, an agent can easily ignore some instructions or get stuck in the middle of the long instructions and eventually fail the task. To address this challenge, we propose a model-agnostic milestone-based task tracker (M-Track) to guide the agent and monitor its progress. Specifically, we propose a milestone builder that tags the instructions with navigation and interaction milestones, which the agent needs to complete step by step, and a milestone checker that systematically checks the agent's progress in its current milestone and determines when to proceed to the next. On the challenging ALFRED dataset, our M-Track leads to notable 33% and 52% relative improvements in unseen success rate over two competitive base models.
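The two components can be caricatured as a builder that tags each instruction step with a (type, target) milestone and a checker that returns the first unmet milestone. The keyword heuristic and names below are illustrative stand-ins for M-Track's learned components, not the paper's implementation:

```python
INTERACTION_VERBS = ("pick", "put", "open", "close", "slice", "toggle")

def milestone_builder(instruction_steps):
    """Tag each low-level instruction with a (type, target) milestone the
    agent must complete before moving on. Here the type comes from a
    keyword heuristic and the target is naively the last word."""
    milestones = []
    for step in instruction_steps:
        words = step.lower().split()
        kind = ("interaction" if any(v in words for v in INTERACTION_VERBS)
                else "navigation")
        milestones.append((kind, words[-1]))
    return milestones

def milestone_checker(milestones, completed):
    """Return the index of the current milestone: the first one not yet
    observed as completed. Equal to len(milestones) when all are done."""
    for i, milestone in enumerate(milestones):
        if milestone not in completed:
            return i
    return len(milestones)
```

At each step, the agent would condition on the current milestone returned by the checker, which is what keeps it from skipping ahead or stalling mid-instruction.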