Autonomous vehicles must often contend with conflicting planning requirements; e.g., safety and comfort can be at odds when avoiding a collision calls for slamming on the brakes. To resolve such conflicts, assigning an importance ranking to rules (i.e., imposing a rule hierarchy) has been proposed, which, in turn, induces rankings on trajectories based on the importance of the rules they satisfy. On the one hand, imposing rule hierarchies can enhance interpretability, but introduces combinatorial complexity to planning; on the other hand, differentiable reward structures can be leveraged by modern gradient-based optimization tools, but are less interpretable and unintuitive to tune. In this paper, we present an approach for equivalently expressing rule hierarchies as differentiable reward structures amenable to modern gradient-based optimizers, thereby achieving the best of both worlds. We achieve this by formulating rank-preserving reward functions that are monotonic in the rank of the trajectories induced by the rule hierarchy, i.e., higher-ranked trajectories receive higher reward. Equipped with a rule hierarchy and its corresponding rank-preserving reward function, we develop a two-stage planner that can efficiently resolve conflicting planning requirements. We demonstrate that our approach can generate motion plans at roughly 7-10 Hz for various challenging road navigation and intersection negotiation scenarios.
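To make the notion of a rank-preserving reward concrete, here is a minimal sketch (assuming binary rule satisfaction and exponentially decaying weights; the paper's formulation, which operates on continuous rule-robustness values, is not reproduced here):

```python
import numpy as np

def rank_preserving_reward(rule_satisfaction, weights=None):
    """Illustrative rank-preserving reward for a rule hierarchy.

    rule_satisfaction: booleans ordered from most to least important rule;
                       True means the trajectory satisfies that rule.
    With weights 2**(N-1-i), the scalar reward orders trajectories exactly as
    the hierarchy-induced (lexicographic) ranking does: satisfying a more
    important rule outweighs satisfying every less important rule combined,
    since 2**k > 2**(k-1) + ... + 2**0.
    """
    s = np.asarray(rule_satisfaction, dtype=float)
    n = len(s)
    if weights is None:
        weights = 2.0 ** np.arange(n - 1, -1, -1)
    return float(np.dot(weights, s))

# Trajectory A violates "avoid collision" (most important) but is comfortable;
# trajectory B brakes hard (uncomfortable) but avoids the collision.
reward_A = rank_preserving_reward([False, True, True])   # = 3
reward_B = rank_preserving_reward([True, False, False])  # = 4 > 3
```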
In modern autonomy stacks, the prediction module is critical for planning actions in the presence of other moving agents. However, failures of the prediction module can mislead the downstream planner into making unsafe decisions. Indeed, the high degree of uncertainty inherent to the trajectory prediction task ensures that such mispredictions occur frequently. Motivated by the need to improve the safety of autonomous vehicles without compromising their performance, we develop a probabilistic runtime monitor that detects when "harmful" prediction failures occur, i.e., a task-relevant failure detector. We achieve this by propagating trajectory prediction errors through the planning cost to reason about their impact on the AV. Furthermore, our detector comes equipped with performance measures on the false-positive and false-negative rates and allows for data-free calibration. In our experiments, we compare our detector against various others and find that it achieves the highest area under the receiver operating characteristic curve.
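The core idea of a task-relevant detector, judging a misprediction by its effect on the planning cost rather than by raw prediction error, can be sketched as follows; the function names and the single-threshold test are illustrative placeholders, not the paper's probabilistic monitor or its calibration procedure:

```python
def is_harmful_prediction_failure(planner_cost, plan_with, predicted_traj,
                                  realized_traj, tolerance):
    """Hedged sketch of a task-relevant (cost-aware) failure check.

    plan_with(traj)          -> ego plan computed assuming `traj` for the agent
    planner_cost(plan, traj) -> cost of executing `plan` when the agent
                                actually follows `traj`
    A misprediction is flagged as harmful only if planning against the wrong
    forecast degrades the realized planning cost beyond `tolerance`, i.e.,
    only when the prediction error actually matters for the driving task.
    """
    plan_pred = plan_with(predicted_traj)      # plan made with the forecast
    plan_true = plan_with(realized_traj)       # hindsight-informed plan
    regret = (planner_cost(plan_pred, realized_traj)
              - planner_cost(plan_true, realized_traj))
    return regret > tolerance
```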
For safe autonomous vehicle (AV) operation, it is crucial that the AV's obstacle detection module reliably detects obstacles that pose a safety threat (i.e., are safety-critical). It is therefore desirable that evaluation metrics for perception systems capture the safety-criticality of objects. Unfortunately, existing perception evaluation metrics tend to make strong assumptions about objects and ignore the dynamic interactions between agents, and thus cannot accurately capture safety risks in reality. To address these shortcomings, we introduce an interaction-aware obstacle detection evaluation metric that accounts for the closed-loop dynamic interaction between the ego vehicle and the obstacles in the scene. Borrowing existing theory from optimal control, namely Hamilton-Jacobi reachability, we present a computationally tractable method for constructing a "safety zone": a region of the state space that defines where obstacles are safety-critical for the purpose of defining safety metrics. Our proposed safety zone is mathematically complete and can be easily computed to reflect a variety of safety requirements. Using an off-the-shelf detection algorithm from the nuScenes detection challenge leaderboard, we demonstrate that our approach is computationally lightweight and better captures safety-critical perception errors than baseline approaches.
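As a rough illustration of how such a safety zone could be queried online (a minimal sketch assuming a precomputed HJ value function sampled on a grid, which in practice would come from an offline reachability solver; the sign convention and nearest-neighbor lookup are assumptions, not the paper's implementation):

```python
import numpy as np

def in_safety_zone(value_grid, grid_axes, rel_state):
    """Decide whether an obstacle counts as safety-critical by looking up a
    precomputed Hamilton-Jacobi value function.

    value_grid : array of HJ value-function samples over the relative state
    grid_axes  : list of 1-D coordinate arrays, one per state dimension
    rel_state  : obstacle state relative to the ego vehicle
    Convention assumed here: states with non-positive value belong to the
    unsafe (backward reachable) set, so the obstacle is safety-critical.
    """
    idx = tuple(int(np.argmin(np.abs(ax - x)))          # nearest grid node
                for ax, x in zip(grid_axes, rel_state))
    return value_grid[idx] <= 0.0
```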
As safety-critical autonomous vehicles (AVs) will soon become pervasive in our society, a number of safety concepts for trustworthy AV deployment have recently been proposed throughout industry and academia. Yet, agreeing on an appropriate safety concept remains a challenging task. In this paper, we advocate for the use of Hamilton-Jacobi (HJ) reachability as a unifying mathematical framework for comparing existing safety concepts, and propose ways of tailoring safety concepts via elements of this framework, thereby extending their applicability to scenarios that carry implicit, data-driven expectations about agent behavior. Specifically, we show that (i) the predominant existing safety concepts can be embedded in the HJ reachability framework, enabling a common language for comparing and contrasting modeling assumptions, and (ii) HJ reachability can serve as an inductive bias for effectively reasoning, in a learning context, about two crucial yet often overlooked aspects of safety: responsibility and context-dependency.
This paper presents a technique, named STLCG, for computing the quantitative semantics of Signal Temporal Logic (STL) formulas using computation graphs. STLCG provides a platform that enables the incorporation of logical specifications into robotics problems that benefit from gradient-based solutions. Specifically, STL is a powerful and expressive formal language that can specify spatial and temporal properties of signals generated by both continuous and hybrid systems. The quantitative semantics of STL provide a robustness metric, i.e., how much a signal satisfies or violates an STL specification. In this work, we devise a systematic methodology for translating STL robustness formulas into computation graphs. With this representation, and by leveraging off-the-shelf automatic differentiation tools, we are able to efficiently backpropagate through STL robustness formulas, thus enabling a natural and easy-to-use integration of STL specifications with many gradient-based approaches. Through a number of examples drawn from various robotics applications, we demonstrate that STLCG is versatile, computationally efficient, and capable of incorporating human domain knowledge into the problem formulation.
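To illustrate the idea of differentiable STL robustness (a sketch in the spirit of STLCG, not the STLCG library's API; the soft-minimum temperature is an assumption), consider the formula "always (x > c)", whose robustness is the minimum margin over the trace:

```python
import torch

def robustness_always_greater(signal, threshold, temperature=None):
    """Robustness of "always (x > c)" over a 1-D trace: min_t (x_t - c).
    Positive iff the formula is satisfied; magnitude measures the margin.
    A soft minimum keeps gradients informative for all time steps."""
    margins = signal - threshold
    if temperature is None:
        return margins.min()                               # exact semantics
    # smooth lower bound on the min: -T * logsumexp(-margins / T)
    return -temperature * torch.logsumexp(-margins / temperature, dim=0)

# Backpropagating through the robustness to nudge a trajectory toward
# satisfying "always (x > 0.5)":
x = torch.linspace(0.0, 1.0, 20, requires_grad=True)
rho = robustness_always_greater(x, 0.5, temperature=0.1)
rho.backward()          # x.grad now points toward higher robustness
```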
The task of reconstructing 3D human motion has wide-ranging applications. The gold-standard motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware, and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap as they take single-view videos as inputs. Replacing multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, thus making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not present in the MoCap datasets used for training. This reduces their appeal, as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field, which is optimized to represent the underlying 3D motions across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo using 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
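A minimal sketch of the shared-field idea follows; the architecture, latent-code dimension, and orthographic reprojection loss are illustrative assumptions rather than the paper's model:

```python
import torch
import torch.nn as nn

class NeuralMotionField(nn.Module):
    """One network represents the 3D motion of an action, with a small
    per-video latent code absorbing instance variation; it is fit to 2D
    keypoints from several videos of the same action at once."""
    def __init__(self, n_videos, n_joints=17, code_dim=8, hidden=256):
        super().__init__()
        self.codes = nn.Embedding(n_videos, code_dim)     # per-video code
        self.mlp = nn.Sequential(
            nn.Linear(1 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints * 3))
        self.n_joints = n_joints

    def forward(self, phase, video_idx):
        # phase in [0, 1] indexes progress through the action
        z = self.codes(video_idx)                         # (B, code_dim)
        inp = torch.cat([phase.unsqueeze(-1), z], dim=-1)
        return self.mlp(inp).view(-1, self.n_joints, 3)   # 3D joints

def reprojection_loss(joints3d, keypoints2d):
    """Orthographic-projection stand-in for the 2D reprojection term."""
    return ((joints3d[..., :2] - keypoints2d) ** 2).mean()
```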
Neural compression offers a domain-agnostic approach to creating codecs for lossy or lossless compression via deep generative models. For sequence compression, however, most deep sequence models have costs that scale with the sequence length rather than the sequence complexity. In this work, we instead treat data sequences as observations from an underlying continuous-time process and learn how to efficiently discretize while retaining information about the full sequence. As a consequence of decoupling sequential information from its temporal discretization, our approach allows for greater compression rates and lower computational complexity. Moreover, the continuous-time approach naturally allows us to decode at different time intervals. We empirically verify our approach on multiple domains involving compression of video and motion capture sequences, showing that our approach can automatically achieve reductions in bit rate by learning how to discretize.
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
Self-supervised pre-trained transformers have improved the state of the art on a variety of speech tasks. Due to the quadratic time and space complexity of self-attention, they usually operate at the level of relatively short (e.g., utterance) segments. In this paper, we study the use of context, i.e., surrounding segments, during fine-tuning and propose a new approach called context-aware fine-tuning. We attach a context module on top of the last layer of a pre-trained model to encode the whole segment into a context embedding vector which is then used as an additional feature for the final prediction. During the fine-tuning stage, we introduce an auxiliary loss that encourages this context embedding vector to be similar to context vectors of surrounding segments. This allows the model to make predictions without access to these surrounding segments at inference time and requires only a tiny overhead compared to standard fine-tuned models. We evaluate the proposed approach using the SLUE and Librilight benchmarks for several downstream tasks: Automatic speech recognition (ASR), named entity recognition (NER), and sentiment analysis (SA). The results show that context-aware fine-tuning not only outperforms a standard fine-tuning baseline but also rivals a strong context injection baseline that uses neighboring speech segments during inference.
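A minimal sketch of the idea, with illustrative module choices (the GRU pooling, feature dimensions, and cosine auxiliary loss are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAwareHead(nn.Module):
    """A small context module pools the segment's last-layer features into a
    context vector, which is concatenated with the pooled frame features for
    the final prediction."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.context = nn.GRU(feat_dim, feat_dim, batch_first=True)
        self.classifier = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, hidden_states):
        # hidden_states: (B, T, D) last-layer features of the segment
        _, ctx = self.context(hidden_states)              # (1, B, D)
        ctx = ctx.squeeze(0)                              # (B, D)
        pooled = hidden_states.mean(dim=1)                # (B, D)
        logits = self.classifier(torch.cat([pooled, ctx], dim=-1))
        return logits, ctx

def context_similarity_loss(ctx, neighbor_ctx):
    """Auxiliary loss pulling the segment's context vector toward the context
    vectors of its surrounding segments; neighbors are needed only during
    fine-tuning, so inference requires no neighboring audio."""
    return (1.0 - F.cosine_similarity(ctx, neighbor_ctx, dim=-1)).mean()
```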
Recent neural compression methods have been based on the popular hyperprior framework. It relies on Scalar Quantization and offers a very strong compression performance. This contrasts with recent advances in image generation and representation learning, where Vector Quantization is more commonly employed. In this work, we attempt to bring these lines of research closer by revisiting vector quantization for image compression. We build upon the VQ-VAE framework and introduce several modifications. First, we replace the vanilla vector quantizer with a product quantizer. This intermediate solution between vector and scalar quantization allows for a much wider set of rate-distortion points: It implicitly defines high-quality quantizers that would otherwise require intractably large codebooks. Second, inspired by the success of Masked Image Modeling (MIM) in the context of self-supervised learning and generative image models, we propose a novel conditional entropy model which improves entropy coding by modelling the co-dependencies of the quantized latent codes. The resulting PQ-MIM model is surprisingly effective: its compression performance is on par with recent hyperprior methods. It also outperforms HiFiC in terms of FID and KID metrics when optimized with perceptual losses (e.g. adversarial). Finally, since PQ-MIM is compatible with image generation frameworks, we show qualitatively that it can operate under a hybrid mode between compression and generation, with no further training or finetuning. As a result, we explore the extreme compression regime where an image is compressed into 200 bytes, i.e., less than a tweet.
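To recall what product quantization buys over a single vector quantizer, here is a small illustrative sketch (not the PQ-MIM implementation): splitting the latent into M sub-vectors, each with its own small codebook of K entries, implicitly defines K^M codewords without ever storing a codebook that large.

```python
import numpy as np

def product_quantize(latents, codebooks):
    """Product quantization of a latent vector.

    latents   : (D,) latent vector
    codebooks : list of M arrays, each (K, D // M), one small codebook per
                sub-vector; together they act like a quantizer with K**M
                effective codewords.
    Returns the M code indices and the reconstructed vector.
    """
    M = len(codebooks)
    subvectors = np.split(latents, M)
    indices, recon = [], []
    for sub, cb in zip(subvectors, codebooks):
        dists = np.linalg.norm(cb - sub, axis=1)   # distance to each code
        j = int(np.argmin(dists))
        indices.append(j)
        recon.append(cb[j])
    return indices, np.concatenate(recon)
```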