Analyzing classification model performance is a crucial task for machine learning practitioners. While practitioners often use count-based metrics derived from the confusion matrix, such as accuracy, many applications, such as weather forecasting, sports betting, or patient risk prediction, rely on a classifier's predicted probabilities rather than its predicted labels. In these cases, practitioners are concerned with producing a calibrated model, that is, one whose output probabilities reflect those of the true distribution. Model calibration is typically analyzed visually through static reliability diagrams; however, because of the strong aggregation they require, traditional calibration visualizations can suffer from a variety of drawbacks. Furthermore, count-based approaches are unable to sufficiently analyze model calibration. We present Calibrate, an interactive reliability diagram that addresses the above issues. Calibrate constructs a reliability diagram that is resistant to the drawbacks of traditional approaches and allows for interactive subgroup analysis and instance-level inspection. We demonstrate the utility of Calibrate through use cases on both real-world and synthetic data, and further validate Calibrate by presenting the results of a think-aloud experiment with data scientists who routinely analyze model calibration.
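As background for the kind of diagram Calibrate builds on, the sketch below computes a conventional binned reliability diagram from predicted probabilities. It is a minimal illustration of the standard aggregation step the abstract critiques, not the Calibrate tool itself; the bin count and the synthetic data are placeholder assumptions.

```python
import numpy as np

def reliability_curve(y_true, y_prob, n_bins=10):
    """Standard binned reliability diagram: mean confidence vs. observed
    frequency per probability bin (the strong aggregation noted above)."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(y_prob, bins[1:-1])          # bin index for each prediction
    mean_conf, obs_freq, counts = [], [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            mean_conf.append(y_prob[mask].mean())  # average predicted probability
            obs_freq.append(y_true[mask].mean())   # empirical positive rate
            counts.append(int(mask.sum()))
    return np.array(mean_conf), np.array(obs_freq), np.array(counts)

# Toy usage with synthetic, slightly over-confident predictions.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
y = (rng.uniform(size=1000) < 0.8 * p).astype(int)
conf, freq, n = reliability_curve(y, p)
```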
Audio applications involving environmental sound analysis increasingly use general-purpose audio representations, also known as embeddings, for transfer learning. Recently, the Holistic Evaluation of Audio Representations (HEAR) evaluated 29 embedding models on 19 diverse tasks. However, the validity of such an evaluation depends on the variation already captured in a given dataset. Therefore, for a given data domain, it remains unclear how the representations are affected by the variation introduced by the myriad range of microphones and acoustic conditions, commonly referred to as channel effects. In this work, we aim to extend HEAR to evaluate invariance to channel effects. To do so, we imitate channel effects by injecting perturbations into the audio signal and measure the shift of the new (perturbed) embeddings with three distance measures, making the evaluation domain-dependent but not task-dependent. Combined with downstream performance, this helps us make more informed predictions about how robust an embedding is to channel effects. We evaluate two embeddings, YAMNet and OpenL3, on a monophonic (UrbanSound8K) and a polyphonic (SONYC-UST) urban dataset. We show that a single distance measure does not suffice in such a task-agnostic evaluation. Although Fréchet Audio Distance (FAD) correlates with the trend of the performance drop on the downstream task, we show that FAD needs to be studied together with the other distances to obtain a clear understanding of the overall effect of the perturbation. In terms of embedding performance, we find OpenL3 to be more robust than YAMNet, which aligns with the HEAR evaluation.
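The evaluation recipe described above (perturb the audio, re-embed, measure the shift) can be illustrated with a short sketch. The `embed` function and the additive-noise perturbation below are placeholder assumptions standing in for a real embedding model (e.g. YAMNet or OpenL3 inference) and for the paper's channel-effect simulations; the distance shown is a plain per-clip Euclidean shift, not FAD.

```python
import numpy as np

def embed(audio: np.ndarray) -> np.ndarray:
    """Placeholder for a real embedding model (e.g. YAMNet/OpenL3 inference)."""
    raise NotImplementedError

def add_noise(audio: np.ndarray, snr_db: float, rng) -> np.ndarray:
    """Crude channel-effect stand-in: additive white noise at a target SNR."""
    sig_pow = np.mean(audio ** 2)
    noise_pow = sig_pow / (10 ** (snr_db / 10))
    return audio + rng.normal(scale=np.sqrt(noise_pow), size=audio.shape)

def embedding_shift(clips, snr_db=20.0, seed=0):
    """Mean distance between clean and perturbed embeddings over a dataset."""
    rng = np.random.default_rng(seed)
    clean = np.stack([embed(c) for c in clips])
    pert = np.stack([embed(add_noise(c, snr_db, rng)) for c in clips])
    # Simple per-clip Euclidean shift; the paper uses several distances incl. FAD.
    return float(np.mean(np.linalg.norm(clean - pert, axis=-1)))
```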
The task of reconstructing 3D human motion has wide-ranging applications. The gold standard Motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap as they take single-view videos as inputs. Replacing the multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, thus making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not in existing MoCap datasets used for training. This reduces their appeal, as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field. It is optimized to represent the underlying 3D motions across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo using 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
Multi-object state estimation is a fundamental problem for robotic applications where a robot must interact with other moving objects. Typically, other objects' relevant state features are not directly observable, and must instead be inferred from observations. Particle filtering can perform such inference given approximate transition and observation models. However, these models are often unknown a priori, yielding a difficult parameter estimation problem since observations jointly carry transition and observation noise. In this work, we consider learning maximum-likelihood parameters using particle methods. Recent methods addressing this problem typically differentiate through time in a particle filter, which requires workarounds to the non-differentiable resampling step, that yield biased or high variance gradient estimates. By contrast, we exploit Fisher's identity to obtain a particle-based approximation of the score function (the gradient of the log likelihood) that yields a low variance estimate while only requiring stepwise differentiation through the transition and observation models. We apply our method to real data collected from autonomous vehicles (AVs) and show that it learns better models than existing techniques and is more stable in training, yielding an effective smoother for tracking the trajectories of vehicles around an AV.
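For reference, Fisher's identity used above expresses the score of the marginal likelihood as a smoothing expectation. Under the usual state-space factorization with latent states x_{1:T}, observations y_{1:T}, initial density \mu_\theta, transition density f_\theta and observation density g_\theta (a standard statement, not specific to this paper's notation):

\[
\nabla_\theta \log p_\theta(y_{1:T})
  = \mathbb{E}_{p_\theta(x_{1:T} \mid y_{1:T})}\big[\nabla_\theta \log p_\theta(x_{1:T}, y_{1:T})\big],
\qquad
\log p_\theta(x_{1:T}, y_{1:T})
  = \log \mu_\theta(x_1) + \sum_{t=2}^{T} \log f_\theta(x_t \mid x_{t-1}) + \sum_{t=1}^{T} \log g_\theta(y_t \mid x_t).
\]

Approximating the smoothing expectation with particles therefore yields a gradient estimate that only requires stepwise differentiation of f_\theta and g_\theta, never of the resampling step.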
The most widely studied explainable AI (XAI) approaches are unsound. This is the case with well-known model-agnostic explanation approaches, and it is also the case with approaches based on saliency maps. One solution is to consider intrinsic interpretability, which does not exhibit the drawback of unsoundness. Unfortunately, intrinsic interpretability can display unwieldy explanation redundancy. Formal explainability represents the alternative to these non-rigorous approaches, with one example being PI-explanations. Unfortunately, PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size. Recently, it has been observed that the (absolute) rigor of PI-explanations can be traded off for a smaller explanation size, by computing the so-called relevant sets. Given some positive {\delta}, a set S of features is {\delta}-relevant if, when the features in S are fixed, the probability of getting the target class exceeds {\delta}. However, even for very simple classifiers, the complexity of computing relevant sets of features is prohibitive, with the decision problem being NP^{PP}-complete for circuit-based classifiers. In contrast with earlier negative results, this paper investigates practical approaches for computing relevant sets for a number of widely used classifiers that include Decision Trees (DTs), Naive Bayes Classifiers (NBCs), and several families of classifiers obtained from propositional languages. Moreover, the paper shows that, in practice, and for these families of classifiers, relevant sets are easy to compute. Furthermore, the experiments confirm that succinct sets of relevant features can be obtained for the families of classifiers considered.
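For concreteness, the {\delta}-relevance condition above can be written as follows, with \kappa the classifier, c the target class, v the instance at hand, and the probability taken over the features left free under the assumed input distribution (a standard formalization of the prose definition):

\[
\Pr_{x}\bigl(\kappa(x) = c \mid x_{S} = v_{S}\bigr) > \delta ,
\]

i.e. a set S is {\delta}-relevant if fixing the features in S to their values in v keeps the probability of predicting the target class above {\delta}; the smaller the set satisfying this for a given {\delta}, the more succinct the explanation.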
This study concerns the formulation and application of Bayesian optimal experimental design to symbolic discovery, which is the inference from observational data of predictive models taking general functional forms. We apply constrained first-order methods to optimize an appropriate selection criterion, using Hamiltonian Monte Carlo to sample from the prior. A step for computing the predictive distribution, involving convolution, is computed via either numerical integration, or via fast transform methods.
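The abstract leaves the selection criterion unspecified; a common choice in Bayesian optimal experimental design, shown here only as a representative form under that assumption, is the expected information gain of a design d, which prior samples (here obtained with Hamiltonian Monte Carlo) allow estimating by nested Monte Carlo:

\[
U(d) = \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}\Big[\log p(y \mid \theta, d) - \log \textstyle\int p(y \mid \theta', d)\, p(\theta')\, d\theta'\Big],
\]

where the inner integral is the predictive distribution mentioned above, which can be evaluated by numerical integration or, when it takes the form of a convolution, by fast transform methods.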
Transformers are powerful visual learners, in large part due to their conspicuous lack of manually-specified priors. This flexibility can be problematic in tasks that involve multiple-view geometry, due to the near-infinite possible variations in 3D shapes and viewpoints (requiring flexibility), and the precise nature of projective geometry (obeying rigid laws). To resolve this conundrum, we propose a "light touch" approach, guiding visual Transformers to learn multiple-view geometry but allowing them to break free when needed. We achieve this by using epipolar lines to guide the Transformer's cross-attention maps, penalizing attention values outside the epipolar lines and encouraging higher attention along these lines since they contain geometrically plausible matches. Unlike previous methods, our proposal does not require any camera pose information at test-time. We focus on pose-invariant object instance retrieval, where standard Transformer networks struggle, due to the large differences in viewpoint between query and retrieved images. Experimentally, our method outperforms state-of-the-art approaches at object retrieval, without needing pose information at test-time.
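As a rough illustration of the mechanism described above, the sketch below penalizes cross-attention mass that falls far from the epipolar lines induced by a fundamental matrix. The token coordinates, the distance margin, and the plain averaging are illustrative assumptions; the actual loss and architecture of the method are not reproduced here.

```python
import numpy as np

def epipolar_attention_penalty(attn, pts1, pts2, F, margin=2.0):
    """attn:  (N1, N2) cross-attention map from image-1 tokens to image-2 tokens.
    pts1:  (N1, 2) pixel coordinates of image-1 token centres.
    pts2:  (N2, 2) pixel coordinates of image-2 token centres.
    F:     (3, 3) fundamental matrix mapping image-1 points to image-2 lines.
    Returns a scalar penalty on attention mass lying off the epipolar lines."""
    h1 = np.concatenate([pts1, np.ones((len(pts1), 1))], axis=1)   # homogeneous coords
    h2 = np.concatenate([pts2, np.ones((len(pts2), 1))], axis=1)
    lines = h1 @ F.T                                               # epipolar line l_i = F x_i, shape (N1, 3)
    num = np.abs(lines @ h2.T)                                     # |l_i . x_j| for every token pair, (N1, N2)
    den = np.linalg.norm(lines[:, :2], axis=1, keepdims=True) + 1e-8
    dist = num / den                                               # point-to-line distances in pixels
    off_line = (dist > margin).astype(attn.dtype)                  # 1 where geometrically implausible
    return float((attn * off_line).sum() / max(attn.size, 1))
```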
The deployment of humanoid robots in demanding situations such as search and rescue requires highly intelligent decision making and proficient sensorimotor skills. A promising solution is to leverage the strengths of humans via teleoperation by interconnecting the robot and the human. To create seamless operation, this paper presents a dynamic telelocomotion framework that synchronizes the gait of a human pilot with the walking of a bipedal robot. First, we introduce a method to generate a virtual human walking model from the stepping behavior of the human pilot, which serves as the reference for the robot's walking. Second, the dynamics of the walking reference and of the robot's walking are synchronized by applying forces to both the human pilot and the robot, so as to achieve dynamic similarity between the two systems. This enables the human pilot to continuously perceive and cancel any asynchrony between the walking reference and the robot. A consistent step-placement strategy for the robot is derived to maintain dynamic similarity through step transitions. Using our human-machine interface, we demonstrate that a human pilot can achieve stable and synchronous telelocomotion of a simulated robot through stepping-in-place, walking, and disturbance-rejection experiments. This work provides a fundamental step towards transferring human intelligence and reflexes to humanoid robots.
Teleoperation has emerged as an alternative to fully autonomous systems for achieving human-level capabilities on humanoid robots. Specifically, whole-body-control teleoperation is a promising strategy for commanding humanoids, but it demands greater physical and mental effort. To mitigate this limitation, researchers have proposed shared-control methods that incorporate robot decision making to assist the human on low-level tasks, further reducing the operation effort. However, shared-control methods for whole-body-level humanoid teleoperation have not yet been explored. In this work, we study how whole-body feedback affects the performance of different shared-control methods in different environments. A time-derivative sigmoid function (TDSF) is proposed to generate more intuitive force feedback from obstacles. Comprehensive human experiments were conducted, and the results show that force feedback enhances whole-body teleoperation performance in unfamiliar environments but can degrade performance in familiar environments. Conveying the robot's intention through haptics yields further improvements, since the operator can use the force feedback for short-horizon planning and the visual feedback for long-horizon planning.
A well-known failure mode of neural networks is making high-confidence erroneous predictions, especially for data that differs from the training distribution. Such unsafe behavior limits their applicability. To address this, we show that models providing accurate confidence levels can be defined by adding constraints on their internal representations. Specifically, we encode class labels as fixed, unique binary vectors, or class codes, and use these to enforce class-dependent activation patterns throughout the model. The resulting predictors are dubbed Total Activation Classifiers (TAC), and TAC is used as an additional component on top of a base classifier to indicate how reliable a prediction is. Given a data instance, TAC slices intermediate representations into disjoint sets and reduces each slice to a scalar, yielding an activation profile. During training, activation profiles are pushed towards the code assigned to the given training instance. At test time, one can predict the class corresponding to the code that best matches the example's activation profile. Empirically, we observe that the resemblance between activation patterns and their corresponding codes yields an inexpensive unsupervised approach for inducing discriminative confidence scores. Namely, we show that TAC is at least as good as state-of-the-art confidence scores extracted from existing models, while strictly improving the model's value in the rejection setting. TAC is also observed to work well across multiple types of architectures and data modalities.
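A minimal sketch of the profile-matching idea described above follows. The slicing scheme (equal-width chunks), the reduction (mean), the crude binarization, and the code-matching score (per-bit agreement) are illustrative assumptions, not the exact design of TAC.

```python
import numpy as np

def activation_profile(features, code_len):
    """Slice a flat feature vector into `code_len` disjoint chunks and reduce
    each chunk to a scalar (here, its mean), giving one activation profile."""
    chunks = np.array_split(features, code_len)
    return np.array([c.mean() for c in chunks])

def predict_with_confidence(features, class_codes):
    """class_codes: (num_classes, code_len) fixed binary codes.
    Returns (predicted class, score) by matching the thresholded profile
    against every class code; the agreement doubles as a confidence score."""
    profile = activation_profile(features, class_codes.shape[1])
    binarized = (profile > profile.mean()).astype(int)        # crude binarization
    matches = (binarized == class_codes).mean(axis=1)          # per-class agreement
    cls = int(np.argmax(matches))
    return cls, float(matches[cls])

# Toy usage: 4 classes, 8-bit codes, random intermediate features.
rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(4, 8))
cls, conf = predict_with_confidence(rng.normal(size=128), codes)
```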