Analyzing classification model performance is a crucial task for machine learning practitioners. While practitioners often use count-based metrics derived from the confusion matrix, such as accuracy, many applications, such as weather forecasting, sports betting, or patient risk prediction, rely on a classifier's predicted probabilities rather than its predicted labels. In these cases, practitioners are concerned with producing a calibrated model, that is, one whose output probabilities reflect the true distribution. Model calibration is typically analyzed visually with static reliability diagrams; however, because of the heavy aggregation they require, traditional calibration visualizations can suffer from a variety of drawbacks, and count-based approaches are unable to adequately analyze model calibration. We present Calibrate, an interactive reliability diagram that addresses the above issues. Calibrate constructs a reliability diagram that is resistant to the drawbacks of traditional approaches and allows for interactive subgroup analysis and instance-level inspection. We demonstrate the utility of Calibrate through use cases on both real-world and synthetic data, and further validate Calibrate through results of think-aloud experiments with data scientists who routinely analyze model calibration.
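For readers unfamiliar with reliability diagrams, the sketch below shows the underlying computation in its simplest, static form: predicted probabilities are grouped into bins, and per-bin mean confidence is compared against observed accuracy, with the gap summarized by the expected calibration error. This is a generic illustration with function names of our own, not the Calibrate tool itself.

```python
import numpy as np

def reliability_bins(y_true, y_prob, n_bins=10):
    """Group predicted probabilities into bins and compare, per bin,
    mean predicted confidence against observed accuracy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(y_prob, edges[1:-1])      # bin index in [0, n_bins - 1]
    conf, acc, counts = [], [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            conf.append(y_prob[mask].mean())    # mean confidence in the bin
            acc.append(y_true[mask].mean())     # empirical accuracy in the bin
            counts.append(mask.sum())
    return np.array(conf), np.array(acc), np.array(counts)

def expected_calibration_error(conf, acc, counts):
    """Count-weighted gap between per-bin confidence and accuracy."""
    weights = counts / counts.sum()
    return float(np.sum(weights * np.abs(conf - acc)))
```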
Explainable artificial intelligence (XAI) methods lack ground truth. In its place, method developers rely on axioms to determine the desirable properties of their explanations' behavior. For high-stakes uses of machine learning that require explainability, it is not sufficient to rely on axioms, as the implementation, or its use, can fall short of the ideal. As a result, there is active research on validating the performance of XAI methods, and the need for validation is especially magnified in domains that rely on XAI. A procedure frequently used to assess their utility, and to some extent their fidelity, is an ablation study: by perturbing the input variables in rank order of importance, the goal is to assess the sensitivity of the model's performance. Perturbing important variables should correlate with a larger decrease in measures of model capability than perturbing less important features. While the intent is clear, the actual implementation details have not been studied rigorously for tabular data. Using five datasets, three XAI methods, four baselines, and three perturbations, we aim to show 1) how varying perturbations and adding simple guardrails can help avoid potentially flawed conclusions, 2) how the treatment of categorical variables is an important consideration in both post-hoc explainability and ablation studies, and 3) how to identify useful baselines for XAI methods and viable perturbations for ablation studies.
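The ablation procedure under study can be sketched as below, with assumptions of our own: mean imputation as the perturbation, a single global importance ranking, and a scikit-learn-style `predict_proba` interface. The paper compares several baselines and perturbations rather than committing to one.

```python
import numpy as np

def ablation_curve(model, X, y, importance, metric, baseline=None):
    """Perturb features in decreasing order of attributed importance and
    record model performance after each step.

    importance: per-feature attribution scores averaged over the dataset.
    baseline:   per-feature replacement values; mean imputation is one simple choice.
    metric:     callable(y_true, y_score) -> float, e.g. ROC AUC.
    Note: for one-hot encoded categorical variables, all dummy columns of a
    variable should be perturbed together, which is one of the pitfalls studied.
    """
    if baseline is None:
        baseline = X.mean(axis=0)            # mean-imputation perturbation
    order = np.argsort(importance)[::-1]     # most important features first
    X_pert = X.copy()
    scores = [metric(y, model.predict_proba(X_pert)[:, 1])]  # unperturbed reference
    for j in order:
        X_pert[:, j] = baseline[j]           # ablate feature j for every row
        scores.append(metric(y, model.predict_proba(X_pert)[:, 1]))
    return np.array(scores)  # a faithful ranking should produce an early, steep drop
```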
Local explainability methods -- those that seek to generate an explanation for each prediction -- are increasingly prevalent because practitioners need to rationalize their model outputs. However, comparing local explainability methods is difficult, since each produces outputs in different scales and dimensions. Furthermore, due to the stochastic nature of some explainability methods, different runs of a method may produce contradictory explanations for a given observation. In this paper, we propose a topology-based framework for extracting a simplified representation from a set of local explanations. We do so by first modeling the relationship between the explanation space and the model predictions as a scalar function. Then, we compute the topological skeleton of this function. This topological skeleton acts as a signature for such functions, which we use to compare different explanation methods. We demonstrate that our framework not only reliably identifies differences between explainability techniques but also provides stable representations. We then show how our framework can be used to identify appropriate parameters for local explainability methods. Our framework is simple, does not require complex optimization, and can be broadly applied to most local explanation methods. We believe the practicality and versatility of our approach will help promote topology-based approaches as a tool for understanding and comparing explanation methods.
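One minimal way to realize this idea, under simplifying assumptions of our own, is to treat each explanation as a point, use the model prediction as a scalar function over those points, and compute 0-dimensional sublevel-set persistence (a merge tree) on a k-nearest-neighbor graph. The paper's actual skeleton construction may differ; this sketch only illustrates how a topological signature of explanation-versus-prediction structure can be extracted and compared.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def merge_tree_persistence(explanations, predictions, k=10):
    """0-dimensional sublevel-set persistence of the prediction values sampled
    at explanation points, over a k-NN graph (a simplified 'skeleton')."""
    n = len(predictions)
    graph = kneighbors_graph(explanations, k, mode="connectivity")
    graph = graph.maximum(graph.T).tocoo()     # symmetrize the neighbor relation
    adj = [[] for _ in range(n)]
    for a, b in zip(graph.row.tolist(), graph.col.tolist()):
        adj[a].append(b)

    order = np.argsort(predictions)            # sweep vertices from low to high value
    parent = np.full(n, -1)                    # union-find; -1 = not yet swept in
    birth = {}                                 # component root -> birth value
    pairs = []                                 # (birth, death) persistence pairs

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    for v in order:
        parent[v] = v
        birth[v] = predictions[v]
        for u in adj[v]:
            if parent[u] == -1:
                continue                       # neighbor not yet alive
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            # elder rule: the younger component (larger birth value) dies here
            young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
            pairs.append((birth[young], predictions[v]))
            parent[young] = old
    return pairs  # persistence pairs used as a signature to compare explanation methods
```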
Counterfactual explanations represent the minimal change to a data sample that alters its predicted classification, typically from an unfavorable initial class to a desired target class. Counterfactuals help answer questions such as "what would need to change in this application for the loan to be approved?". Several recently proposed approaches to generating counterfactuals involve differing definitions of "plausible" counterfactuals and methods for producing them. However, many of these methods are computationally intensive and provide unconvincing explanations. Here we introduce SharpShooter, a method for binary classification that begins by creating a projected version of the input that is classified as the target class. Counterfactual candidates are then generated in latent space along the interpolation line between the input and its projection. We then show that our framework translates core characteristics of a sample to its counterfactual through the use of learned representations. Furthermore, we show that SharpShooter is competitive across common quality metrics on tabular and image datasets, compares favorably to two comparable methods, and excels on measures of realism, making it well suited for high-velocity machine learning applications that require timely explanations.
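Below is a minimal sketch of the interpolation step, assuming a pretrained autoencoder (`encode`/`decode`) and a classifier. The projection onto the target class is crudely approximated here by the nearest target-class training example rather than learned, so this illustrates the candidate-generation idea rather than the method itself.

```python
import numpy as np

def counterfactual_candidates(x, X_target_class, encode, decode, classifier,
                              n_steps=20, target=1):
    """Generate counterfactual candidates by interpolating, in latent space,
    between an input and a projection of it onto the target class."""
    z_x = encode(x[None, :])[0]
    # nearest target-class example in latent space as a stand-in projection
    Z_t = encode(X_target_class)
    proj = Z_t[np.argmin(np.linalg.norm(Z_t - z_x, axis=1))]

    candidates = []
    for alpha in np.linspace(0.0, 1.0, n_steps):
        z = (1 - alpha) * z_x + alpha * proj          # walk along the interpolation line
        x_cf = decode(z[None, :])[0]
        if classifier.predict(x_cf[None, :])[0] == target:
            candidates.append((alpha, x_cf))          # smallest alpha = smallest change
    return candidates
```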
Remote sensing imagery provides comprehensive views of the Earth, where different sensors collect complementary data at different spatial scales. Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, with the resulting models used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image determines the scale of the ViT positional encoding, not the image resolution. Scale-MAE encodes the masked image with a standard ViT backbone, and then decodes the masked image through a bandpass filter to reconstruct low/high frequency images at lower/higher scales. We find that tasking the network with reconstructing both low/high frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average of a $5.0\%$ non-parametric kNN classification improvement across eight remote sensing datasets compared to current state-of-the-art and obtains a $0.9$ mIoU to $3.8$ mIoU improvement on the SpaceNet building segmentation transfer task for a range of evaluation scales.
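The key departure from a standard ViT is that patch positions are laid out in ground distance rather than pixel units. A 1-D sketch of such a GSD-scaled sin-cos encoding follows; the reference GSD and the 1-D simplification are our assumptions, and Scale-MAE applies the idea per image axis within its full pretraining pipeline.

```python
import numpy as np

def gsd_positional_encoding(num_patches, embed_dim, gsd, reference_gsd=1.0):
    """Sin-cos positional encoding whose spatial extent is set by the ground
    sample distance (metres/pixel) of the image, not its pixel resolution.

    Two images covering the same ground area receive consistent encodings
    even if their pixel grids differ."""
    # positions scaled by ground distance covered rather than patch index alone
    coords = np.arange(num_patches) * (gsd / reference_gsd)
    half = embed_dim // 2
    freqs = 1.0 / (10000 ** (np.arange(half) / half))
    angles = np.outer(coords, freqs)                  # (num_patches, half)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)
```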
Traditional screening practices for anxiety and depression pose an impediment to monitoring and treating these conditions effectively. However, recent advances in NLP and speech modelling allow textual, acoustic, and hand-crafted language-based features to jointly form the basis of future mental health screening and condition detection. Speech is a rich and readily available source of insight into an individual's cognitive state, and by leveraging different aspects of speech, we can develop new digital biomarkers for depression and anxiety. To this end, we propose a multi-modal system for the screening of depression and anxiety from self-administered speech tasks. The proposed model integrates deep-learned features from audio and text, as well as hand-crafted features that are informed by clinically-validated domain knowledge. We find that augmenting hand-crafted features with deep-learned features improves our overall classification F1 score, compared to a baseline of hand-crafted features alone, from 0.58 to 0.63 for depression and from 0.54 to 0.57 for anxiety. The findings of our work suggest that speech-based biomarkers for depression and anxiety hold significant promise in the future of digital health.
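A minimal sketch of the fusion idea, with assumptions of our own: precomputed audio and text embeddings and hand-crafted features are simply concatenated and fed to a logistic-regression screener evaluated with F1. The paper's actual model is a deep multi-modal architecture rather than this shallow stand-in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def late_fusion_screen(audio_emb, text_emb, handcrafted, labels):
    """Concatenate deep-learned audio/text embeddings with hand-crafted
    language features and fit a simple binary screening classifier."""
    X = np.concatenate([audio_emb, text_emb, handcrafted], axis=1)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf, f1_score(y_te, clf.predict(X_te))   # held-out F1 for the screener
```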
This paper addresses kinodynamic motion planning for non-holonomic robots in dynamic environments with both static and dynamic obstacles -- a challenging problem that still lacks a universal solution. One promising approach is to decompose the problem into smaller sub-problems and combine the local solutions into a global one. The crux of any planning method for non-holonomic robots is the generation of motion primitives that provide solutions to the local planning sub-problems. In this work we introduce a novel learnable steering function (policy) that takes into account the kinodynamic constraints of the robot as well as both static and dynamic obstacles. This policy is efficiently trained via policy optimization. Empirically, we show that our steering function generalizes well to unseen problems. We then plug the trained policy into sampling-based and lattice-based planners and evaluate the resultant POLAMP algorithm (Policy Optimization that Learns Adaptive Motion Primitives) in a range of challenging setups that involve a car-like robot operating in obstacle-rich parking-lot environments. We show that POLAMP is able to plan collision-free kinodynamic trajectories with success rates higher than 92% when 50 simultaneously moving obstacles populate the environment, outperforming state-of-the-art competitors.
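The role of the learned steering function inside a sampling-based planner can be sketched as follows; `policy`, `dynamics`, and `collision_free` are assumed interfaces, and the rollout horizon and goal tolerance are illustrative values, not the paper's settings.

```python
import numpy as np

def policy_steer(policy, dynamics, collision_free, x_from, x_to, horizon=50):
    """Use a learned steering policy as the local planner (steer function) of
    an RRT-like algorithm: roll the policy out from x_from toward x_to under
    the robot's kinodynamic model, stopping on collision."""
    traj = [x_from]
    x = x_from
    for _ in range(horizon):
        u = policy(x, x_to)                  # learned steering action
        x = dynamics(x, u)                   # respects kinodynamic constraints
        if not collision_free(x):
            return None                      # edge rejected by the planner
        traj.append(x)
        if np.linalg.norm(x[:2] - x_to[:2]) < 0.5:   # illustrative goal tolerance
            break
    return traj                              # candidate edge for the global planner
```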
Most deep-learning-based continuous sign language recognition (CSLR) models share a similar backbone consisting of a visual module, a sequential module, and an alignment module. However, due to limited training samples, a connectionist temporal classification loss may not train such CSLR backbones sufficiently. In this work, we propose three auxiliary tasks to enhance the CSLR backbones. The first task enhances the visual module, which is sensitive to the insufficient training problem, from the perspective of consistency. Specifically, since the information of sign languages is mainly included in signers' facial expressions and hand movements, a keypoint-guided spatial attention module is developed to enforce the visual module to focus on informative regions, i.e., spatial attention consistency. Second, noticing that both the output features of the visual and sequential modules represent the same sentence, to better exploit the backbone's power, a sentence embedding consistency constraint is imposed between the visual and sequential modules to enhance the representation power of both features. We name the CSLR model trained with the above auxiliary tasks as consistency-enhanced CSLR, which performs well on signer-dependent datasets in which all signers appear during both training and testing. To make it more robust for the signer-independent setting, a signer removal module based on feature disentanglement is further proposed to remove signer information from the backbone. Extensive ablation studies are conducted to validate the effectiveness of these auxiliary tasks. More remarkably, with a transformer-based backbone, our model achieves state-of-the-art or competitive performance on five benchmarks, PHOENIX-2014, PHOENIX-2014-T, PHOENIX-2014-SI, CSL, and CSL-Daily.
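The sentence embedding consistency constraint can be sketched as a pooled-feature agreement loss; mean pooling and a cosine penalty are our illustrative choices and not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sentence_embedding_consistency(visual_feats, sequential_feats):
    """Consistency constraint between the visual and sequential modules:
    both feature sequences describe the same sentence, so their pooled
    sentence embeddings are pushed together.

    visual_feats, sequential_feats: (T, B, C) frame-level features."""
    v = F.normalize(visual_feats.mean(dim=0), dim=-1)    # (B, C) sentence embedding
    s = F.normalize(sequential_feats.mean(dim=0), dim=-1)
    return (1.0 - (v * s).sum(dim=-1)).mean()            # 1 - cosine similarity
```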
The xView2 competition and xBD dataset spurred significant advancements in overhead building damage detection, but the competition's pixel-level scoring can lead to reduced solution performance in areas with tight clusters of buildings or uninformative context. We seek to advance automatic building damage assessment for disaster relief by proposing an auxiliary challenge to the original xView2 competition. This new challenge involves a new dataset and metrics indicating solution performance when damage is more local and limited than in xBD. Our challenge measures a network's ability to identify individual buildings and their damage level without excessive reliance on the buildings' surroundings. Methods that succeed on this challenge will provide more fine-grained, precise damage information than original xView2 solutions. The best-performing xView2 networks' performances dropped noticeably in our new limited/local damage detection task. The common causes of failure observed are that (1) building objects and their classifications are not separated well, and (2) when they are, the classification is strongly biased by surrounding buildings and other damage context. Thus, we release our augmented version of the dataset with additional object-level scoring metrics (https://gitlab.kitware.com/dennis.melamed/xfbd) to test independence and separability of building objects, alongside the pixel-level performance metrics of the original competition. We also experiment with new baseline models which improve independence and separability of building damage predictions. Our results indicate that building damage detection is not a fully-solved problem, and we invite others to use and build on our dataset augmentations and metrics.
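To make the contrast with pixel-level scoring concrete, here is a sketch of what object-level scoring can look like, under our own assumptions (majority vote over a building's footprint pixels, macro F1 across buildings); the released metrics may differ in detail.

```python
import numpy as np
from sklearn.metrics import f1_score

def object_level_damage_score(pred_mask, gt_buildings):
    """Score damage classification per building object instead of per pixel.

    pred_mask:    (H, W) integer array of predicted damage classes (0 = background).
    gt_buildings: list of (pixel_indices, damage_class), where pixel_indices is a
                  (rows, cols) index tuple for the building footprint."""
    y_true, y_pred = [], []
    for pixels, damage in gt_buildings:
        votes = pred_mask[pixels]
        votes = votes[votes > 0]                        # ignore background predictions
        pred = np.bincount(votes).argmax() if votes.size else 0
        y_true.append(damage)
        y_pred.append(pred)
    return f1_score(y_true, y_pred, average="macro")    # one vote per building
```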
We investigate how humans perform the task of dubbing video content from one language into another, leveraging a novel corpus of 319.57 hours of video from 54 professionally produced titles. This is the first such large-scale study we are aware of. The results challenge a number of assumptions commonly made in both qualitative literature on human dubbing and machine-learning literature on automatic dubbing, arguing for the importance of vocal naturalness and translation quality over commonly emphasized isometric (character length) and lip-sync constraints, and for a more qualified view of the importance of isochronic (timing) constraints. We also find substantial influence of the source-side audio on human dubs through channels other than the words of the translation, pointing to the need for research on ways to preserve speech characteristics, as well as semantic transfer such as emphasis/emotion, in automatic dubbing systems.