Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both theoretical and computational standpoints. We disentangle this complexity into two factors: (1) the approach used to remove feature information, and (2) the tractable estimation strategy. These two factors provide a natural lens through which we can better understand and compare 24 distinct algorithms. Based on the various feature removal approaches, we describe the multiple types of Shapley value feature attributions and the methods to calculate each one. Then, based on the tractable estimation strategies, we characterize two distinct families of approaches: model-agnostic and model-specific approximations. For the model-agnostic approximations, we benchmark a wide class of estimation approaches and tie them to alternative yet equivalent characterizations of the Shapley value. For model-specific approximations, we clarify the assumptions crucial to each method's tractability for linear, tree, and deep models. Finally, we identify gaps in the literature and promising future research directions.
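As a concrete illustration of one model-agnostic estimation strategy covered by this survey, below is a minimal sketch of the classic permutation-sampling estimator, with removed features marginalized by drawing replacement values from a background dataset (one possible feature-removal choice). The function and its interface are illustrative, not taken from the paper.

```python
import numpy as np

def shapley_permutation_estimate(model, x, background, n_perms=200, seed=0):
    """Monte Carlo permutation estimate of Shapley value feature attributions.

    Features outside the current coalition are "removed" by substituting
    values drawn from a background dataset (marginal-distribution removal).
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perms):
        order = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()  # all features removed
        prev = model(z[None, :])[0]
        for j in order:
            z[j] = x[j]                       # add feature j to the coalition
            curr = model(z[None, :])[0]
            phi[j] += curr - prev             # marginal contribution of feature j
            prev = curr
    return phi / n_perms
```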
The lack of standardization is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations due to differences in hardware and acquisition parameters. In recent years, MR harmonization using image synthesis with disentanglement has been proposed to compensate for the undesired contrast variations. Despite the success of existing methods, we argue that three major improvements can be made. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images must be available), which limits their applicability. Third, existing methods are generally sensitive to imaging artifacts. In this paper, we present a novel approach, Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), to address these three issues. We first propose an anatomy fusion module that enables HACA3 to respect the anatomical differences between MR contrasts. HACA3 is also robust to imaging artifacts and can be trained and applied to any set of MR contrasts. Experiments show that HACA3 achieves state-of-the-art performance under multiple image quality metrics. We also demonstrate the applicability of HACA3 on downstream tasks with diverse MR datasets acquired from 21 sites with different field strengths, scanner platforms, and acquisition protocols.
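A minimal sketch of what an attention-based fusion of contrast-specific anatomical encodings could look like is below; this is a hypothetical illustration of the general idea, not the HACA3 module itself.

```python
import torch.nn as nn

class AnatomyFusion(nn.Module):
    """Hypothetical attention-based fusion of contrast-specific anatomical
    encodings; an illustration of the idea, not the HACA3 module itself."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, anatomies):                       # (batch, n_contrasts, dim)
        weights = self.score(anatomies).softmax(dim=1)  # attend over contrasts
        return (weights * anatomies).sum(dim=1)         # fused anatomy encoding
```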
Vision models often fail systematically on groups of data that share common semantic characteristics (e.g., rare objects or unusual scenes), but identifying these failure modes is a challenge. We introduce AdaVision, an interactive process for testing vision models which helps users identify and fix coherent failure modes. Given a natural language description of a coherent group, AdaVision retrieves relevant images from LAION-5B with CLIP. The user then labels a small amount of data for model correctness, which is used in successive retrieval rounds to hill-climb towards high-error regions, refining the group definition. Once a group is saturated, AdaVision uses GPT-3 to suggest new group descriptions for the user to explore. We demonstrate the usefulness and generality of AdaVision in user studies, where users find major bugs in state-of-the-art classification, object detection, and image captioning models. These user-discovered groups have failure rates 2-3x higher than those surfaced by automatic error clustering methods. Finally, finetuning on examples found with AdaVision fixes the discovered bugs when evaluated on unseen examples, without degrading in-distribution accuracy, and while also improving performance on out-of-distribution datasets.
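The retrieval step at the core of this loop is straightforward to sketch: embed the group description and candidate images with CLIP, then rank images by cosine similarity. The sketch below uses the open-source open_clip package and assumes candidate images are already stored locally; the model choice and helper names are assumptions, not AdaVision's implementation.

```python
import torch
import open_clip
from PIL import Image

# Model choice is an assumption; any CLIP checkpoint would do.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

def rank_images(description, image_paths, top_k=20):
    """Rank locally stored candidate images by similarity to a group description."""
    with torch.no_grad():
        text = model.encode_text(tokenizer([description]))
        text = text / text.norm(dim=-1, keepdim=True)
        images = torch.stack(
            [preprocess(Image.open(p).convert("RGB")) for p in image_paths])
        feats = model.encode_image(images)
        feats = feats / feats.norm(dim=-1, keepdim=True)
        scores = (feats @ text.T).squeeze(-1)
    order = scores.argsort(descending=True)[:top_k]
    return [(image_paths[i], scores[i].item()) for i in order]
```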
In this paper, we explore the use of metric learning to embed Windows PE files in a low-dimensional vector space for downstream use in a variety of applications, including malware detection, family classification, and malware attribute tagging. Specifically, we enrich labeling on malicious and benign PE files using computationally expensive, disassembly-based malicious capabilities. Using these capabilities, we derive several different types of metric embeddings utilizing an embedding neural network trained via contrastive loss, Spearman rank correlation, and combinations thereof. We then examine performance on a variety of transfer tasks performed on the EMBER and SOREL datasets, demonstrating that for several tasks, low-dimensional, computationally efficient metric embeddings maintain performance with little decay, which offers the potential to quickly retrain for a variety of transfer tasks at significantly reduced storage overhead. We conclude with an examination of practical considerations for the use of our proposed embedding approach, such as robustness to adversarial evasion and introduction of task-specific auxiliary objectives to improve performance on mission critical tasks.
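To make the embedding setup concrete, here is a hedged sketch of an embedding network trained with a classic pairwise contrastive loss over static PE feature vectors, where pairs sharing a capability label are pulled together and other pairs pushed apart. The architecture, dimensions, and margin are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Maps high-dimensional static PE features to a low-dimensional metric
    space. 2381 matches the EMBER feature vector size; the rest is assumed."""
    def __init__(self, in_dim=2381, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                 nn.Linear(512, out_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z1, z2, same_label, margin=0.5):
    """Pairwise contrastive loss: pull matching pairs together and push
    non-matching pairs apart up to a margin (same_label is 0/1 valued)."""
    dist = (z1 - z2).norm(dim=-1)
    pos = same_label * dist.pow(2)
    neg = (1.0 - same_label) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()
```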
Though semantic segmentation has been heavily explored in vision literature, unique challenges remain in the remote sensing domain. One such challenge is how to handle resolution mismatch between overhead imagery and ground-truth label sources, due to differences in ground sample distance. To illustrate this problem, we introduce a new dataset and use it to showcase weaknesses inherent in existing strategies that naively upsample the target label to match the image resolution. Instead, we present a method that is supervised using low-resolution labels (without upsampling), but takes advantage of an exemplar set of high-resolution labels to guide the learning process. Our method incorporates region aggregation, adversarial learning, and self-supervised pretraining to generate fine-grained predictions, without requiring high-resolution annotations. Extensive experiments demonstrate the real-world applicability of our approach.
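One way to realize supervision with low-resolution labels, without upsampling them, is to aggregate high-resolution predictions down to the label grid before computing the loss. The PyTorch sketch below illustrates this region-aggregation idea; the pooling and loss choices are assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def low_res_supervision_loss(logits_hr, labels_lr):
    """Aggregate high-resolution class probabilities over each low-resolution
    label cell, then apply cross-entropy at the label's native resolution."""
    probs = logits_hr.softmax(dim=1)                  # (B, C, H, W)
    h_lr, w_lr = labels_lr.shape[-2:]
    # Region aggregation: average probabilities within each coarse cell.
    probs_lr = F.adaptive_avg_pool2d(probs, (h_lr, w_lr))
    return F.nll_loss(probs_lr.clamp_min(1e-8).log(), labels_lr)
```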
Current approaches for fixing systematic problems in NLP models (e.g. regex patches, finetuning on more data) are either brittle, or labor-intensive and liable to shortcuts. In contrast, humans often provide corrections to each other through natural language. Taking inspiration from this, we explore natural language patches -- declarative statements that allow developers to provide corrective feedback at the right level of abstraction, either overriding the model (``if a review gives 2 stars, the sentiment is negative'') or providing additional information the model may lack (``if something is described as the bomb, then it is good''). We model the task of determining if a patch applies separately from the task of integrating patch information, and show that with a small amount of synthetic data, we can teach models to effectively use real patches on real data -- 1 to 7 patches improve accuracy by ~1-4 accuracy points on different slices of a sentiment analysis dataset, and F1 by 7 points on a relation extraction dataset. Finally, we show that finetuning on as many as 100 labeled examples may be needed to match the performance of a small set of language patches.
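The two-stage decomposition described above (one model deciding whether a patch's condition applies, another integrating the patch's information) can be sketched abstractly; the scoring functions below are hypothetical placeholders for the finetuned models.

```python
def predict_with_patches(example, patches, gate, interpreter, base_model,
                         threshold=0.5):
    """Two-stage use of natural language patches (names are hypothetical).

    gate(example, patch)        -> probability that the patch's condition holds
    interpreter(example, patch) -> prediction with the patch's info integrated
    """
    for patch in patches:
        if gate(example, patch) > threshold:      # does the condition apply?
            return interpreter(example, patch)    # use the patched prediction
    return base_model(example)                    # otherwise fall back
```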
The Upper Indus Basin, Himalayas, provides water for 270 million people and countless ecosystems. However, precipitation, a key component of hydrological modelling, is poorly understood in this region. A key challenge surrounding this uncertainty comes from the complex spatio-temporal distribution of precipitation across the basin. In this work, we propose Gaussian processes with structured non-stationary kernels to model the precipitation patterns in the UIB. Previous attempts to quantify or model precipitation in the Hindu Kush Karakoram Himalayan region have often been qualitative or included crude assumptions and simplifications that cannot be resolved at lower resolutions. This body of work has also provided little to no error propagation. We account for the spatial variation in precipitation with a non-stationary Gibbs kernel parameterised with an input-dependent lengthscale. This allows the posterior function samples to adapt to the varying precipitation patterns inherent in the distinct underlying topography of the Indus region. The input-dependent lengthscale is governed by a latent Gaussian process with a stationary squared-exponential kernel, allowing the function-level hyperparameters to vary smoothly. In ablation experiments, we motivate each component of the proposed kernel by demonstrating its ability to model spatial covariance, temporal structure, and joint spatio-temporal reconstruction. We benchmark our model against a stationary Gaussian process and a deep Gaussian process.
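For reference, the non-stationary Gibbs kernel mentioned above has a closed form. A minimal one-dimensional sketch, with a user-supplied lengthscale function standing in for the latent Gaussian process, is:

```python
import numpy as np

def gibbs_kernel(x1, x2, lengthscale_fn):
    """Non-stationary Gibbs kernel with input-dependent lengthscale l(x):
        k(x, x') = sqrt(2 l(x) l(x') / (l(x)^2 + l(x')^2))
                   * exp(-(x - x')^2 / (l(x)^2 + l(x')^2))
    In the paper l(x) is governed by a latent GP; here it is any positive
    user-supplied function, purely for illustration."""
    l1 = lengthscale_fn(x1)[:, None]              # shape (n, 1)
    l2 = lengthscale_fn(x2)[None, :]              # shape (1, m)
    sq_sum = l1**2 + l2**2
    prefactor = np.sqrt(2.0 * l1 * l2 / sq_sum)
    return prefactor * np.exp(-((x1[:, None] - x2[None, :])**2) / sq_sum)

# Example: longer lengthscales at larger inputs.
# K = gibbs_kernel(x, x, lambda s: 0.5 + 0.1 * np.abs(s))
```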
We consider risk-averse learning in repeated unknown games, where the agents' goal is to minimize their individual risk of incurring significantly high costs. Specifically, the agents use the conditional value at risk (CVaR) as a risk measure and rely on bandit feedback, in the form of the cost values of the actions selected at each episode, to estimate their CVaR values and update their actions. A major challenge in using bandit feedback to estimate CVaR is that the agents can only access their own cost values, which, however, depend on the actions of all agents. To address this challenge, we propose a new risk-averse learning algorithm that utilizes the full historical information on the cost values. We show that this algorithm achieves sub-linear regret and matches the best known algorithms in the literature. We provide numerical experiments on a Cournot game that show our method outperforms existing approaches.
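For reference, CVaR at level alpha is the expected cost in the worst alpha-fraction of outcomes, which has a simple empirical estimator from observed cost samples; the sketch below is illustrative, not the paper's exact estimator.

```python
import numpy as np

def empirical_cvar(costs, alpha=0.05):
    """Empirical conditional value at risk: the mean of the worst
    alpha-fraction of observed costs (those above the (1 - alpha)-quantile)."""
    costs = np.asarray(costs)
    var = np.quantile(costs, 1.0 - alpha)   # value at risk
    return costs[costs >= var].mean()       # expected cost in the tail
```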
Heating and cooling systems in buildings account for 31% of global energy use, much of which is regulated by rule-based controllers (RBCs) that neither maximise energy efficiency nor minimise emissions by interacting optimally with the grid. Control via reinforcement learning (RL) has been shown to significantly improve building energy efficiency, but existing solutions require access to building-specific simulators or data that cannot be expected for every building in the world. In response, we show that emission-reducing policies can be obtained without such prior knowledge, a paradigm we call zero-shot building control. We combine ideas from system identification and model-based RL to create PEARL (Probabilistic Emission-Abating Reinforcement Learning), and show that a short period of active exploration is all that is required to build a performant model. In experiments across three varied building energy simulations, we show that PEARL outperforms the existing RBC and popular RL baselines in all cases, reducing building emissions by 31% while maintaining thermal comfort. Our source code is available online at https://enjeener.io/projects/pearl.
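At a high level, the explore-then-model-then-plan recipe described above can be sketched as follows, assuming a gym-style environment interface; all names are illustrative placeholders rather than the PEARL implementation.

```python
def zero_shot_building_control(env, explore_steps, control_steps, horizon,
                               fit_model, plan):
    """Explore-then-model-then-plan loop (gym-style env assumed).

    fit_model(transitions)    -> a (probabilistic) dynamics model
    plan(model, obs, horizon) -> action minimizing emissions under comfort
    """
    # 1. Short period of active exploration to gather system-identification data.
    transitions, obs = [], env.reset()
    for _ in range(explore_steps):
        action = env.action_space.sample()   # or an information-seeking policy
        next_obs, cost, done, _ = env.step(action)
        transitions.append((obs, action, next_obs, cost))
        obs = env.reset() if done else next_obs

    # 2. Fit the dynamics model once, with no building-specific priors.
    model = fit_model(transitions)

    # 3. Control by re-planning against the learned model at every step.
    for _ in range(control_steps):
        action = plan(model, obs, horizon)
        obs, cost, done, _ = env.step(action)
        if done:
            obs = env.reset()
```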
Innovation is a major driver of economic and social development, and information about many kinds of innovation is embedded in the semi-structured data of patents and patent applications. Although the impact and novelty of the innovation expressed in patent data are difficult to measure through traditional means, ML offers a promising set of techniques for evaluating novelty, summarizing contributions, and embedding semantics. In this paper, we introduce the Harvard USPTO Patent Dataset (HUPD), a large-scale, well-structured, and multi-purpose corpus of English-language patent applications filed to the United States Patent and Trademark Office (USPTO) between 2004 and 2018. With more than 4.5 million patent documents, HUPD is two to three times larger than comparable corpora. Unlike patent datasets previously proposed in NLP, HUPD contains the inventor-submitted versions of patent applications (not the final versions of granted patents), which allows us to study patentability at the time of filing using NLP methods for the first time. It is also novel in containing rich structured metadata alongside the patent application text: by providing each application's metadata along with all of its text fields, the dataset enables researchers to perform new sets of NLP tasks that leverage variation in structured covariates. As a case study of the kinds of research HUPD makes possible, we introduce a new task to the NLP community, namely binary classification of patent decisions. We also show that the structured metadata provided in the dataset enables us to conduct explicit studies of concept shift for this task. Finally, we demonstrate how HUPD can be used for three additional tasks: multi-class classification of patent subject areas, language modeling, and summarization.
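As a hedged sketch of the patent-decision classification task, the snippet below assumes the dataset is distributed on the Hugging Face Hub under the ID HUPD/hupd with a "decision" field; the Hub ID, config name, and field values are assumptions, not guaranteed by this abstract.

```python
from datasets import load_dataset

# Hub ID, config name, and field values below are assumptions.
ds = load_dataset("HUPD/hupd", name="sample", split="train",
                  trust_remote_code=True)

# Binary patent-decision classification: keep only resolved applications.
ds = ds.filter(lambda ex: ex["decision"] in {"ACCEPTED", "REJECTED"})
ds = ds.map(lambda ex: {"label": int(ex["decision"] == "ACCEPTED")})
# Each example also carries structured metadata (e.g., filing date), which
# supports the concept-shift studies described in the abstract.
```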