A learned system uses machine learning (ML) internally to improve performance. We can expect such systems to be vulnerable to some adversarial-ML attacks. Often, the learned component is shared between mutually-distrusting users or processes, much like microarchitectural resources such as caches, potentially giving rise to highly-realistic attacker models. However, compared to attacks on other ML-based systems, attackers face a level of indirection as they cannot interact directly with the learned model. Additionally, the difference between the attack surface of learned and non-learned versions of the same system is often subtle. These factors obfuscate the de-facto risks that the incorporation of ML carries. We analyze the root causes of potentially-increased attack surface in learned systems and develop a framework for identifying vulnerabilities that stem from the use of ML. We apply our framework to a broad set of learned systems under active development. To empirically validate the many vulnerabilities surfaced by our framework, we choose three of them and implement and evaluate exploits against prominent learned-system instances. We show that the use of ML caused leakage of past queries in a database, enabled a poisoning attack that causes exponential memory blowup in an index structure and crashes it in seconds, and enabled index users to snoop on each other's key distributions by timing queries over their own keys. We find that adversarial ML is a universal threat against learned systems, point to open research gaps in our understanding of learned-systems security, and conclude by discussing mitigations, while noting that data leakage is inherent in systems whose learned component is shared between multiple parties.
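To make the third attack concrete, the sketch below illustrates the timing side channel on a toy learned index. It is an assumed simplification, not the paper's exploit: a single linear model stands in for the learned index, and the number of binary-search correction steps after the model's prediction stands in for query latency. Keys from a hypothetical victim cluster tightly, so the attacker's lookups over its own keys tend to need more correction near that region.

```python
import bisect
import numpy as np

# Toy sketch (not the paper's exploit): a learned index whose lookup cost depends on
# the model's prediction error, which in turn depends on the key distribution it was
# trained on -- so query timing can leak where another party's keys concentrate.
rng = np.random.default_rng(0)
victim_keys = rng.normal(70_000, 2_000, 50_000)    # hypothetical victim: tightly clustered keys
attacker_keys = rng.uniform(0, 100_000, 50_000)    # attacker: keys spread uniformly
keys = np.sort(np.concatenate([victim_keys, attacker_keys]))

# "Learned" model: a linear fit of position as a function of key (stand-in for an RMI).
positions = np.arange(len(keys))
slope, intercept = np.polyfit(keys, positions, 1)

def lookup_steps(key):
    """Binary-search correction steps from the predicted position (proxy for latency)."""
    predicted = int(np.clip(slope * key + intercept, 0, len(keys) - 1))
    true_pos = bisect.bisect_left(keys, key)
    return int(np.ceil(np.log2(abs(true_pos - predicted) + 2)))

# The attacker times lookups of its *own* keys per key range; ranges where the victim's
# keys concentrate distort the fitted model and tend to show higher correction cost.
for lo in range(0, 100_000, 20_000):
    probe = [k for k in attacker_keys if lo <= k < lo + 20_000][:500]
    print(f"keys in [{lo:6d},{lo + 20_000:6d}): avg steps = "
          f"{np.mean([lookup_steps(k) for k in probe]):.2f}")
```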
When learning from sensitive data, care must be taken to ensure that the training algorithm addresses privacy. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed) collection of teacher models via a voting mechanism. The mechanism adds noise to obtain a differential privacy guarantee with respect to the teachers' training data. In this work, we observe that this use of noise, which makes PATE predictions stochastic, enables a new form of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract a high-fidelity histogram of the votes submitted by the underlying teachers. From these histograms, the adversary can learn sensitive attributes of the input such as race, gender, or age. Although this attack does not directly violate the differential privacy guarantee, it clearly violates privacy norms and expectations, and it would not be possible at all without the noise inserted to obtain differential privacy. In fact, counter-intuitively, the attack becomes easier as we add more noise to provide stronger differential privacy. We hope this encourages future work to consider privacy holistically rather than treating differential privacy as a panacea.
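The toy simulation below illustrates the intuition (it is an assumed illustration, not the paper's attack): with a noisy-argmax aggregator, re-querying the same input and counting how often each label wins constrains the hidden teacher vote histogram, and the number of informative label "swaps" grows with the noise scale.

```python
import numpy as np

# Toy simulation of histogram extraction from a noisy-argmax PATE aggregator.
# The vote counts and noise scale are hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)
true_votes = np.array([55, 40, 5])   # hidden per-class teacher votes (hypothetical)
sigma = 40.0                         # Gaussian noise scale used for the DP guarantee

def win_frequencies(votes, n_queries):
    """Fraction of noisy-argmax queries won by each class."""
    noisy = votes + rng.normal(0.0, sigma, size=(n_queries, votes.size))
    return np.bincount(noisy.argmax(axis=1), minlength=votes.size) / n_queries

observed = win_frequencies(true_votes, 20_000)   # what the adversary measures

# Crude reconstruction: brute-force the vote histogram whose simulated win
# frequencies best match the observed ones.
n_teachers = int(true_votes.sum())
best, best_err = None, np.inf
for a in range(n_teachers + 1):
    for b in range(n_teachers + 1 - a):
        cand = np.array([a, b, n_teachers - a - b])
        err = np.abs(win_frequencies(cand, 2_000) - observed).sum()
        if err < best_err:
            best, best_err = cand, err

print("observed win frequencies:", observed)
print("reconstructed votes:", best, "  true votes:", true_votes)
```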
In federated learning (FL), data does not leave personal devices when they jointly train a machine learning model. Instead, the devices share gradients with a central party (e.g., a company). Because the data never "leaves" personal devices, FL is presented as privacy-preserving. Yet this protection has recently been shown to be a thin facade: even a passive attacker observing gradients can reconstruct individual users' data. In this paper, we argue that prior work still largely underestimates FL's vulnerability, because prior efforts consider exclusively passive attackers, who are honest-but-curious. Instead, we introduce an active and dishonest attacker acting as the central party, who is able to modify the shared model's weights before users compute model gradients. We call the modified weights "trap weights". Our active attacker is able to recover user data in full and at near-zero cost: the attack requires no complex optimization objectives. Instead, it exploits the inherent data leakage of model gradients and amplifies this effect by maliciously altering the shared model's weights. These properties enable our attack to scale to models trained on large mini-batches of data. Where attackers from prior work need hours to recover a single data point, our method needs milliseconds to capture the full mini-batch of data for both fully connected and convolutional deep neural networks. Finally, we consider mitigations. We observe that current implementations of differential privacy (DP) in FL are flawed, as they explicitly trust the central party with the crucial task of adding DP noise, and thus provide no protection against a malicious central party. We also consider other defenses and explain why they are similarly inadequate. FL would need to be redesigned to provide its users with any meaningful data privacy.
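As a rough illustration of the inherent gradient leakage the attack builds on (a simplified sketch, not the paper's full attack), the snippet below recovers a single user's input from the gradients of a fully connected layer: since grad_W = grad_z ⊗ x and grad_b = grad_z, any row of the weight gradient divided by the matching bias gradient yields x. The "trap weights" idea extends this to whole mini-batches by crafting the first layer's weights so that each example dominates its own neuron, keeping per-neuron gradients un-averaged.

```python
import torch

# Minimal sketch of passive gradient leakage from a fully connected layer.
# Network shapes and names are illustrative, not taken from the paper.
torch.manual_seed(0)
x = torch.rand(1, 784)                      # one user's private input (e.g. an image)
layer = torch.nn.Linear(784, 32)            # first shared layer
target = torch.randint(0, 10, (1,))

logits = torch.nn.Linear(32, 10)(torch.relu(layer(x)))
loss = torch.nn.functional.cross_entropy(logits, target)
loss.backward()

grad_W, grad_b = layer.weight.grad, layer.bias.grad
row = torch.argmax(grad_b.abs())            # any neuron with a nonzero gradient works
reconstruction = grad_W[row] / grad_b[row]  # equals x up to floating-point error
print("max reconstruction error:", (reconstruction - x[0]).abs().max().item())
```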
The widely studied task of Natural Language Inference (NLI) requires a system to recognize whether one piece of text is textually entailed by another, i.e. whether the entirety of its meaning can be inferred from the other. In current NLI datasets and models, textual entailment relations are typically defined on the sentence- or paragraph-level. However, even a simple sentence often contains multiple propositions, i.e. distinct units of meaning conveyed by the sentence. As these propositions can carry different truth values in the context of a given premise, we argue for the need to recognize the textual entailment relation of each proposition in a sentence individually. We propose PropSegmEnt, a corpus of over 35K propositions annotated by expert human raters. Our dataset structure resembles the tasks of (1) segmenting sentences within a document to the set of propositions, and (2) classifying the entailment relation of each proposition with respect to a different yet topically-aligned document, i.e. documents describing the same event or entity. We establish strong baselines for the segmentation and entailment tasks. Through case studies on summary hallucination detection and document-level NLI, we demonstrate that our conceptual framework is potentially useful for understanding and explaining the compositionality of NLI labels.
Large language models (LLMs) have shown impressive results across a variety of tasks while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios. We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in this setting. We propose and study Attributed QA as a key first step in the development of attributed LLMs. We develop a reproducible evaluation framework for the task, using human annotations as a gold standard and a correlated automatic metric that we show is suitable for development settings. We describe and benchmark a broad set of architectures for the task. Our contributions give some concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and give some hints as to how to address a third key question (How to build LLMs with attribution?).
In this paper, we introduce a novel network that generates semantic, instance, and part segmentation using a shared encoder and effectively fuses them to achieve panoptic-part segmentation. Unifying these three segmentation problems allows for mutually improved and consistent representation learning. To fuse the predictions of all three heads efficiently, we introduce a parameter-free joint fusion module that dynamically balances the logits and fuses them to create panoptic-part segmentation. Our method is evaluated on the Cityscapes Panoptic Parts (CPP) and Pascal Panoptic Parts (PPP) datasets. For CPP, the PartPQ of our proposed model with joint fusion surpasses the previous state-of-the-art by 1.6 and 4.7 percentage points for all areas and segments with parts, respectively. On PPP, our joint fusion outperforms a model using the previous top-down merging strategy by 3.3 percentage points in PartPQ and 10.5 percentage points in PartPQ for partitionable classes.
Action recognition models have achieved impressive results by incorporating scene-level annotations, such as objects, their relations, 3D structure, and more. However, obtaining annotations of scene structure for videos requires a significant amount of effort to gather and annotate, making these methods expensive to train. In contrast, synthetic datasets generated by graphics engines provide powerful alternatives for generating scene-level annotations across multiple tasks. In this work, we propose an approach to leverage synthetic scene data for improving video understanding. We present a multi-task prompt learning approach for video transformers, where a shared video transformer backbone is enhanced by a small set of specialized parameters for each task. Specifically, we add a set of ``task prompts'', each corresponding to a different task, and let each prompt predict task-related annotations. This design allows the model to capture information shared among synthetic scene tasks as well as information shared between synthetic scene tasks and a real video downstream task throughout the entire network. We refer to this approach as ``Promptonomy'', since the prompts model a task-related structure. We propose the PromptonomyViT model (PViT), a video transformer that incorporates various types of scene-level information from synthetic data using the ``Promptonomy'' approach. PViT shows strong performance improvements on multiple video understanding tasks and datasets.
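A minimal sketch of the prompt mechanism as described above (the dimensions, task names, and head design are assumptions, not the released PViT code): learnable per-task prompt tokens are appended to the video tokens, pass through the shared transformer backbone, and each prompt's output is decoded by a small task-specific head.

```python
import torch
import torch.nn as nn

# Sketch of task prompts on a shared video transformer; hyperparameters are placeholders.
class PromptedVideoTransformer(nn.Module):
    def __init__(self, dim=256, n_layers=4, task_dims={"depth": 1, "objects": 80}):
        super().__init__()
        # One learnable prompt token per synthetic-scene task.
        self.prompts = nn.ParameterDict(
            {t: nn.Parameter(torch.zeros(1, 1, dim)) for t in task_dims})
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.heads = nn.ModuleDict(
            {t: nn.Linear(dim, d) for t, d in task_dims.items()})

    def forward(self, video_tokens):                      # (batch, n_tokens, dim)
        b = video_tokens.size(0)
        names = list(self.prompts)
        prompt_tokens = torch.cat(
            [self.prompts[t].expand(b, -1, -1) for t in names], dim=1)
        out = self.backbone(torch.cat([video_tokens, prompt_tokens], dim=1))
        # The last len(names) tokens are the task prompts; decode each with its head.
        return {t: self.heads[t](out[:, -len(names) + i, :])
                for i, t in enumerate(names)}

preds = PromptedVideoTransformer()(torch.randn(2, 196, 256))
print({t: tuple(v.shape) for t, v in preds.items()})
```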
Object permanence is the concept that objects do not suddenly disappear in the physical world. Humans understand this concept at a young age and know that another person is still there even when temporarily occluded. Neural networks currently often struggle with this challenge. Thus, we introduce explicit object permanence into two-stage detection approaches, drawing inspiration from particle filters. At its core, our detector uses the predictions of previous frames as additional proposals for the current one at inference time. Experiments confirm that this feedback loop improves detection performance by up to 10.3 mAP with little computational overhead. Our approach is suited to extend two-stage detectors for stabilized and reliable detections even under heavy occlusion. Additionally, the ability to apply our method without retraining an existing model promises wide application in real-world tasks.
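A small sketch of the feedback loop under assumed interfaces (the function and variable names are illustrative, not the authors' code): detections kept from frame t-1 are appended to the region proposals for frame t, so a temporarily occluded object keeps being scored by the second stage.

```python
import torch

# Sketch: augment current-frame proposals with last frame's detections at inference time.
def proposals_with_feedback(rpn_proposals, prev_detections, score_thresh=0.5):
    """rpn_proposals: (N, 4) boxes from the RPN for the current frame.
    prev_detections: (M, 5) boxes + scores kept from the previous frame."""
    if prev_detections.numel() == 0:
        return rpn_proposals
    kept = prev_detections[prev_detections[:, 4] > score_thresh, :4]
    return torch.cat([rpn_proposals, kept], dim=0)

# Toy inference loop with stand-in tensors for RPN output and second-stage detections.
prev = torch.empty(0, 5)
for frame_idx in range(3):
    rpn = torch.rand(100, 4) * 640                               # stand-in RPN proposals
    proposals = proposals_with_feedback(rpn, prev)
    # ... run the second stage (RoI heads) on `proposals` here ...
    prev = torch.cat([torch.rand(5, 4) * 640, torch.rand(5, 1)], dim=1)  # stand-in detections
    print(f"frame {frame_idx}: {proposals.shape[0]} proposals")
```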
Foundation models (FMs) have demonstrated unprecedented capabilities, including zero-shot learning, high-fidelity data synthesis, and out-of-domain generalization. However, as we show in this paper, FMs still perform poorly out-of-the-box on expert tasks (e.g., retrieving car-manual technical illustrations from language queries), where the data is unseen or belongs to a long-tail part of the data distribution of the large datasets used for FM pre-training. This underlines the need to explicitly evaluate and fine-tune FMs on such expert tasks, which arguably are the most important tasks in practical real-world applications. In this paper, we propose a task built around teaching FMs to understand technical documentation, via learning to match their graphical illustrations to corresponding language descriptions. Our FETA benchmark focuses on text-to-image and image-to-text retrieval in public car manuals and sales catalogue brochures. FETA is equipped with a procedure for completely automatic annotation extraction (code will be released upon acceptance), allowing FETA to be easily extended in the future to more document types and application domains. Our automatic annotations lead to an automated performance metric shown to be consistent with metrics computed on human-curated annotations (also released). We provide multiple baselines and analyses of popular FMs on FETA, leading to several interesting findings that we believe are very valuable to the FM community, paving the way toward real-world applications of FMs to practical expert tasks that are currently "overlooked" by standard benchmarks focusing on common objects.
Traditional process mining techniques take event data as input, where each event is associated with exactly one object. An object represents an instantiation of the process. Object-centric event data contain events associated with multiple objects, expressing the interaction of multiple processes. Since traditional process mining techniques assume events associated with exactly one object, they cannot be applied to object-centric event data. To use traditional process mining techniques, object-centric event data are flattened by removing the object references. Flattening is lossy, leading to inaccurate features extracted from the flattened data. Furthermore, the graph-like structure of object-centric event data is lost when flattening. In this paper, we introduce a general framework for extracting and encoding features from object-centric event data. We compute features natively on object-centric event data, leading to accurate measures. Furthermore, we provide three encodings for these features: tabular, sequential, and graph-based. While tabular and sequential encodings have been used heavily in process mining, the graph-based encoding is a new technique that preserves the structure of object-centric event data. We provide six use cases: a visualization and a prediction use case for each of the three encodings. We use explainable AI in the prediction use cases to show the utility of both the object-centric features and the structure of the sequential and graph-based encodings for a predictive model.
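A small sketch of what a graph-based encoding can look like (one plausible reading of the framework, not the authors' implementation): events become nodes carrying feature values, and two events are linked when one directly follows the other for some shared object, so object interactions are kept rather than flattened away.

```python
import networkx as nx

# Toy object-centric event log: (event_id, activity, timestamp, objects involved).
# The activities and objects are hypothetical examples.
events = [
    ("e1", "place order", 0,  {"o1", "i1", "i2"}),
    ("e2", "pick item",   5,  {"i1"}),
    ("e3", "pick item",   7,  {"i2"}),
    ("e4", "ship order",  12, {"o1", "i1", "i2"}),
]

G = nx.DiGraph()
for eid, activity, ts, objs in events:
    # Node features computed natively on the object-centric data.
    G.add_node(eid, activity=activity, timestamp=ts, n_objects=len(objs))

# Directly-follows edges per object: connect consecutive events sharing that object
# (events are assumed to be listed in timestamp order).
all_objects = set().union(*(objs for *_, objs in events))
for obj in all_objects:
    trace = [eid for eid, *_, objs in events if obj in objs]
    for src, dst in zip(trace, trace[1:]):
        G.add_edge(src, dst, object=obj)

print(G.nodes(data=True))
print(G.edges(data=True))
```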