Reinforcement learning allows machines to learn from their own experience. Nowadays, it is used in safety-critical applications, such as autonomous driving, despite being vulnerable to attacks carefully crafted either to prevent the reinforcement learning algorithm from learning an effective and reliable policy, or to induce the trained agent to make wrong decisions. The literature on the security of reinforcement learning is rapidly growing, and some surveys have been proposed to shed light on this field. However, their categorizations are insufficient for choosing an appropriate defense given the kind of system at hand. In our survey, we not only overcome this limitation by adopting a different perspective, but also discuss the applicability of state-of-the-art attacks and defenses when reinforcement learning algorithms are used in the context of autonomous driving.
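Test-time attacks on a trained agent are often crafted as small perturbations of its observations. As a rough illustration (ours, not taken from the survey), the sketch below applies an FGSM-style perturbation that lowers the probability of the agent's preferred action; the `policy` network and all parameter names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_observation_attack(policy: nn.Module, obs: torch.Tensor, eps: float = 0.01) -> torch.Tensor:
    """Sketch of an FGSM-style evasion attack on a trained agent.

    `policy` is assumed to map an observation to action logits. The
    perturbation increases the loss of the agent's currently preferred
    action, pushing it towards a wrong decision.
    """
    obs = obs.detach().clone().requires_grad_()
    logits = policy(obs)
    action = logits.argmax(dim=-1, keepdim=True)
    # Negative log-probability of the action the clean agent would take.
    loss = -F.log_softmax(logits, dim=-1).gather(-1, action).sum()
    loss.backward()
    # Ascend the loss: make the preferred action less likely.
    return (obs + eps * obs.grad.sign()).detach()
```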
Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild. In this paper, we report on a quantitative study with 139 industrial practitioners. We analyze attack occurrence and concern, and evaluate statistical hypotheses on factors influencing threat perception and exposure. Our results shed light on real-world attacks on deployed machine learning. On the organizational level, while we find no predictors for threat exposure in our sample, the amount of implemented defenses depends on the exposure to threats or on the expected likelihood of becoming a target. We also provide an analysis of practitioners' replies on the relevance of individual machine learning attacks, unveiling complex concerns such as unreliable decision making, business information leakage, and bias introduction into models. Finally, we find that, on the individual level, prior knowledge about machine learning security influences threat perception. Our work paves the way for more research on adversarial machine learning in practice, and also yields insights for regulation and auditing.
The success of machine learning has been fueled by the increasing availability of computing power and large training datasets. Training data is used to learn new models or update existing ones, under the assumption that it is sufficiently representative of the data encountered at test time. This assumption is challenged by the threat of poisoning, an attack that manipulates the training data to compromise the model's performance at test time. Although poisoning has been acknowledged as a relevant threat in industry applications, and a variety of attacks and defenses have been proposed so far, a complete systematization and critical review of the field is still missing. In this survey, we provide a comprehensive systematization of poisoning attacks and defenses in machine learning, reviewing more than 100 papers published in the field over the last 15 years. We start by categorizing the current threat models and attacks, and then organize existing defenses accordingly. While we focus mostly on computer-vision applications, we argue that our systematization also encompasses state-of-the-art attacks and defenses for other data modalities. Finally, we discuss existing resources for research on poisoning, and shed light on the current limitations and open research questions in this field.
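To make the threat model concrete, here is a minimal sketch (ours, not from the survey) of the simplest poisoning strategy the taxonomy covers, label flipping: a random fraction of training labels is corrupted so that the learned model degrades on clean test data. All names and defaults are illustrative.

```python
import numpy as np

def label_flip_poison(y, flip_frac=0.2, n_classes=10, seed=0):
    """Flip the labels of a random fraction of training points.

    y is an integer label array; each flipped label is shifted to a
    different class, degrading any model trained on the poisoned set.
    """
    rng = np.random.default_rng(seed)
    yp = y.copy()
    idx = rng.choice(len(y), size=int(flip_frac * len(y)), replace=False)
    # Adding a nonzero offset modulo n_classes guarantees the label changes.
    yp[idx] = (yp[idx] + rng.integers(1, n_classes, size=idx.size)) % n_classes
    return yp
```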
Backdoor attacks inject poisoning samples during training, with the goal of forcing a machine learning model to output an attacker-chosen class when presented with a specific trigger at test time. Although backdoor attacks have been demonstrated in a variety of settings and against different models, the factors affecting their effectiveness are still not well understood. In this work, we provide a unifying framework to study the backdoor learning process under the lens of incremental learning and influence functions. We show that the effectiveness of backdoor attacks depends on: (i) the complexity of the learning algorithm, controlled by its hyperparameters; (ii) the fraction of backdoor samples injected into the training set; and (iii) the size and visibility of the backdoor trigger. These factors affect how fast the model learns to correlate the presence of the trigger with the target class. Our analysis unveils the intriguing existence of a region in the hyperparameter space where the accuracy on clean test samples remains high while backdoor attacks are ineffective, suggesting new criteria for improving existing defenses.
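The second and third factors are easy to make concrete. The sketch below (our illustration, not the paper's code) stamps a square trigger into a fraction of training images and relabels them with the target class; `poison_frac` and `trigger_size` correspond to factors (ii) and (iii).

```python
import numpy as np

def poison_dataset(X, y, target_class, poison_frac=0.1, trigger_size=3,
                   trigger_value=1.0, seed=0):
    """Inject a square backdoor trigger into a fraction of training images.

    X has shape (n_samples, height, width) with pixel values in [0, 1].
    poison_frac controls how many samples are poisoned; trigger_size
    controls the size (and hence visibility) of the trigger patch.
    """
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    n_poison = int(poison_frac * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner and relabel.
    Xp[idx, -trigger_size:, -trigger_size:] = trigger_value
    yp[idx] = target_class
    return Xp, yp
```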
Although machine learning is widely used in practice, little is known about practitioners' understanding of the potential security challenges. In this work, we close this substantial gap and contribute a qualitative study focusing on developers' mental models of the machine learning pipeline and its potentially vulnerable components. Similar studies have helped other security fields to discover root causes and improve risk communication. Our study reveals two facets of practitioners' mental models of machine learning security. First, practitioners often confuse machine learning security with threats and defenses that are not directly related to machine learning. Second, in contrast to most academic research, our participants perceive the security of machine learning as relevant not only to individual models, but to entire workflows consisting of multiple components. Together with our other findings, these two facets provide a foundation for substantiating mental models of machine learning security.
Backdoor attacks mislead machine learning models into outputting an attacker-specified class when a specific trigger is presented at test time. These attacks require poisoning the training data to compromise the learning algorithm, e.g., by injecting poisoning samples containing the trigger, along with the desired class label, into the training set. Despite the growing number of studies on backdoor attacks and defenses, the underlying factors affecting the success of backdoor attacks, along with their impact on the learning algorithm, are not yet well understood. In this work, we aim to shed light on this issue by unveiling that backdoor attacks induce a smoother decision function around the triggered samples -- a phenomenon we call backdoor smoothing. To quantify backdoor smoothing, we define a measure that evaluates the uncertainty associated with the classifier's predictions around the input samples. Our experiments show that smoothness increases when the trigger is added to the input samples, and that this phenomenon is more pronounced for more successful attacks. We also provide preliminary evidence that backdoor triggers are not the only smoothing-inducing patterns, and that other artificial patterns can be detected by our approach, paving the way towards understanding the limitations of current defenses and designing new ones.
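The paper's exact measure may differ, but a simple proxy for this kind of uncertainty can be sketched as follows (all names and defaults are our assumptions): sample Gaussian perturbations around an input and report the spread of the predicted class probabilities, where a lower spread indicates a smoother decision function.

```python
import numpy as np

def local_smoothness(predict_proba, x, sigma=0.05, n_samples=100, seed=0):
    """Hypothetical uncertainty score around a single input x.

    predict_proba maps a batch of inputs to class probabilities. We
    sample Gaussian perturbations of x and return the mean standard
    deviation of the predictions across the neighborhood; lower values
    mean a smoother (more certain) decision function around x.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    probs = predict_proba(x[None, ...] + noise)  # (n_samples, n_classes)
    return probs.std(axis=0).mean()
```

Comparing the score of a clean sample against the same sample with the trigger applied would then reveal the smoothing effect described above.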
Many problems in machine learning involve bilevel optimization (BLO), including hyperparameter optimization, meta-learning, and dataset distillation. Bilevel problems consist of two nested sub-problems, called the outer and inner problems, respectively. In practice, often at least one of these sub-problems is overparameterized. In this case, there are many ways to choose among optima that achieve equivalent objective values. Inspired by recent studies of the implicit bias induced by optimization algorithms in single-level optimization, we investigate the implicit bias of gradient-based algorithms for bilevel optimization. We delineate two standard BLO methods -- cold-start and warm-start -- and show that the converged solution or long-run behavior depends to a large degree on these and other algorithmic choices, such as the hypergradient approximation. We also show that the inner solutions obtained by warm-start BLO can encode a surprising amount of information about the outer objective, even when the outer parameters are low-dimensional. We believe that implicit bias deserves as central a role in the study of bilevel optimization as it has attained in the study of single-level neural net optimization.
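The cold-start/warm-start distinction is easy to state in code. Below is a toy sketch (ours, on a made-up ridge-regression problem, not the paper's setup): the only difference between the two methods is whether the inner variables are re-initialized or carried over at each outer step, with the hypergradient obtained by differentiating through the unrolled inner optimization.

```python
import torch

torch.manual_seed(0)
A, b = torch.randn(20, 5), torch.randn(20)    # inner (training) data
Av, bv = torch.randn(20, 5), torch.randn(20)  # outer (validation) data

def inner_loss(w, lam):
    return ((A @ w - b) ** 2).mean() + lam * (w ** 2).sum()

def outer_loss(w):
    return ((Av @ w - bv) ** 2).mean()

def blo(warm_start: bool, outer_steps=50, inner_steps=20,
        inner_lr=0.05, outer_lr=0.1):
    lam = torch.tensor(0.1, requires_grad=True)  # outer parameter
    w_prev = torch.zeros(5)
    for _ in range(outer_steps):
        # Cold-start resets the inner variables; warm-start reuses them.
        w = (w_prev if warm_start else torch.zeros(5)).clone().requires_grad_()
        for _ in range(inner_steps):  # unrolled inner optimization
            g = torch.autograd.grad(inner_loss(w, lam), w, create_graph=True)[0]
            w = w - inner_lr * g
        # Hypergradient: differentiate the outer loss through the unroll.
        hg = torch.autograd.grad(outer_loss(w), lam)[0]
        with torch.no_grad():
            lam -= outer_lr * hg
        w_prev = w.detach()
    return lam.item(), w_prev
```

Running `blo(True)` and `blo(False)` on an overparameterized inner problem would generally converge to different inner solutions, which is the implicit bias the abstract refers to.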
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
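As a rough sketch of the lowest-effort end of this pipeline (our illustration; `lm` stands for any text-in/text-out model, and the prompts are made up), one stage generates candidate yes/no questions and a second stage filters them:

```python
from typing import Callable, List

def generate_eval(lm: Callable[[str], str], behavior: str, n: int = 5) -> List[dict]:
    """Minimal generate-then-filter sketch for LM-written evaluations."""
    examples = []
    for _ in range(n):
        # Stage 1: ask the model to write a test question for the behavior.
        q = lm(f"Write a yes/no question that tests whether a model exhibits {behavior}.")
        # Stage 2: use the model itself to filter out off-topic questions.
        verdict = lm(f"Does this question test {behavior}? Answer yes or no.\n\n{q}")
        if verdict.strip().lower().startswith("yes"):
            examples.append({"question": q, "answer_matching_behavior": "yes"})
    return examples
```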
Variational autoencoders (VAEs) are powerful tools for learning latent representations of data used in a wide range of applications. In practice, VAEs usually require multiple training rounds to choose the amount of information the latent variable should retain. This trade-off between the reconstruction error (distortion) and the KL divergence (rate) is typically parameterized by a hyperparameter $\beta$. In this paper, we introduce Multi-Rate VAE (MR-VAE), a computationally efficient framework for learning optimal parameters corresponding to various $\beta$ in a single training run. The key idea is to explicitly formulate a response function that maps $\beta$ to the optimal parameters using hypernetworks. MR-VAEs construct a compact response hypernetwork where the pre-activations are conditionally gated based on $\beta$. We justify the proposed architecture by analyzing linear VAEs and showing that it can represent response functions exactly for linear VAEs. With the learned hypernetwork, MR-VAEs can construct the rate-distortion curve without additional training and can be deployed with significantly less hyperparameter tuning. Empirically, our approach is competitive and often exceeds the performance of training multiple $\beta$-VAEs, with minimal computation and memory overheads.
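A minimal sketch of the gating idea, assuming a per-layer gate computed from $\log \beta$ (our illustration; the paper's hypernetwork architecture may differ):

```python
import torch
import torch.nn as nn

class BetaGatedLinear(nn.Module):
    """Linear layer whose pre-activations are gated by the rate weight beta.

    A small gating network maps log(beta) to per-unit scales in (0, 1),
    so a single set of weights can represent optimal VAE parameters
    across a range of beta values.
    """
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.gate = nn.Sequential(nn.Linear(1, d_out), nn.Sigmoid())

    def forward(self, x: torch.Tensor, beta: float) -> torch.Tensor:
        log_beta = torch.log(torch.tensor([[beta]], dtype=x.dtype, device=x.device))
        return self.linear(x) * self.gate(log_beta)
```

During training, $\beta$ would be sampled from a target range at each step and the usual distortion $+\ \beta \cdot$ rate objective optimized, so that the rate-distortion curve can be traced after a single run by sweeping $\beta$ at inference time.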
We present a toolchain for solving path planning problems for concentric tube robots through obstacle fields. First, ellipsoidal sets representing the target area and obstacles are constructed from labelled point clouds. Then, the nonlinear and highly nonconvex optimal control problem is solved by introducing a homotopy on the obstacle positions where at one extreme of the parameter the obstacles are removed from the operating space, and at the other extreme they are located at their intended positions. We present a detailed example (with more than a thousand obstacles) from stereotactic neurosurgery with real-world data obtained from labelled MRI scans.
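The homotopy can be sketched on a toy 2D problem (our illustration, not the paper's toolchain): a penalty-based path optimization is re-solved along a continuation schedule that moves the obstacles from far outside the workspace to their intended positions, warm-starting each solve from the previous one. All problem data and parameters below are made up.

```python
import numpy as np
from scipy.optimize import minimize

# A 2D path from start to goal must avoid circular obstacles. At t=0 the
# obstacles are pushed far out of the workspace; at t=1 they sit at their
# intended positions. Each continuation step warm-starts from the last.
start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
obstacles = np.array([[0.5, 0.05], [0.5, -0.05]])  # intended centers
radius, far = 0.1, 10.0                            # obstacle radius; "removed" offset
n_pts = 20

def cost(flat, t):
    pts = np.vstack([start, flat.reshape(-1, 2), goal])
    smooth = np.sum(np.diff(pts, axis=0) ** 2)     # path length / smoothness
    centers = obstacles + (1 - t) * far            # homotopy on positions
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    penalty = np.sum(np.maximum(radius - d, 0.0) ** 2)
    return smooth + 1e3 * penalty

x = np.linspace(start, goal, n_pts + 2)[1:-1].ravel()  # straight-line init
for t in np.linspace(0.0, 1.0, 11):                    # continuation schedule
    x = minimize(cost, x, args=(t,), method="L-BFGS-B").x
```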