We argue that the attempt to build morality into machines is subject to what we call the Interpretation Problem: any rule we give the machine is open to infinite interpretation in ways that we might morally disapprove of. The interpretation problem in Artificial Intelligence is an illustration of Wittgenstein's general claim that no rule can contain the criteria for its own application. Using games as an example, we attempt to define the structure of normative spaces and argue that any rule-following within a normative space is guided by values that are external to that space and which cannot themselves be represented as rules. In light of this problem, we analyse the types of mistakes an artificial moral agent could make, and we make suggestions about how to build morality into machines: by getting them to interpret the rules we give in accordance with these external values, through explicit moral reasoning and the presence of structured values, the adjustment of the causal power assigned to the agent, and interaction with human agents, such that the machine develops a virtuous character and the impact of the interpretation problem is minimised.
If future AI systems are to be reliably safe in novel situations, they will need to incorporate general principles guiding them to robustly recognise which outcomes and behaviours would be harmful. Such principles may need to be supported by a binding system of regulation, which in turn requires widely accepted foundational principles. They should also be specific enough for technical implementation. Drawing inspiration from law, this paper explains how negative human rights could fulfil the role of such principles and serve as a foundation both for an international regulatory regime and for building technical safety constraints into future AI systems.
The value alignment problem for artificial intelligence (AI) asks how we can ensure that the "values" (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication (natural language) is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems, or, more cautiously, to design robustly beneficial or ethical artificial agents.
Machine ethics has received increasing attention over the past few years because of the need to ensure safe and reliable artificial intelligence (AI). The two dominant theories used in machine ethics are deontological and utilitarian ethics. Virtue ethics, on the other hand, has often been mentioned as an alternative ethical theory. While this interesting approach has certain advantages over the more popular ethical theories, little effort has gone into engineering artificial virtuous agents, owing to the challenges of formalising and codifying virtue ethics and of resolving moral dilemmas in order to train virtuous agents. We propose to bridge this gap by using role-playing games riddled with moral dilemmas. There are several such games, for example Papers, Please and Life is Strange, in which the main character encounters situations where they must choose the right course of action by giving up something else they hold dear. We draw inspiration from such games to show how a systematic role-playing game can be designed to develop virtues in an artificial agent. Using modern AI techniques such as affinity-based reinforcement learning and explainable AI, we motivate virtuous agents that play such role-playing games, and the examination of their decisions through a virtue-ethics lens. The development of such agents and environments is a first step towards practically formalising and demonstrating the value of virtue ethics in the development of ethical agents.
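To give a flavour of what the affinity-based reward shaping mentioned above could look like in such a role-playing setting, here is a minimal sketch; the action names, virtues, affinity scores and weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: shape a game-playing agent's reward by how strongly its
# chosen action aligns with a set of target virtues (all values hypothetical).
VIRTUE_AFFINITY = {
    "share_supplies":  {"compassion": 0.9, "honesty": 0.1},
    "forge_documents": {"compassion": 0.2, "honesty": -0.8},
}
VIRTUE_WEIGHTS = {"compassion": 1.0, "honesty": 1.0}

def shaped_reward(game_reward: float, action: str, beta: float = 0.5) -> float:
    """Combine the game's own reward with a virtue-alignment bonus."""
    affinity = sum(VIRTUE_WEIGHTS[v] * a
                   for v, a in VIRTUE_AFFINITY[action].items())
    return game_reward + beta * affinity

print(shaped_reward(1.0, "share_supplies"))   # 1.5
print(shaped_reward(1.0, "forge_documents"))  # 0.7
```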
In this meta-study, we explore three distinct perspectives on the design and implementation of morality in artificial intelligence (AI): a philosophical-ethics perspective, a technical perspective, and a framing through a political lens. Our qualitative study includes a literature review that highlights the cross-referencing of these perspectives by discussing the merits and drawbacks of the previously published contrasting top-down, bottom-up and hybrid approaches. The novel contribution to this framework is the political perspective, which frames ethics in AI either as decided by corporations and governments and imposed on people through policy or law (from the top), or as the ethics people themselves demand (emerging from the bottom); alongside it sit the top-down, bottom-up and hybrid technical approaches, that is, the ways AI is developed within ethical constructs and with consideration for its users, together with the intended and unintended consequences and long-term impact on the world. Reinforcement learning is emphasised as a bottom-up applied technical approach, and AI ethics principles as a practical top-down approach. The survey includes real-world case studies, grounded in historical facts, the current state of the world and its consequent realities, in order to conduct a philosophical debate on the ethics of AI and thought experiments about its theoretical future.
We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), leveraged as an expression of how humans communicate their goals, and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power - and thus not a perfect aggregation of citizen preferences - if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.
Artificial intelligence (AI) systems are increasingly involved in decisions that affect our lives, and ensuring that automated decision-making is fair and ethical has become a top priority. Intuitively, we feel that, much like human decisions, the judgements of artificial agents should necessarily be grounded in some moral principles. Yet a decision-maker (whether human or artificial) can only make truly ethical (according to any ethical theory) and fair (according to any notion of fairness) decisions if full information on all relevant factors underlying the decision is available at decision time. This raises two problems: (1) in settings where we rely on AI systems that use classifiers obtained through supervised learning, some induction/generalisation is inevitable, and some relevant attributes may not even be present during learning; and (2) modelling these decisions as games reveals that any pure strategy, however ethical, is inevitably susceptible to exploitation. Moreover, in many games a Nash equilibrium, i.e. a mathematically optimal outcome, can only be achieved by using mixed strategies, which means decisions must be randomised. In this paper, we argue that in supervised-learning settings there exist randomised classifiers that perform at least as well as deterministic ones, and that may therefore be the optimal choice in many circumstances. We support our theoretical results with an empirical study indicating positive societal attitudes towards randomised artificial decision-makers, and we discuss some policy and implementation issues related to the use of randomised classifiers in the context of current AI policy and standardisation initiatives.
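To make the contrast between deterministic and randomised decision-making concrete, the sketch below shows a classifier wrapper that samples its decision from the predicted label distribution (a mixed strategy) rather than always returning the most probable label (a pure strategy); the function names and toy probabilities are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def deterministic_decision(probs: np.ndarray) -> int:
    """Always pick the most probable label (a pure strategy)."""
    return int(np.argmax(probs))

def randomized_decision(probs: np.ndarray) -> int:
    """Sample the label from the predicted distribution (a mixed strategy)."""
    return int(rng.choice(len(probs), p=probs))

# Toy example: a classifier that is 60% confident in label 1.
probs = np.array([0.4, 0.6])
print(deterministic_decision(probs))                    # always 1
print([randomized_decision(probs) for _ in range(10)])  # a mix of 0s and 1s
```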
An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behaviour of the autonomous system. We address the challenge of how the ethical views of such stakeholders can be integrated into the behaviour of the autonomous system. We propose an ethical recommendation component, which we call Jiminy, that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. Jiminy represents the ethical views of each stakeholder using normative systems, and has three ways of resolving moral dilemmas involving the stakeholders' opinions. First, Jiminy considers how the stakeholders' arguments relate to one another, which may already resolve the dilemma. Second, Jiminy combines the stakeholders' normative systems, so that the stakeholders' combined expertise may resolve the dilemma. Third, and only if these two other methods fail, Jiminy uses context-sensitive rules to decide which stakeholder takes priority. At the abstract level, these three methods are characterised by the addition of arguments, the addition of attacks between arguments, and the addition of further attacks between arguments. We show that Jiminy can be used not only for ethical reasoning and collaborative decision-making, but also for providing explanations about ethical behaviour.
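As a rough illustration of the formal-argumentation machinery the abstract refers to, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework, i.e. the set of arguments that can be accepted because all of their attackers are eventually defeated; the example framework and argument names are invented for illustration and are not taken from Jiminy itself.

```python
from typing import Dict, Set

def grounded_extension(attacks: Dict[str, Set[str]]) -> Set[str]:
    """Compute the grounded extension of an abstract argumentation framework.

    `attacks[a]` is the set of arguments attacked by argument `a`.
    Starting from the unattacked arguments, keep accepting every argument
    whose attackers are all defeated by already-accepted arguments,
    until a fixed point is reached.
    """
    arguments = set(attacks)
    accepted: Set[str] = set()
    while True:
        defeated = {b for a in accepted for b in attacks[a]}
        newly_accepted = {
            a for a in arguments
            if all(attacker in defeated
                   for attacker in arguments if a in attacks[attacker])
        }
        if newly_accepted == accepted:
            return accepted
        accepted = newly_accepted

# Hypothetical stakeholder arguments: "safety" attacks "speed", so only
# "safety" and "regulation" end up accepted.
framework = {"safety": {"speed"}, "speed": set(), "regulation": set()}
print(grounded_extension(framework))  # {'safety', 'regulation'}
```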
There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to make their algorithms more understandable. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a 'good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people employ certain cognitive biases and social expectations in the explanation process. This paper argues that the field of explainable artificial intelligence should build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
We take a closer look at morality and attempt to extract insights in the form of abstract properties that might become tools. We want to connect morality to games, discuss the performance of morality, bring curiosity into the interplay between competition and well-coordinated ethics, and offer a view of possible developments that might unify the aggregation of entities. All of this sits under the long shadow cast by computational complexity, which is rather bad news for games. This analysis is a first step in the search for modelling aspects that might be useful in AI ethics for integrating modern AI systems into human society.
Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind that AI may be integral to improving humanity's long-term potential, there is some concern that building ever smarter and more powerful AI systems could eventually result in systems that are more powerful than we are; some say this is like playing with fire and speculate that it could create existential risks (x-risks). To add precision to these discussions, we review a collection of time-tested concepts from hazard analysis and systems safety that are designed to steer large processes in safer directions. We then discuss how AI researchers can have a real long-term impact on the safety of AI systems. Finally, we discuss how to robustly shape the processes that will affect the balance between safety and general capabilities.
This volume contains revised versions of the papers selected for the third volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open access and curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI.
The Multi-value Action Reasoning System (MARS) is an automated, value-based model of ethical decision-making for artificial agents (AI). Given a set of available actions and an underlying moral paradigm, MARS can be used to identify the morally preferred action(s). It can be used to implement and model different ethical theories and different moral paradigms, as well as combinations of these, in the context of automated practical reasoning and normative decision analysis. It can also be used to model moral dilemmas and to discover the moral paradigms that lead to desired outcomes. In this paper, we give a condensed description of MARS, explain its uses, and situate it within the existing literature.
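A minimal sketch of value-based action selection in the spirit of what is described here; the scoring rule, value weights and example actions are illustrative assumptions rather than the actual MARS formalism.

```python
from typing import Dict, List

def preferred_actions(actions: Dict[str, Dict[str, float]],
                      paradigm: Dict[str, float]) -> List[str]:
    """Return the action(s) with the highest weighted value score.

    Each action lists how strongly it promotes (+) or demotes (-) each value;
    a "moral paradigm" is sketched as a weight per value.
    """
    scores = {
        name: sum(paradigm.get(value, 0.0) * effect
                  for value, effect in effects.items())
        for name, effects in actions.items()
    }
    best = max(scores.values())
    return [name for name, score in scores.items() if score == best]

# Hypothetical dilemma: telling the truth promotes honesty but demotes kindness.
actions = {
    "tell_truth": {"honesty": 1.0, "kindness": -0.5},
    "white_lie":  {"honesty": -1.0, "kindness": 1.0},
}
paradigm = {"honesty": 0.3, "kindness": 1.0}
print(preferred_actions(actions, paradigm))  # ['white_lie']
```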
As AI systems become more powerful and more pervasive, there is growing debate about how to keep their behaviour aligned with the broader goals and needs of humanity. This multi-disciplinary and multi-stakeholder debate must resolve many issues; here we examine three of them. The first issue is to clarify what stakeholders might want, which can benefit the designers of AI systems insofar as the technology to achieve it exists. We make this technical topic more accessible by using the framework of cognitive architectures. The second issue is to move beyond an analytical framework that treats useful intelligence as reward maximisation alone. To support this move, we define several AI cognitive architectures that combine reward maximisation with other technical elements designed to improve alignment. The third issue is how stakeholders should calibrate their interactions with modern machine-learning researchers. We consider how fashions in machine learning create a narrative pull of which participants in technical and policy discussions should be aware, so that they can compensate for it. We identify several technically feasible but currently unfashionable options for improving AI alignment.
The deployment of socially intelligent agents (SIAs) in learning environments has proven to bring several advantages across different application domains. Social-agent authoring tools allow scenario designers to create tailored experiences with a high degree of control over the SIAs' behaviour; on the other hand, this comes at a cost, as the complexity of the scenarios and of their authoring can become overwhelming. In this paper we introduce the concept of explainable social-agent authoring tools, with the aim of analysing whether authoring tools for social agents are understandable and explainable. To this end, we examine whether the authoring tool FAtiMA-Toolkit is understandable and whether its authoring steps are explainable from the authors' perspective. We conducted two user studies to quantitatively assess the explainability, comprehensibility and transparency of FAtiMA-Toolkit from the perspective of scenario designers. One of the key findings is that the conceptual model of FAtiMA-Toolkit is generally understandable, but its emotion-based concepts are not as easy to understand and use. Although there are some positive aspects regarding the explainability of FAtiMA-Toolkit, progress is still needed to achieve a fully explainable social-agent authoring tool. We provide a set of key concepts and possible solutions that can guide developers in building such tools.
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm's use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court. This Essay examines these concerns through the lens of American antidiscrimination law, more particularly through Title VII's prohibition of discrimination in employment.
Existing efforts to formulate computational definitions of fairness have focused primarily on distributional notions of equality, where equality is defined in terms of the resources or decisions allocated by the system. Yet existing discrimination and injustice are often the result of unequal social relations rather than an unequal distribution of resources. Here, we show that optimising for existing computational and economic definitions of fairness and equality fails to prevent unequal social relations. To do so, we give an example of a simple hiring market with a self-confirming equilibrium that is relationally unequal yet satisfies existing distributional notions of fairness. In the process, we introduce a notion of blatant relational unfairness, defined for complete-information games, and discuss how this definition can help initiate a new approach to incorporating relational equality into computational systems.
The recent hype surrounding the sophistication of language-processing models has fuelled optimism that machines are acquiring a human-like command of natural language. The field of natural language understanding in artificial intelligence claims to have made great strides in this area; however, the lack of conceptual clarity in how "understanding" is used in this and other disciplines makes it difficult to discern how close we actually are. A comprehensive, interdisciplinary overview of current approaches and remaining challenges has yet to be carried out. Beyond linguistic knowledge, this requires taking into account our species-specific capacities to pair, memorise, label and communicate our (sufficiently similar) embodied and situated experiences. Moreover, gauging the practical constraints requires a rigorous analysis of the technical capabilities of current models, together with deeper philosophical reflection on the theoretical possibilities and limitations. In this paper, I bring all of these perspectives (philosophical, cognitive-linguistic and technical) together to unpack the challenges involved in reaching true (human-like) language understanding. By untangling the theoretical assumptions inherent in current approaches, I hope to illustrate how far we actually are from achieving this goal, if indeed it is the goal.
We review the literature on models that attempt to explain human behaviour in the social interactions described by normal-form games with monetary payoffs. We first cover social and moral preferences. We then focus on the growing body of research showing that people react to the language used to describe actions, especially when it activates moral concerns. Finally, we argue that behavioural economics is in the midst of a paradigm shift towards language-based preferences, which will require exploring new models and experimental setups.