Regulation of advanced technologies such as Artificial Intelligence (AI) has become increasingly important, given the associated risks and ethical issues at stake. With the great benefits promised by being the first to supply such technologies, safety precautions and societal consequences might be ignored or short-changed in exchange for speeding up development, engendering a racing narrative among developers. Starting from a game-theoretical model describing an idealised technology race in a well-mixed world of players, we investigate how different interaction structures among race participants can alter collective choices and the requirements for regulatory action. Our findings indicate that, when participants exhibit strong diversity in terms of connections and peer influence (e.g., when scale-free networks shape the interactions among parties), the conflicts present in homogeneous settings are significantly reduced, thereby lessening the need for regulatory action. Moreover, our results suggest that technology governance and regulation may profit from the evident heterogeneity and inequality among firms and nations, enabling carefully targeted interventions on a minority of participants that are capable of steering an entire population towards an ethical and sustainable use of advanced technologies.
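To make the setup concrete, here is a minimal sketch of an imitation-dynamics race contrasting a homogeneous (complete) graph with a scale-free one. The payoff function, the parameters `B`, `C`, `P_DISASTER`, and the Fermi update rule are stylised assumptions for illustration, not the paper's actual model.

```python
# Illustrative sketch (not the paper's exact model): SAFE-vs-UNSAFE race
# dynamics on a complete graph versus a scale-free network.
import math
import random
import networkx as nx

B, C, P_DISASTER, BETA = 4.0, 1.0, 0.6, 1.0  # benefit, safety cost, risk, selection strength

def payoff(strategy, neighbours, strategies):
    """Average race payoff against neighbours: UNSAFE developers skip the
    safety cost but risk losing everything with probability P_DISASTER."""
    total = 0.0
    for j in neighbours:
        speed_i = 1.0 if strategy == "UNSAFE" else 0.5
        speed_j = 1.0 if strategies[j] == "UNSAFE" else 0.5
        win = speed_i / (speed_i + speed_j)          # chance of supplying first
        gain = win * B - (0.0 if strategy == "UNSAFE" else C)
        if strategy == "UNSAFE":
            gain *= (1.0 - P_DISASTER)               # disaster wipes out gains
        total += gain
    return total / max(len(neighbours), 1)

def simulate(graph, rounds=2000):
    strategies = {i: random.choice(["SAFE", "UNSAFE"]) for i in graph}
    for _ in range(rounds):
        i = random.choice(list(graph))
        neighbours = list(graph[i])
        if not neighbours:
            continue
        j = random.choice(neighbours)                # imitate a random neighbour
        pi = payoff(strategies[i], graph[i], strategies)
        pj = payoff(strategies[j], graph[j], strategies)
        if random.random() < 1.0 / (1.0 + math.exp(-BETA * (pj - pi))):  # Fermi rule
            strategies[i] = strategies[j]
    return sum(s == "SAFE" for s in strategies.values()) / len(strategies)

random.seed(0)
print("SAFE share, complete graph:", simulate(nx.complete_graph(100)))
print("SAFE share, scale-free:    ", simulate(nx.barabasi_albert_graph(100, 2)))
```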
We take a closer look at morality and attempt to extract insights in the form of abstract properties that might become tools. We aim to connect morality with games, discuss the performance of morality, bring curiosity into the interplay between competition and well-coordinated ethics, and offer a view of possible developments that might unify the aggregation of entities. All of this is cast under the long shadow of computational complexity, which is rather bad news for games. The analysis is a first step toward identifying modelling aspects that may prove useful in AI ethics for integrating modern AI systems into human society.
Multi-agent artificial intelligence research promises a path to develop intelligent technologies that are more human-like and more human-compatible than those produced by "solipsistic" approaches, which do not consider interactions between agents. Melting Pot is a research tool developed to facilitate work on multi-agent artificial intelligence, and provides an evaluation protocol that measures generalization to novel social partners in a set of canonical test scenarios. Each scenario pairs a physical environment (a "substrate") with a reference set of co-players (a "background population"), to create a social situation with substantial interdependence between the individuals involved. For instance, some scenarios were inspired by institutional-economics-based accounts of natural resource management and public-good-provision dilemmas. Others were inspired by considerations from evolutionary biology, game theory, and artificial life. Melting Pot aims to cover a maximally diverse set of interdependencies and incentives. It includes the commonly studied extreme cases of perfectly competitive (zero-sum) motivations and perfectly cooperative (shared-reward) motivations, but does not stop with them. As in real life, a clear majority of scenarios in Melting Pot have mixed incentives. They are neither purely competitive nor purely cooperative and thus demand successful agents be able to navigate the resulting ambiguity. Here we describe Melting Pot 2.0, which revises and expands on Melting Pot. We also introduce support for scenarios with asymmetric roles, and explain how to integrate them into the evaluation protocol. This report also contains: (1) details of all substrates and scenarios; (2) a complete description of all baseline algorithms and results. Our intention is for it to serve as a reference for researchers using Melting Pot 2.0.
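A conceptual sketch of the evaluation protocol may help: a focal agent is scored against a held-out background population it never trained with. The code below deliberately does not use the real meltingpot API; the two-action stage game stands in for a substrate, and the policies are hypothetical stand-ins for a background population and a focal agent.

```python
# Conceptual sketch of the evaluation protocol only - NOT the real meltingpot
# API. The stage game and policies are hypothetical stand-ins.
import statistics

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 4),   # a mixed-incentive stage
          ("D", "C"): (4, 0), ("D", "D"): (1, 1)}   # game, as in most scenarios

def tit_for_tat(opponent_history):                  # background co-player
    return opponent_history[-1] if opponent_history else "C"

def always_cooperate(opponent_history):             # focal agent under test
    return "C"

def evaluate(focal, background, episodes=100, steps=20):
    """Mean focal per-capita return. Because the background policy is held
    out (the focal agent never trained against it), the score probes
    generalisation to novel social partners, as in Melting Pot."""
    totals = []
    for _ in range(episodes):
        focal_hist, back_hist, total = [], [], 0.0
        for _ in range(steps):
            a = focal(back_hist)        # focal sees the background's past moves
            b = background(focal_hist)  # background sees the focal's past moves
            focal_hist.append(a)
            back_hist.append(b)
            total += PAYOFF[(a, b)][0]
        totals.append(total / steps)
    return statistics.mean(totals)

print("focal per-capita return:", evaluate(always_cooperate, tit_for_tat))
```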
Artificial intelligence (AI) systems are increasingly involved in decisions that affect our lives, so ensuring that automated decision-making is fair and ethical has become a top priority. Intuitively, we feel that, akin to human decisions, the judgments of artificial agents should necessarily be grounded in some moral principles. Yet a decision-maker (whether human or artificial) can only make truly ethical (based on any ethical theory) and fair (according to any notion of fairness) decisions if full information on all the relevant factors on which the decision is based is available at the time of decision-making. This raises two problems: (1) in settings where we rely on AI systems that use classifiers obtained via supervised learning, some induction/generalisation is inevitable, and some relevant attributes may not even be present during learning; (2) modelling such decisions as games reveals that any pure strategy, however ethical, is inevitably susceptible to exploitation. Moreover, in many games a Nash equilibrium can only be achieved with mixed strategies, i.e., to achieve mathematically optimal outcomes, decisions must be randomised. In this paper we argue that in supervised learning settings there exist randomised classifiers that perform at least as well as deterministic ones, and that these may therefore be the optimal choice in many circumstances. We support our theoretical results with an empirical study indicating a positive societal attitude towards randomised artificial decision-makers, and discuss some policy and implementation issues surrounding the use of randomised classifiers that are relevant to current AI policy and standardisation initiatives.
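The paper's central object, a randomised classifier, is easy to sketch: instead of always returning the argmax label, it samples a label from the predicted class probabilities. The dataset and model below are arbitrary choices for illustration.

```python
# Minimal sketch: deterministic (argmax) vs. randomised (sampled) decisions
# from the same probabilistic model. Dataset and model are arbitrary here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)                   # shape (n, 2)

deterministic = proba.argmax(axis=1)                # pure strategy: argmax
rng = np.random.default_rng(0)
randomised = np.array([rng.choice(2, p=p) for p in proba])  # mixed strategy

print("deterministic accuracy:", (deterministic == y_te).mean())
print("randomised accuracy:   ", (randomised == y_te).mean())
# Unlike the argmax rule, the randomised decision-maker cannot be perfectly
# anticipated, which is what makes pure strategies exploitable in the
# game-theoretic reading of the abstract.
```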
Artificial Intelligence (AI) is one of the most transformative technologies of the 21st century. The extent and scope of future AI capabilities remain a key uncertainty, with widespread disagreement on timelines and potential impacts. As nations and technology companies race toward greater complexity and autonomy in AI systems, there are concerns over the extent of integration and oversight of opaque AI decision processes. This is especially true in the subfield of machine learning (ML), where systems learn to optimize objectives without human assistance. Objectives can be imperfectly specified or executed in an unexpected or potentially harmful way. This becomes more concerning as systems increase in power and autonomy, where an abrupt capability jump could result in unexpected shifts in power dynamics or even catastrophic failures. This study presents a hierarchical complex systems framework to model AI risk and provide a template for alternative futures analysis. Survey data were collected from domain experts in the public and private sectors to classify AI impact and likelihood. The results show increased uncertainty over the powerful AI agent scenario, confidence in multiagent environments, and increased concern over AI alignment failures and influence-seeking behavior.
The Game Theory and Multi-Agent team at DeepMind studies several aspects of multi-agent learning, ranging from computing approximations to fundamental concepts in game theory, to simulating social dilemmas in rich spatial environments and training 3-D humanoids in difficult team coordination tasks. A signature aim of our group is to use the resources and expertise available at DeepMind in deep reinforcement learning to explore multi-agent systems in complex environments, and to use these benchmarks to advance our understanding. Here, we summarise our team's recent work and present a taxonomy that we feel highlights many important open challenges in multi-agent research.
In this chapter, we provide an overview of data-driven and theory-informed complex models of social networks and their potential for understanding societal inequality and marginalisation. We focus on networks and network-based algorithms and how they affect the inequalities experienced by minority groups. In particular, we examine how homophily and mixing biases shape both large and small social networks, influence the perception of minorities, and affect collaboration patterns. We also discuss dynamical processes on and of networks, as well as the formation of norms and health inequalities. Furthermore, we argue that network modelling is paramount for uncovering the effect of ranking and social recommendation algorithms on the visibility of minority groups. Finally, we highlight the main challenges and future opportunities in this emerging research topic.
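As a toy illustration of one such mechanism, the sketch below grows a network with degree-weighted attachment plus a single homophily parameter `h` and reports the minority's share of the top-degree nodes. The growth rule and parameter values are simplifying assumptions for illustration, not a model taken from the chapter.

```python
# Toy sketch: preferential attachment with homophily, and its effect on the
# degree ranking (visibility) of a minority group. All parameters assumed.
import random

def grow_network(n=2000, m=2, f=0.2, h=0.8, seed=0):
    rng = random.Random(seed)
    group = [1 if rng.random() < f else 0 for _ in range(n)]  # 1 = minority
    degree = [0] * n
    max_deg = 0
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            old = rng.randrange(new)                # candidate node
            same = group[new] == group[old]
            weight = (degree[old] + 1) * (h if same else 1 - h)
            if rng.random() < weight / (max_deg + 1):   # rejection sampling
                targets.add(old)
        for old in targets:
            degree[new] += 1
            degree[old] += 1
            max_deg = max(max_deg, degree[old], degree[new])
    top = sorted(range(n), key=lambda i: -degree[i])[: n // 10]
    return sum(group[i] for i in top) / len(top)    # minority share of top 10%

for h in (0.2, 0.5, 0.8):   # heterophilic, neutral, homophilic
    print(f"h={h}: minority share of top-degree nodes = {grow_network(h=h):.2f}")
```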
Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage the long-tail risks from AI systems, including speculative long-term risks. Keeping in mind that AI may be integral to raising humanity's long-term potential, there is some concern that building ever smarter and more powerful AI systems could eventually result in systems that are more powerful than us; some say this is like playing with fire and speculate that it could create existential risks (x-risks). To add precision to these discussions, we review a collection of time-tested concepts from hazard analysis and systems safety that have been designed to steer large processes in safer directions. We then discuss how AI researchers can have long-term impacts on the safety of AI systems. Finally, we discuss how to robustly shape the processes that will affect the balance between safety and general capabilities.
Human-robot interaction and game theory have developed distinct theories of trust over three decades of work in relative isolation from one another. Human-robot interaction has focused on the underlying dimensions, layers, correlates, and antecedents of trust models, while game theory has concentrated on the psychology and strategies behind singular trust decisions. Both fields have struggled to understand over-trust and trust calibration, as well as how to measure trust expectations, risk, and vulnerability. This paper presents initial steps toward closing the gap between these fields. Drawing on insights and experimental findings from interdependence theory and social psychology, this work begins by analysing a large game-theoretic competition dataset, demonstrating that the strongest predictors across a wide variety of human trust interactions are the interdependence-derived variables of commitment and trust that we developed. It then presents a second study with human subjects on a more realistic trust scenario involving both human-human and human-machine trust. In both the competition data and our experimental data, we show that the interdependence metrics capture the social 'overlay' of trust interactions better than the rational or normative psychological reasoning proposed by game theory. This work further explores how interdependence theory, with its focus on commitment, coercion, and cooperation, addresses many of the proposed underlying constructs and antecedents of human-robot trust, shedding new light on key similarities and differences that arise in trust interactions when robots replace humans.
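One interdependence-theory staple that such analyses build on is Kelley and Thibaut's decomposition of a 2x2 outcome matrix into actor, partner, and joint control; the sketch below applies it to a prisoner's dilemma. The paper's own commitment and trust variables are richer and are not reproduced here.

```python
# Sketch of Kelley & Thibaut's interdependence decomposition of a 2x2 game.
def decompose(payoffs):
    """payoffs[(my_move, partner_move)] -> my outcome, moves in {'C','D'}.
    Returns how much my outcome is controlled by me, my partner, and us jointly."""
    cc, cd = payoffs[("C", "C")], payoffs[("C", "D")]
    dc, dd = payoffs[("D", "C")], payoffs[("D", "D")]
    actor_control = ((cc + cd) - (dc + dd)) / 2     # effect of my own choice
    partner_control = ((cc + dc) - (cd + dd)) / 2   # effect of partner's choice
    joint_control = ((cc + dd) - (cd + dc)) / 2     # interaction of both choices
    return actor_control, partner_control, joint_control

# Row player of a prisoner's dilemma: partner control dominates actor control,
# which is exactly the dependence that makes trust relevant.
pd = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}
print(decompose(pd))   # (-1.0, 3.0, 0.0)
```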
There is a substantial and ever-growing body of evidence and literature exploring the impacts of artificial intelligence (AI) technologies on society, politics, and humanity as a whole. A separate, parallel body of work has explored existential risks to humanity, including but not limited to the risk from unaligned artificial general intelligence (AGI). In this paper, we argue that current and near-term AI technologies have the potential to contribute to existential risk by acting as intermediate risk factors, and that this potential is not limited to unaligned-AGI scenarios. We propose the hypothesis that certain already-documented effects of AI can act as existential risk factors, magnifying the likelihood of previously identified sources of existential risk. Moreover, future developments in the coming decade hold the potential to significantly exacerbate these risk factors, even in the absence of artificial general intelligence. Our main contribution is a (non-exhaustive) exposition of potential AI risk factors and the causal relationships between them, focusing on how AI can affect power dynamics and information security. This exposition demonstrates that there exist causal pathways from AI systems to existential risks that do not presuppose hypothetical future AI capabilities.
Monte Carlo Tree Search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarise the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
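A compact UCT implementation on a toy game shows the four phases the survey organises its material around: selection, expansion, rollout, and backpropagation. The game (Nim: take 1 to 3 stones, taking the last stone wins) and the exploration constant are illustrative choices, not prescriptions from the survey.

```python
# Compact UCT sketch on toy Nim. The four canonical MCTS phases are marked.
import math
import random

def moves(stones):
    return list(range(1, min(3, stones) + 1))

class Node:
    def __init__(self, stones, player, parent=None):
        self.stones, self.player = stones, player   # player = whose turn it is
        self.parent, self.children = parent, []
        self.untried = moves(stones)
        self.visits, self.wins = 0, 0.0             # wins for the player who just moved

    def ucb_child(self, c=1.4):                     # UCB1 selection rule
        return max(self.children, key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(stones, player):
    while stones > 0:
        stones -= random.choice(moves(stones))
        player = 1 - player
    return 1 - player                               # the mover who emptied the pile won

def mcts(stones, player, iters=3000):
    root = Node(stones, player)
    for _ in range(iters):
        node = root
        while not node.untried and node.children:   # 1. selection
            node = node.ucb_child()
        if node.untried:                            # 2. expansion
            take = node.untried.pop(random.randrange(len(node.untried)))
            node = Node(node.stones - take, 1 - node.player, parent=node)
            node.parent.children.append(node)
        winner = rollout(node.stones, node.player)  # 3. rollout
        while node:                                 # 4. backpropagation
            node.visits += 1
            node.wins += winner == 1 - node.player  # credit the player who moved here
            node = node.parent
    best = max(root.children, key=lambda ch: ch.visits)
    return root.stones - best.stones                # stones to take

random.seed(0)
print("from 10 stones, take:", mcts(10, player=0))  # optimal play takes 10 % 4 = 2
```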
We review the literature on models that attempt to explain human behaviour in social interactions described by normal-form games with monetary payoffs. We first cover social and moral preferences. We then focus on the growing body of research showing that people react to the language with which actions are described, especially when it activates moral concerns. Finally, we argue that behavioural economics is in the midst of a paradigm shift towards language-based preferences, which will require the exploration of new models and experimental setups.
In August 2021, the Santa Fe Institute convened a workshop on collective intelligence as part of its Foundations of Intelligence project. The project seeks to advance the field of artificial intelligence by promoting interdisciplinary research on the nature of intelligence. The workshop brought together computer scientists, biologists, philosophers, social scientists, and others to share their insights about how intelligence can emerge from interactions among multiple agents, whether those agents be machines, animals, or human beings. In this report, we summarise each of the talks and the subsequent discussions. We also draw out a number of key themes and identify important frontiers for future research.
Languages are powerful solutions to coordination problems: they provide stable, shared expectations about how the words we say correspond to the beliefs and intentions in our heads. Yet language use in a variable and non-stationary social environment requires linguistic representations to be flexible: old words acquire new ad hoc or partner-specific meanings on the fly. In this paper, we introduce CHAI (Continual Hierarchical Adaptation through Inference), a hierarchical Bayesian theory of coordination and convention formation that aims to reconcile the long-standing tension between these two fundamental observations. We argue that the central computational problem of communication is not simply transmission, as in classical formulations, but continual learning and adaptation over multiple timescales. Partner-specific common ground quickly emerges from social inferences within dyadic interactions, while community-wide social conventions are stable priors that have been abstracted away from interactions with multiple partners. We present new empirical data alongside simulations showing how our model provides a computational foundation for several phenomena that have challenged previous accounts: (1) the convergence to more efficient referring expressions across repeated interactions with the same partner, (2) the transfer of partner-specific common ground to strangers, and (3) the influence of communicative context on which conventions eventually form.
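The two-timescale idea can be caricatured with a Beta-Bernoulli toy: fast within-dyad updates build partner-specific expectations, and only their summary is slowly folded into a community-level prior that transfers to strangers. This is an illustrative stand-in, not CHAI's actual hierarchical model; the learning-rate constant and update scheme are assumptions.

```python
# Toy two-timescale sketch of the CHAI idea (not the paper's actual model).
import random

community = [1.0, 1.0]  # Beta(alpha, beta) prior: "word w picks out referent r"

def interact(partner_convention, rounds=20, seed=0):
    rng = random.Random(seed)
    alpha, beta = community          # a stranger starts from the community prior
    trace = []
    for _ in range(rounds):
        success = rng.random() < partner_convention   # did w refer to r this time?
        alpha, beta = alpha + success, beta + (1 - success)
        trace.append(alpha / (alpha + beta))          # partner-specific belief
    return alpha, beta, trace

LEARNING_RATE = 0.2   # slow timescale: how much one partner shifts the community prior
for partner in range(5):
    alpha, beta, trace = interact(partner_convention=0.9, seed=partner)
    community[0] += LEARNING_RATE * (alpha - community[0])
    community[1] += LEARNING_RATE * (beta - community[1])
    print(f"partner {partner}: end-of-dyad belief {trace[-1]:.2f}, "
          f"prior for next stranger {community[0] / sum(community):.2f}")
```

Within each dyad the belief converges quickly, while the community prior drifts slowly across partners, which is the tension between flexibility and stability the abstract describes.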
We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), when leveraged as an expression of how humans communicate their goals and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power - and thus not a perfect aggregation of citizen preferences - if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.
The optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. They constitute the final, and much-anticipated, cornerstone of AI regulation in the EU. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels effect in AI regulation, with significant consequences for the US and other countries. This paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).
Machine learning has significantly enhanced the capabilities of robots, enabling them to perform a broad range of tasks in human environments and adapt to our uncertain real world. Recent work in various machine learning domains has highlighted the importance of accounting for fairness to ensure that these algorithms do not reproduce human biases and consequently lead to discriminatory outcomes. As robot learning systems perform more and more tasks in our everyday lives, it is crucial to understand the influence of such biases in order to prevent unintended behaviour toward certain groups of people. In this work, we present the first survey on fairness in robot learning from an interdisciplinary perspective spanning technical, ethical, and legal challenges. We propose a taxonomy of bias sources and the resulting types of discrimination. Using examples from different robot learning domains, we examine scenarios of unfair outcomes and strategies for mitigating them. We present early advances in the field by covering different fairness definitions, ethical and legal considerations, and methods for fair robot learning. With this work, we aim to pave the way for groundbreaking developments in fair robot learning.
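Two of the fairness definitions such surveys typically cover can be computed in a few lines; the synthetic groups, labels, and deliberately biased predictor below are made-up inputs for illustration.

```python
# Sketch of two common fairness definitions on synthetic data (all made up).
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 1000)            # protected attribute (0/1)
y_true = rng.integers(0, 2, 1000)           # ground truth
y_pred = (rng.random(1000) < 0.4 + 0.2 * group).astype(int)  # biased predictor

def demographic_parity_gap(pred, grp):
    """|P(pred=1 | A=0) - P(pred=1 | A=1)|: equal positive rates across groups."""
    return abs(pred[grp == 0].mean() - pred[grp == 1].mean())

def equalized_odds_gap(pred, true, grp):
    """Max gap in true-positive and false-positive rates across groups."""
    gaps = []
    for label in (0, 1):                    # FPR when label=0, TPR when label=1
        rates = [pred[(grp == g) & (true == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equalized odds gap:    ", equalized_odds_gap(y_pred, y_true, group))
```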
Recent advances in deep reinforcement learning (RL) have led to considerable progress in many 2-player zero-sum games, such as Go, Poker, and StarCraft. The purely adversarial nature of such games allows for conceptually simple and principled applications of RL methods. However, real-world settings are many-agent, and agent interactions are complex mixtures of common-interest and competitive aspects. We consider Diplomacy, a 7-player board game designed to accentuate dilemmas resulting from many-agent interactions. It also features a large combinatorial action space and simultaneous moves, which are challenging for RL algorithms. We propose a simple yet effective approximate best response operator, designed to handle large combinatorial action spaces and simultaneous moves. We also introduce a family of policy iteration methods that approximate fictitious play. With these methods we successfully apply RL to Diplomacy: we show that our agents convincingly outperform the previous state of the art, and game-theoretic equilibrium analysis shows that the new process yields consistent improvements.
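Fictitious play itself is simple to state on a matrix game: each player repeatedly best-responds to the opponent's empirical action frequencies. The rock-paper-scissors sketch below shows the loop that the paper approximates at scale with neural policies and sampled best responses; it is not the Diplomacy agent.

```python
# Minimal fictitious play on rock-paper-scissors (numpy only).
import numpy as np

# Row player's payoffs; the game is zero-sum, so column payoffs are -A.
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)

counts = [np.ones(3), np.ones(3)]       # empirical action counts per player
for _ in range(20000):
    beliefs = [c / c.sum() for c in counts]
    br_row = np.argmax(A @ beliefs[1])          # best response to column's empirics
    br_col = np.argmax(-A.T @ beliefs[0])       # best response to row's empirics
    counts[0][br_row] += 1
    counts[1][br_col] += 1

print("empirical strategies:", [np.round(c / c.sum(), 3) for c in counts])
# Both players' empirical strategies approach the uniform Nash equilibrium
# (1/3, 1/3, 1/3), as fictitious play guarantees in zero-sum games.
```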
While the demand for ethical artificial intelligence (AI) systems increases, the number of unethical uses of AI is accelerating, even though there is no shortage of ethical guidelines. We argue that a possible underlying cause is that AI developers face a social dilemma in AI development ethics, which prevents the widespread adoption of ethical best practices. We define the social dilemma of AI development and describe why the current crisis in AI development ethics cannot be solved without relieving AI developers of their social dilemma. We argue that AI development must be professionalised to overcome the social dilemma, and discuss how medicine can serve as a template in this process.
Teams are central to human accomplishment. Over the past half-century, psychologists have identified five cross-culturally valid personality variables: Neuroticism, Extraversion, Openness, Conscientiousness, and Agreeableness. The first four show consistent relationships with team performance. Agreeableness (being harmonious, selfless, humble, and cooperative), however, has shown a non-significant and highly variable relationship with team performance. We resolve this inconsistency through computational modelling. An agent-based model (ABM) is used to predict the effects of personality traits on teamwork, and a genetic algorithm is then used to explore the limits of the ABM in order to discover which traits correlate with the best- and worst-performing teams on a problem with different levels of uncertainty (noise). The new dependencies revealed by this exploration are corroborated by analysing previously unobserved data from one of the largest team-performance datasets to date, comprising 3,698 individuals in 593 teams working on more than 5,000 group tasks with and without uncertainty, collected over a 10-year period. Our finding is that the dependency between team performance and Agreeableness is moderated by task uncertainty. Combining evolutionary computation with ABMs in this way offers a new methodology for the scientific study of teamwork, makes new predictions, and improves our understanding of human behaviour. Our results confirm the potential usefulness of computational modelling for developing theory, and shed light on the future of teams as work environments become increasingly fluid and uncertain.
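The method pairs an agent-based model with a genetic algorithm, which can be caricatured in a few dozen lines: a toy estimation task under tunable noise, and a GA searching over team agreeableness profiles. Everything below (the discussion dynamics, the fitness function, the GA settings) is a stylised assumption, not the paper's model or data.

```python
# Heavily stylised ABM + GA sketch, not the paper's actual model.
import random

def team_performance(agreeableness, noise, rng, truth=10.0, rounds=3):
    """Agents privately estimate a hidden value, then discuss: agreeable
    members move toward the group mean, disagreeable ones hold firm.
    Score is the negated mean individual error after discussion."""
    est = [truth + rng.gauss(0, noise) for _ in agreeableness]
    for _ in range(rounds):
        mean = sum(est) / len(est)
        est = [e + a * (mean - e) for e, a in zip(est, agreeableness)]
    return -sum(abs(e - truth) for e in est) / len(est)

def evolve(noise, pop=60, gens=40, team_size=5, seed=1):
    rng = random.Random(seed)
    teams = [[rng.random() for _ in range(team_size)] for _ in range(pop)]
    fitness = lambda t: sum(team_performance(t, noise, rng) for _ in range(10))
    for _ in range(gens):
        parents = sorted(teams, key=fitness, reverse=True)[: pop // 4]
        teams = [[min(1.0, max(0.0, g + rng.gauss(0, 0.1)))   # mutate parent genes
                  for g in rng.choice(parents)] for _ in range(pop)]
    best = max(teams, key=fitness)
    return sum(best) / len(best)   # mean agreeableness of the evolved best team

for noise in (0.0, 2.0):
    print(f"task noise {noise}: evolved mean agreeableness = {evolve(noise):.2f}")
```

In this toy, averaging only pays off when signals are noisy, so evolved agreeableness is high under noise and drifts near chance without it, echoing the moderation effect the abstract reports.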