As AI systems become more powerful and pervasive, there is growing debate about how to keep their behavior aligned with the broader goals and needs of humanity. This multidisciplinary, multi-stakeholder debate must resolve many issues; here we examine three of them. The first issue is that clarifying what stakeholders want could benefit the designers of AI systems, because the technology to realize those desiderata exists; we make this technical topic more accessible by using the framework of cognitive architectures. The second issue is moving beyond an analytical framework that treats useful intelligence as reward maximization. To support this move, we define several AI cognitive architectures that combine reward maximization with other technical elements designed to improve alignment. The third issue is how stakeholders should calibrate their interactions with modern machine learning researchers. We consider how fashions in machine learning create a narrative pull that participants in technical and policy discussions should be aware of, so that they can compensate for it. We identify several technically viable but currently unfashionable options for improving AI alignment.
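The abstract does not spell out the architectures it defines; purely as a hedged illustration of the general pattern it describes, combining a reward-maximizing component with a separate alignment-oriented element, the Python sketch below lets a hypothetical constraint check veto the highest-reward action. All names (Candidate, is_permissible, the actions and rewards) are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (not the paper's architecture): a reward-maximizing
# policy combined with a separate module that can veto proposed actions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    action: str
    expected_reward: float

def reward_maximizer(candidates: List[Candidate]) -> List[Candidate]:
    """Rank candidate actions purely by expected reward (highest first)."""
    return sorted(candidates, key=lambda c: c.expected_reward, reverse=True)

def choose_action(candidates: List[Candidate],
                  is_permissible: Callable[[str], bool]) -> str:
    """Pick the highest-reward action that passes an external check.

    The check stands in for any non-reward element of the architecture
    (rules, learned norms, human oversight); falls back to a no-op if
    every candidate is vetoed.
    """
    for cand in reward_maximizer(candidates):
        if is_permissible(cand.action):
            return cand.action
    return "no-op"

# Toy usage: the highest-reward action is vetoed, so the agent
# settles for the best permissible alternative.
candidates = [Candidate("shortcut", 10.0), Candidate("safe_route", 7.0)]
print(choose_action(candidates, is_permissible=lambda a: a != "shortcut"))
# -> "safe_route"
```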
We are currently unable to specify human goals and societal values in a way that reliably directs AI behavior. Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives. "Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI. Similar to how parties to a legal contract cannot foresee every potential contingency of their future relationship, and legislators cannot predict all the circumstances under which their proposed bills will be applied, we cannot ex ante specify rules that provably direct good AI behavior. Legal theory and practice have developed arrays of tools to address these specification problems. For instance, legal standards allow humans to develop shared understandings and adapt them to novel situations. In contrast to more prosaic uses of the law (e.g., as a deterrent of bad behavior through the threat of sanction), leveraged as an expression of how humans communicate their goals, and what society values, Law Informs Code. We describe how data generated by legal processes (methods of law-making, statutory interpretation, contract drafting, applications of legal standards, legal reasoning, etc.) can facilitate the robust specification of inherently vague human goals. This increases human-AI alignment and the local usefulness of AI. Toward society-AI alignment, we present a framework for understanding law as the applied philosophy of multi-agent alignment. Although law is partly a reflection of historically contingent political power - and thus not a perfect aggregation of citizen preferences - if properly parsed, its distillation offers the most legitimate computational comprehension of societal values available. If law eventually informs powerful AI, engaging in the deliberative political process to improve law takes on even more meaning.
If future AI systems are to be reliably safe in novel situations, they will need to incorporate general principles that guide them to robustly recognize which outcomes and behaviors would be harmful. Such principles may need to be supported by a binding regulatory regime, which in turn requires widely accepted foundational principles. They should also be specific enough for technical implementation. Drawing inspiration from law, this paper explains how negative human rights could fulfill the role of such principles and serve as a foundation both for an international regulatory regime and for building technical safety constraints into future AI systems.
In this meta-analysis, we explore three distinct perspectives on the design and implementation of ethical artificial intelligence (AI): a philosophical-ethics perspective, a technical perspective, and a framing through a political lens. Our qualitative study includes a literature review that cross-references these perspectives by discussing the value and drawbacks of previously published top-down, bottom-up, and hybrid approaches. The novel contribution of this framework is the political perspective, which frames ethics in AI either as decided by companies and governments and imposed on people through policy or law (from the top), or as morality demanded by the people (emerging from the bottom), alongside the top-down, bottom-up, and hybrid technical approaches, i.e., the ways in which AI is developed with ethics built into it and with its users in mind, together with the intended and unintended consequences and long-term impacts on the world. Reinforcement learning is highlighted as a bottom-up applied technical approach and AI ethics principles as a practical top-down approach. The survey includes real-world case studies that ground the philosophical debate on the ethics of AI in historical facts, the current world context, and the realities that follow from them, as well as thought experiments about the theoretical future of AI.
Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind that AI may be integral to improving humanity's long-term potential, there is some concern that building ever smarter and more powerful AI systems could eventually result in systems that are more powerful than we are; some say this is like playing with fire and speculate that it could create existential risks (x-risks). To add precision to these discussions, we review a collection of time-tested concepts from hazard analysis and systems safety that are designed to steer large processes in safer directions. We then discuss how AI researchers can realistically have long-term impacts on the safety of AI systems. Finally, we discuss how to robustly shape the processes that will affect the balance between safety and general capabilities.
Advocates of algorithmic techniques like data mining argue that these techniques eliminate human biases from the decision-making process. But an algorithm is only as good as the data it works with. Data is frequently imperfect in ways that allow these algorithms to inherit the prejudices of prior decision makers. In other cases, data may simply reflect the widespread biases that persist in society at large. In still others, data mining can discover surprisingly useful regularities that are really just preexisting patterns of exclusion and inequality. Unthinking reliance on data mining can deny historically disadvantaged and vulnerable groups full participation in society. Worse still, because the resulting discrimination is almost always an unintentional emergent property of the algorithm's use rather than a conscious choice by its programmers, it can be unusually hard to identify the source of the problem or to explain it to a court. This Essay examines these concerns through the lens of American antidiscrimination law-more particularly, through Title VII's prohibition of discrimination in employment.
The optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. They constitute the final, and much-anticipated, cornerstone of AI regulation in the EU. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels effect in AI regulation, with significant consequences for the US and other countries. This paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).
Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. This report argues that, without substantial action to prevent it, AGIs could use their intelligence to pursue goals that are undesirable from a human perspective, with potentially catastrophic consequences. The report aims to cover the key arguments motivating concern about the alignment problem in a way that is as succinct, concrete, and technically grounded as possible. I argue that realistic training processes plausibly lead to misaligned goals in AGIs, in particular because neural networks trained via reinforcement learning will learn to plan towards a range of goals, gain more reward by deceptively pursuing misaligned goals, and generalize in ways that undermine obedience. As in an earlier report by Cotra (2022), I explain my claims with reference to an illustrative AGI training process, and then outline possible research directions for addressing different facets of the problem.
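The report's argument is conceptual rather than computational; purely as a hedged toy illustration of one mechanism it highlights, a learner optimizing a measurable proxy reward that diverges from the intended objective, consider the sketch below. The actions, rewards, and numbers are invented and do not come from the report.

```python
# Toy illustration (not from the report): an agent that maximizes a proxy
# reward ends up preferring an action the designer did not intend.
import random

# Hypothetical actions with a measurable proxy reward and an
# unmeasured "true" value that the designer actually cares about.
ACTIONS = {
    "clickbait":   {"proxy_reward": 0.9, "true_value": 0.2},
    "useful_info": {"proxy_reward": 0.6, "true_value": 0.9},
}

def train_greedy_policy(episodes: int = 1000) -> str:
    """Estimate the proxy reward of each action from noisy samples
    and return the action with the highest estimate."""
    estimates = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        action = random.choice(list(ACTIONS))
        reward = ACTIONS[action]["proxy_reward"] + random.gauss(0, 0.1)
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return max(estimates, key=estimates.get)

chosen = train_greedy_policy()
print("policy prefers:", chosen)                                     # almost always "clickbait"
print("true value of that choice:", ACTIONS[chosen]["true_value"])   # low
```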
We argue that the attempt to build morality into machines is subject to what we call the Interpretation problem, whereby any rule we give the machine is open to infinite interpretation in ways that we might morally disapprove of, and that the interpretation problem in Artificial Intelligence is an illustration of Wittgenstein's general claim that no rule can contain the criteria for its own application. Using games as an example, we attempt to define the structure of normative spaces and argue that any rule-following within a normative space is guided by values that are external to that space and which cannot themselves be represented as rules. In light of this problem, we analyse the types of mistakes an artificial moral agent could make and we make suggestions about how to build morality into machines by getting them to interpret the rules we give in accordance with these external values, through explicit moral reasoning and the presence of structured values, the adjustment of causal power assigned to the agent and interaction with human agents, such that the machine develops a virtuous character and the impact of the interpretation problem is minimised.
Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
Machine learning (ML) systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, the safety of ML should be a leading research priority. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, we provide a new roadmap for ML safety and refine the technical problems that the field needs to address. We present four problems for research: withstanding hazards ("robustness"), identifying hazards ("monitoring"), steering ML systems ("alignment"), and reducing deployment hazards ("external safety"). Throughout, we clarify each problem's motivation and provide concrete research directions.
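The roadmap stays at the level of problem areas; as one hedged, concrete example of the "monitoring" problem, the sketch below flags inputs whose maximum softmax probability falls below a threshold as possibly out-of-distribution, a common baseline in the ML safety literature rather than anything specified in the abstract. The logits and threshold are placeholders.

```python
# Hedged sketch of one "monitoring" baseline: flag low-confidence inputs
# as possibly out-of-distribution using the maximum softmax probability.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_ood(logits: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Return a boolean mask: True where the classifier's top-class
    probability is below the threshold (treated as a possible anomaly)."""
    confidence = softmax(logits).max(axis=-1)
    return confidence < threshold

# Placeholder logits for three inputs; the last one is low-confidence.
logits = np.array([[4.0, 0.5, 0.1],
                   [0.2, 3.5, 0.3],
                   [0.8, 0.9, 1.0]])
print(flag_ood(logits))  # -> [False False  True]
```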
According to a recent Rackspace Technology survey of 1,870 companies, a total of 34% of AI research and development projects fail or are abandoned. Based on a thorough literature review, we propose a new strategic framework, Aistrom, that enables managers to create a successful AI strategy. It provides a unique and integrated approach that guides managers and lead developers through the various challenges of the implementation process. In the Aistrom framework, we first identify the top n potential projects (usually 3 to 5). For each of these, seven focus areas are analyzed thoroughly. These areas include creating a data strategy that takes into account unique cross-departmental machine learning data requirements, security, and legal requirements. Aistrom then guides managers to think about how to compose an interdisciplinary artificial intelligence (AI) implementation team given the scarcity of AI talent. Once an AI team strategy has been established, the team needs to be positioned within the organization, either across departments or as a separate department. Other considerations include AI-as-a-Service (AIaaS) and outsourced development. Looking at new technologies, we must consider challenges such as bias and the legality of black-box models, and keep humans in the loop. Next, as with any project, we need value-based key performance indicators (KPIs) to track and validate progress. Depending on the company's risk strategy, a SWOT analysis (strengths, weaknesses, opportunities, and threats) can help to further classify the candidate projects. Finally, we should ensure that the strategy includes continuous education of employees to enable a culture of adoption. This unique and integrated framework provides a valuable tool for managers and lead developers.
Artificial intelligence is not only increasingly used in business and administration contexts, but a race for its regulation is also underway, with the EU spearheading the efforts. Contrary to existing literature, this article suggests, however, that the most far-reaching and effective EU rules for AI applications in the digital economy will not be contained in the proposed AI Act - but have just been enacted in the Digital Markets Act. We analyze the impact of the DMA and related EU acts on AI models and their underlying data across four key areas: disclosure requirements; the regulation of AI training data; access rules; and the regime for fair rankings. The paper demonstrates that fairness, in the sense of the DMA, goes beyond traditionally protected categories of non-discrimination law on which scholarship at the intersection of AI and law has so far largely focused. Rather, we draw on competition law and the FRAND criteria known from intellectual property law to interpret and refine the DMA provisions on fair rankings. Moreover, we show how, based on CJEU jurisprudence, a coherent interpretation of the concept of non-discrimination in both traditional non-discrimination and competition law may be found. The final part sketches specific proposals for a comprehensive framework of transparency, access, and fairness under the DMA and beyond.
The value alignment problem for artificial intelligence (AI) asks how we can ensure that the "values" (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication (natural language) is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programs that attempt to ensure value alignment for AI systems, or, more cautiously, to design robustly beneficial or ethical artificial agents.
As artificial intelligence (AI) systems become increasingly involved in decisions that affect our lives, ensuring that automated decision-making is fair and ethical has become a top priority. Intuitively, we feel that, like human decisions, the judgments of artificial agents should necessarily be grounded in some moral principles. Yet a decision-maker (whether human or artificial) can only make truly ethical (according to any ethical theory) and fair (according to any notion of fairness) decisions if full information about all factors relevant to the decision is available at decision time. This raises two problems: (1) in settings where we rely on AI systems that use classifiers obtained via supervised learning, some induction/generalization is inevitable, and some relevant attributes may not be available even during learning; (2) modeling these decisions as games reveals that any pure strategy, however ethical, is inevitably prone to exploitation. Moreover, in many games a Nash equilibrium, i.e., the mathematically optimal outcome, can only be achieved by using mixed strategies, so decisions must be randomized. In this paper, we argue that in supervised learning settings there exist randomized classifiers that perform at least as well as deterministic ones, and that they may therefore be the optimal choice in many circumstances. We support our theoretical results with an empirical study indicating positive societal attitudes towards randomized artificial decision-makers, and we discuss some policy and implementation issues related to the use of randomized classifiers in connection with current AI policy and standardization initiatives.
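The paper's formal results are not reproduced here; the object it studies, a randomized classifier, can nevertheless be sketched simply: rather than always returning the most probable class, the decision is sampled from the model's predicted class distribution. The scikit-learn model and synthetic data below are placeholder assumptions used only to make the idea concrete.

```python
# Minimal sketch of a randomized classifier: sample the decision from the
# predicted class distribution instead of always taking the argmax.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder training data: 200 points, 2 features, binary label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def deterministic_decision(x: np.ndarray) -> int:
    """Standard rule: return the most probable class."""
    return int(model.predict(x.reshape(1, -1))[0])

def randomized_decision(x: np.ndarray) -> int:
    """Randomized rule: draw the class from the predicted distribution,
    so the same input does not always receive the same decision."""
    probs = model.predict_proba(x.reshape(1, -1))[0]
    return int(rng.choice(len(probs), p=probs))

x_new = np.array([0.1, -0.2])
print(deterministic_decision(x_new))
print([randomized_decision(x_new) for _ in range(5)])  # may vary across draws
```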
A new generation of increasingly autonomous and self-learning systems, which we call embodied systems, is about to be developed. In deploying these systems into real-world contexts we face various engineering challenges, as it is crucial to coordinate the behavior of embodied systems in a beneficial manner, to ensure their compatibility with our human-centered societal values, and to design verifiably safe and reliable human-machine interaction. We argue that systems engineering will evolve from embedded to embodied systems and will need to ensure the trustworthiness of dynamic federations of situationally aware, intent-driven, explorative, ever-evolving, largely unpredictable, and increasingly autonomous embodied systems operating in uncertain, complex, and unpredictable real-world contexts. We also identify a number of urgent systems challenges for trustworthy embodied systems, including robust and human-centric AI, cognitive architectures, uncertainty quantification, trustworthy self-integration, and continual analysis and assurance.
How emerging and pervasive technologies such as AI should be integrated into the structures and operations of our societies is a question of contemporary politics, science, and public debate. It has produced a substantial body of international academic literature from different disciplines. This article analyzes the academic debate on the regulation of artificial intelligence (AI). The systematic review comprises a sample of 73 peer-reviewed journal articles published between January 1, 2016 and December 31, 2020. The analysis concentrates on societal risks and harms, questions of regulatory responsibility, and possible policy frameworks, including risk-based and principle-based approaches. The main interest is in the proposed regulatory approaches and instruments. Various forms of intervention are proposed, such as bans, approvals, standard-setting, and disclosure. The assessment of the included papers indicates the complexity of the field, its prematurity, and a remaining lack of clarity. By providing a structured analysis of the academic debate, we contribute both empirically and conceptually to a better understanding of the nexus between AI and regulation and of the underlying normative decisions. A comparison of the scholarly proposals with the proposed European AI regulation illustrates the specific approach of that regulation, its strengths, and its weaknesses.
In response to growing recognition of the social, legal, and ethical impacts of new AI-based technologies, major AI and ML conferences and journals now encourage or require submitted papers to include ethics impact statements and to undergo ethics reviews. This move has sparked heated debate about the role of ethics in AI and data science research, at times devolving into counterproductive name-calling and threats of "cancellation." We argue that greater focus on the ethics education of data scientists may help bridge the ideological divide separating the data science community. We diagnose this deep ideological conflict as one between atomists and holists. Among other things, atomists espouse the idea that facts are, and should be kept, separate from values, while holists believe that facts and values are, and should be, inseparable from each other. With the goal of encouraging interdisciplinarity and reducing disciplinary polarization, we draw on a variety of historical sources ranging from philosophy and law to social theory and humanistic psychology to describe each ideology's beliefs and assumptions. Finally, we call on atomists and holists within the data science community to show greater empathy during ethical disagreements, and we propose four targeted strategies to ensure that data science research benefits society.
Despite the consensus across various publicly available sets of AI ethics principles, a gap remains in their ready adoption for designing and developing responsible AI systems. We examine the practices and experiences of researchers and engineers from Australia's national scientific research agency (CSIRO) who are involved in designing and developing AI systems for a range of purposes. Semi-structured interviews were used to examine how the participants' practices relate to and align with a set of high-level AI ethics principles proposed by the Australian Government. The principles comprise: privacy protection and security, reliability and safety, transparency and explainability, fairness, contestability, accountability, human-centred values, and human, societal and environmental wellbeing. The insights of the researchers and engineers, as well as the challenges they face in the practical application of the principles, are examined. Finally, a set of organizational responses is provided to support the implementation of high-level AI ethics principles.
Machine learning (ML) techniques are increasingly prevalent in education, from predicting student dropout to assisting in university admissions and facilitating the rise of MOOCs. Given the rapid growth of these novel uses, there is a pressing need to investigate how ML techniques support long-standing education principles and goals. In this work, we shed light on this complex landscape by drawing on qualitative insights from interviews with education experts. These interviews comprise in-depth evaluations of ML for education (ML4ED) papers published at prominent applied ML conferences over the past decade. Our central research goal is to critically examine how the stated or implied education and societal objectives of these papers align with the ML problems they tackle; that is, how well the technical problem formulation, objectives, approach, and interpretation of results align with the education problem at hand. We find that a cross-disciplinary gap exists and is particularly prominent in two parts of the ML life cycle: the formulation of an ML problem from education goals and the translation of predictions into interventions. We use these insights to propose an extended ML life cycle, which may also apply to the use of ML in other domains. Our work joins a growing body of meta-analytical research across education and ML, as well as critical analyses of the societal impact of ML. Specifically, it fills a gap between the prevailing technical understanding of machine learning and the perspectives of education researchers who work with students and in collaboration with policymakers.