We investigate the effect of including domain knowledge about a robotic system's causal relations when generating explanations. To this end, we compare two methods from explainable artificial intelligence, the popular KernelSHAP and the more recent Causal SHAP, on a deep neural network trained with deep reinforcement learning on the task of controlling a lever using a robotic manipulator. A primary disadvantage of KernelSHAP is that its explanations represent only a feature's direct effect on the model's output, ignoring the indirect effects a feature can have on the output by influencing other features. Causal SHAP uses a partial causal ordering to alter KernelSHAP's sampling procedure so that these indirect effects are included. This partial causal ordering defines the causal relations between the features, and we specify it using domain knowledge about the lever-control task. We show that an explanation method that accounts for indirect effects and incorporates some domain knowledge can produce explanations that agree better with human intuition. This is particularly beneficial for real-world robotics tasks, where considerable causal relations between features are often at play and, moreover, the required domain knowledge is usually readily available.
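For concreteness, a minimal sketch of how the standard KernelSHAP side of this comparison might be set up in Python is shown below; the names `policy_net`, `train_states`, and `test_states` are illustrative assumptions rather than artifacts of the paper, and the Causal SHAP variant is indicated only in the comments.

```python
# Hedged sketch: KernelSHAP on a trained policy network. All variable names
# (policy_net, train_states, test_states) are assumed placeholders.
import numpy as np
import shap

def action_fn(states: np.ndarray) -> np.ndarray:
    # Wrap the policy so SHAP sees a plain function: state features -> action.
    return policy_net.predict(states)  # hypothetical predict() of the trained DRL policy

background = train_states[np.random.choice(len(train_states), 100, replace=False)]
explainer = shap.KernelExplainer(action_fn, background)   # standard KernelSHAP sampling
shap_values = explainer.shap_values(test_states[:10])

# Causal SHAP would replace KernelSHAP's marginal sampling with sampling that
# respects a partial causal ordering over the state features (specified from
# domain knowledge of the lever task), so indirect effects enter the attributions.
```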
The use of deep reinforcement learning (DRL) schemes has increased dramatically since they were first introduced in 2015. Despite being used in many different applications, DRL models still suffer from a lack of explainability. This lack of explainability hampers the adoption of DRL solutions by both researchers and the general public. To address this problem, the field of explainable artificial intelligence (XAI) has emerged. It comprises a variety of different methods that aim to open up the DRL black box, ranging from interpretable symbolic decision trees to numerical methods such as Shapley values. This review examines which methods are being used and for which applications, in order to determine which models are best suited to each application and whether any methods are currently underutilized.
In recent years, explainable artificial intelligence (XAI) research has gained prominence in response to the user community's demand for greater transparency of, and trust in, AI. This is especially important as AI is adopted in sensitive domains such as finance and medicine, where the societal, ethical, and safety implications are substantial. A thorough systematic review shows that work on XAI has focused primarily on machine learning (ML) for classification, decision-making, or action. To the best of our knowledge, no method has been reported that provides explainable reinforcement learning (XRL) for trading financial stocks. In this paper, we propose applying SHapley Additive exPlanations (SHAP) to a popular deep reinforcement learning architecture, the deep Q-network (DQN), to explain an agent's actions in financial stock trading at a given instance. To demonstrate the effectiveness of our approach, we test it on two popular datasets, namely SENSEX and DJIA, and report the results.
Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
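As a compact reference for the framework summarized above, the additive explanation model and the resulting SHAP values can be written as follows (standard notation; $F$ is the full feature set and $f_S$ denotes the model's expected output given the features in $S$):

```latex
% Additive feature attribution: the explanation model g acts on simplified inputs z'.
g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i, \qquad z' \in \{0,1\}^M
% The unique attribution satisfying the paper's desirable properties is the Shapley value:
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
         \frac{|S|!\,\bigl(|F| - |S| - 1\bigr)!}{|F|!}
         \left[ f_{S \cup \{i\}}\bigl(x_{S \cup \{i\}}\bigr) - f_S(x_S) \right]
```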
The danger of adversarial attacks against unprotected unmanned aerial vehicle (UAV) agents operating in public is growing. Adopting AI-based techniques, and more specifically deep learning (DL) methods, to control and guide these UAVs can be beneficial in terms of performance, but it raises additional concerns about the security of these techniques and their vulnerability to adversarial attacks, increasing the chance of collisions as the agent becomes confused. This paper proposes an innovative approach, based on the explainability of DL methods, to build an effective detector that protects these DL schemes, and the UAVs adopting them, from potential attacks. The agent adopts a deep reinforcement learning (DRL) scheme for guidance and planning. It is formed and trained with a deep deterministic policy gradient (DDPG) scheme with prioritized experience replay (PER), which utilizes artificial potential fields (APF) to improve training time and obstacle-avoidance performance. The adversarial attacks are generated by the fast gradient sign method (FGSM) and basic iterative method (BIM) algorithms and reduce the obstacle-course completion rate from 80% to 35%. A realistic synthetic environment for UAV DRL-based planning and guidance, including obstacles and adversarial attacks, is built. Two adversarial-attack detectors are proposed. The first adopts a convolutional neural network (CNN) architecture and achieves a detection accuracy of 80%. The second detector is developed based on a long short-term memory (LSTM) network and reaches an accuracy of 91% with a faster computation time than the CNN-based detector.
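A minimal FGSM sketch against an actor network is given below to illustrate the kind of observation perturbation described here; it is a generic PyTorch formulation with assumed names (`actor`, `obs`), not the paper's exact attack setup, and BIM would simply iterate the same step with a smaller step size and clipping.

```python
# Hedged sketch of an FGSM observation attack on a continuous-control actor.
import torch
import torch.nn.functional as F

def fgsm_observation_attack(actor: torch.nn.Module, obs: torch.Tensor, epsilon: float = 0.05):
    """Perturb an observation so the actor's output drifts from its clean action."""
    clean_action = actor(obs).detach()                 # action taken on the unperturbed input
    obs_adv = obs.clone().detach().requires_grad_(True)
    # Loss: distance between perturbed and clean action; increasing it confuses the agent.
    loss = F.mse_loss(actor(obs_adv), clean_action)
    loss.backward()
    # One FGSM step in the gradient-sign direction; BIM repeats this iteratively.
    return (obs_adv + epsilon * obs_adv.grad.sign()).detach()
```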
A long-standing challenge with deep learning algorithms is unpacking and understanding how they make their decisions. Explainable artificial intelligence (XAI) offers methods that explain an algorithm's internal workings and the reasons behind its decisions in ways that are interpretable and understandable to human users. Numerous XAI approaches have been developed to date, and a comparative analysis of these strategies is needed to discern their relevance to clinical prediction models. To this end, we first implemented two prediction models for traumatic brain injury (TBI) using structured tabular data and time-series physiological data, respectively. Six different interpretation techniques were used to describe both prediction models at the local and global levels. We then performed a critical analysis of the merits and drawbacks of each strategy, highlighting the implications for researchers interested in applying these methods. The implemented methods were compared with one another against several XAI characteristics such as understandability, fidelity, and stability. Our findings show that SHAP is the most stable and has the highest fidelity but lacks understandability. Anchors, on the other hand, is the most understandable method but is applicable only to tabular data and not to time-series data.
Machine and deep learning survival models demonstrate similar or even improved time-to-event prediction capabilities compared to classical statistical learning methods, yet they are too complex to be interpreted by humans. Several model-agnostic explanation methods exist to overcome this issue, but none of them directly explains the survival function prediction. In this paper, we introduce SurvSHAP(t), the first time-dependent explanation that allows for interpreting survival black-box models. It is based on SHapley Additive exPlanations, which have a solid theoretical foundation and are widely adopted among machine learning practitioners. The proposed method aims to enhance precision diagnostics and support domain experts in making decisions. Experiments on synthetic and medical data confirm that SurvSHAP(t) can detect variables with a time-dependent effect, and that its aggregation is a determinant of the importance of variables for a prediction that is superior to SurvLIME. SurvSHAP(t) is model-agnostic and can be applied to all models with a functional output. We provide an accessible implementation of time-dependent explanations in Python at http://github.com/mi2datalab/survshap.
A quantitative assessment of the global importance of an agent in a team is as valuable as gold for strategists, decision-makers, and sports coaches. Yet, retrieving this information is not trivial, since in a cooperative task it is hard to isolate the performance of an individual from that of the whole team. Moreover, the relationship between an agent's role and its personal attributes is not always clear. In this work we conceive an application of the Shapley analysis for studying the contribution of both agent policies and attributes, putting them on equal footing. Since the computational complexity is NP-hard and scales exponentially with the number of participants in a transferable utility coalitional game, we resort to exploiting a priori knowledge about the rules of the game to constrain the relations between the participants over a graph. We hence propose a method to determine a Hierarchical Knowledge Graph of agents' policies and features in a Multi-Agent System. Assuming a simulator of the system is available, the graph structure allows us to exploit dynamic programming to assess the importances much faster. We test the proposed approach in a proof-of-concept environment deploying both hardcoded policies and policies obtained via Deep Reinforcement Learning. The proposed paradigm is less computationally demanding than naively computing the Shapley values and provides great insight not only into the importance of an agent in a team but also into the attributes needed to deploy the policy at its best.
Many reinforcement learning (RL) environments consist of independent entities that interact sparsely. In such environments, RL agents have only limited influence over other entities in any particular situation. Our idea in this work is that learning can be guided efficiently by knowing when and what the agent can influence with its actions. To achieve this, we introduce a measure of situation-dependent causal influence based on conditional mutual information and show that it can reliably detect states of influence. We then propose several ways to integrate this measure into RL algorithms to improve exploration and off-policy learning. All modified algorithms show strong increases in data efficiency on robotic manipulation tasks.
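One plausible formalization of the measure described above, under the assumption that the influence of the action $A$ on another entity's next state $S'_j$ is evaluated pointwise in the current state $s$, is the conditional mutual information:

```latex
C^{j}(s) \;=\; I\bigl(S'_j ; A \mid S = s\bigr)
        \;=\; \mathbb{E}_{a \sim \pi(\cdot \mid s)}
              \Bigl[ D_{\mathrm{KL}}\bigl( p(s'_j \mid s, a) \,\big\|\, p(s'_j \mid s) \bigr) \Bigr]
```

This quantity is close to zero when entity $j$'s transition is unaffected by the chosen action and large precisely in the situations where the agent has leverage, which is what the proposed exploration and off-policy modifications exploit.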
Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent agents interact with the environment to fulfill a long-term goal. Driven by the resurgence of deep learning, Deep RL (DRL) has witnessed great success over a wide spectrum of complex control tasks. Despite the encouraging results achieved, the deep neural network-based backbone is widely deemed as a black box that impedes practitioners to trust and employ trained agents in realistic scenarios where high security and reliability are essential. To alleviate this issue, a large volume of literature devoted to shedding light on the inner workings of the intelligent agents has been proposed, by constructing intrinsic interpretability or post-hoc explainability. In this survey, we provide a comprehensive review of existing works on eXplainable RL (XRL) and introduce a new taxonomy where prior works are clearly categorized into model-explaining, reward-explaining, state-explaining, and task-explaining methods. We also review and highlight RL methods that conversely leverage human knowledge to promote learning efficiency and performance of agents while this kind of method is often ignored in XRL field. Some challenges and opportunities in XRL are discussed. This survey intends to provide a high-level summarization of XRL and to motivate future research on more effective XRL solutions. Corresponding open source codes are collected and categorized at https://github.com/Plankson/awesome-explainable-reinforcement-learning.
Reinforcement learning is a promising approach to robotic grasping because it can learn effective reaching and grasping policies in difficult scenarios. However, achieving human-like manipulation capabilities with dexterous robotic hands is challenging due to the high dimensionality of the problem. Although remedies such as reward shaping or expert demonstrations can be employed to overcome this issue, they often lead to oversimplified and biased policies. We present Dext-Gen, a reinforcement learning framework for dexterous grasping in sparse-reward environments that is applicable to a variety of grippers and learns unbiased and intricate policies. Full orientation control of the gripper and object is achieved through a smooth orientation representation. Our approach has reasonable training times and offers the option to include desired prior knowledge. Simulation experiments demonstrate the effectiveness and adaptability of the framework to different scenarios.
Feature attributions are a common paradigm for model explanations due to their simplicity in assigning a single numeric score to each input feature of a model. In the setting of actionable recourse, where the goal of an explanation is to improve outcomes for model consumers, it is often unclear how feature attributions should properly be used. With this work, we aim to strengthen and clarify the link between actionable recourse and feature attributions. Concretely, we propose a variant of SHAP, CoSHAP, that uses counterfactual-generation techniques to produce a background dataset for use within the marginal (a.k.a. interventional) Shapley value framework. We motivate the need, within the actionable recourse setting, for careful consideration of how Shapley values are used for feature attribution, alongside a requirement of monotonicity, with numerous synthetic examples. Furthermore, we demonstrate the efficacy of CoSHAP by proposing and justifying a quantitative score for feature attributions, counterfactual-ability, and show that, as measured by this metric, CoSHAP outperforms existing methods when evaluated on public datasets using monotone tree ensembles.
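A hedged sketch of the core idea, swapping the usual training-data background for a counterfactual-generated one inside the marginal Shapley framework, is shown below; `generate_counterfactuals` is a hypothetical placeholder for any counterfactual generator and is not an API from the paper.

```python
# Hedged CoSHAP-style sketch: counterfactuals of the explained instance serve as the
# background dataset for marginal (interventional) Shapley values.
import shap

x = X_test.iloc[[0]]                                        # instance to explain (assumed to exist)
cf_background = generate_counterfactuals(model, x, n=50)    # hypothetical counterfactual generator

# KernelExplainer with an explicit background computes marginal Shapley values;
# using counterfactuals as that background is the CoSHAP variant described above.
explainer = shap.KernelExplainer(model.predict, cf_background)
coshap_values = explainer.shap_values(x)
```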
In the context of human-in-the-loop machine learning applications, such as decision support systems, interpretability methods should provide actionable insights without making users wait. In this paper, we propose Accelerated Model-agnostic Explanations (ACME), an interpretability method that quickly provides feature importance scores at both the global and local level. ACME can be applied a posteriori to any regression or classification model. ACME not only computes feature rankings, it also provides a what-if analysis tool to assess how changes in feature values affect the model's prediction. We evaluated the proposed approach on synthetic and real-world datasets, also in comparison with SHapley Additive exPlanations (SHAP), the approach from which we drew inspiration and which is currently one of the state-of-the-art model-agnostic interpretability methods. We obtained comparable results in terms of explanation quality while drastically reducing computation time and providing consistent visualizations for global and local interpretations. To foster research in this field and for the sake of reproducibility, we also provide a repository with the code used in the experiments.
Besides accuracy, recent studies on machine learning models have been addressing the question of how the obtained results can be interpreted. Indeed, while complex machine learning models are able to provide very good results in terms of accuracy even in challenging applications, it is difficult to interpret them. Aiming at providing some interpretability for such models, one of the most famous methods, called SHAP, borrows the Shapley value concept from game theory in order to locally explain the predicted outcome of an instance of interest. As the SHAP values calculation requires previous computations on all possible coalitions of attributes, its computational cost can be very high. Therefore, a SHAP-based method called Kernel SHAP adopts an efficient strategy that approximates such values with less computational effort. In this paper, we also address local interpretability in machine learning based on Shapley values. Firstly, we provide a straightforward formulation of a SHAP-based method for local interpretability by using the Choquet integral, which leads to both Shapley values and Shapley interaction indices. Moreover, we also adopt the concept of $k$-additive games from game theory, which helps reduce the computational effort when estimating the SHAP values. The obtained results attest that our proposal requires fewer computations on coalitions of attributes to approximate the SHAP values.
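For reference, the quantities involved are the classical Shapley value over the attribute set $N$ and, for the approximation discussed above, the $k$-additivity restriction on the underlying game (a standard definition via the Möbius transform; the paper's exact estimator is not reproduced here):

```latex
% Shapley value of attribute i in a game v on the attribute set N:
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}}
            \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}
            \bigl[\, v(S \cup \{i\}) - v(S) \,\bigr]
% k-additivity: the Mobius transform m^v of v vanishes on coalitions larger than k,
%   m^v(S) = 0 \quad \text{for all } S \subseteq N \text{ with } |S| > k,
% so far fewer coefficients than the full 2^{|N|} need to be estimated.
```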
Even if it is effective, the use of a model must be accompanied by an understanding of the data transformation at every level (upstream and downstream). There is therefore a growing need to define the relationship between individual data and the choices an algorithm can make based on its analysis (for example, recommending one product or one promotional offer, or setting an insurance rate that represents a risk). Model users must ensure that models do not discriminate and that it is also possible to explain their results. This paper introduces the importance of model interpretation and addresses the notion of model transparency. Within an insurance context, it specifically illustrates how some tools can be used to enforce control over the actuarial models that can nowadays leverage machine learning. On a simple example of loss frequency estimation in car insurance, we show the interest of some interpretability methods for adapting explanations to the target audience.
Adequately assigning credit to actions for future outcomes based on their contributions is a long-standing open challenge in Reinforcement Learning. The assumptions of the most commonly used credit assignment method are disadvantageous in tasks where the effects of decisions are not immediately evident. Furthermore, this method can only evaluate actions that have been selected by the agent, making it highly inefficient. Still, no alternative methods have been widely adopted in the field. Hindsight Credit Assignment is a promising, but still unexplored candidate, which aims to solve the problems of both long-term and counterfactual credit assignment. In this thesis, we empirically investigate Hindsight Credit Assignment to identify its main benefits, and key points to improve. Then, we apply it to factored state representations, and in particular to state representations based on the causal structure of the environment. In this setting, we propose a variant of Hindsight Credit Assignment that effectively exploits a given causal structure. We show that our modification greatly decreases the workload of Hindsight Credit Assignment, making it more efficient and enabling it to outperform the baseline credit assignment method on various tasks. This opens the way to other methods based on given or learned causal structures.
In recent years, advances in deep learning have resulted in a plethora of successes in the use of reinforcement learning (RL) to solve complex sequential decision tasks with high-dimensional inputs. However, existing RL-based systems are essentially competency-unaware: they lack the interpretation mechanisms needed to give human operators an insightful, holistic view of their competence, which presents an impediment to their adoption, particularly in critical applications where the decisions an agent makes can have significant consequences. In this paper, we extend a recently-proposed framework for explainable RL that is based on analyses of "interestingness." Our new framework provides various measures of RL agent competence stemming from interestingness analysis and is applicable to a wide range of RL algorithms. We also propose novel mechanisms for assessing RL agents' competencies that: 1) identify agent behavior patterns and competency-controlling conditions by clustering agent behavior traces solely using interestingness data; and 2) identify the task elements mostly responsible for an agent's behavior, as measured through interestingness, by performing global and local analyses using SHAP values. Overall, our tools provide insights about RL agent competence, both their capabilities and limitations, enabling users to make more informed decisions about interventions, additional training, and other interactions in collaborative human-machine settings.
SHAP is a popular method for measuring variable importance in machine learning models. In this paper, we study the algorithm used to estimate SHAP scores and show that it is a transformation of the functional ANOVA decomposition. We use this connection to show that the challenges in approximating SHAP are mainly related to the choice of the feature distribution and the number of the $2^p$ ANOVA terms that are estimated. We argue that the connection between machine learning explainability and sensitivity analysis is illuminating in this context, but that the immediate practical consequences are not obvious, since the two fields face a different set of constraints. Machine learning explainability deals with models that are inexpensive to evaluate but often have hundreds, if not thousands, of features. Sensitivity analysis typically deals with models from physics or engineering that may be very time-consuming to run, but operate on a comparatively small space of inputs.
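For readers unfamiliar with it, the functional ANOVA expansion referred to above has the following form (with orthogonality constraints under the chosen feature distribution); its $2^p$ components are the terms whose estimation the abstract connects to the cost of SHAP approximation:

```latex
f(x) = f_0 + \sum_{i=1}^{p} f_i(x_i)
           + \sum_{1 \le i < j \le p} f_{ij}(x_i, x_j)
           + \cdots + f_{1,\dots,p}(x_1, \dots, x_p)
```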
Tree-based algorithms such as random forests and gradient-boosted trees remain among the most popular and powerful machine learning models across many disciplines. The conventional wisdom for estimating the impact of a feature in tree-based models is to measure the node-wise reduction of a loss function, which (i) yields only global importance measures and (ii) is known to suffer from severe biases. Conditional feature contributions (CFCs) provide local, case-by-case explanations of a prediction by following the decision path and attributing changes in the model's expected output to each feature along the path. However, Lundberg et al. pointed out a potential bias of CFCs that depends on the distance from the root of a tree. The now very popular alternative, SHapley Additive exPlanations (SHAP) values, appears to mitigate this bias but is computationally much more expensive. Here, we contribute a thorough comparison of the explanations computed by both methods on a set of publicly available classification problems in order to provide data-driven algorithm recommendations to current researchers. For random forests, we find extremely high similarities and correlations of both local and global SHAP values and CFC scores, leading to very similar rankings and interpretations. Analogous conclusions hold for the fidelity of using global feature importance scores as a proxy for the predictive power associated with each feature.
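A hedged sketch of such a comparison is shown below, using the `treeinterpreter` package as one implementation of Saabas-style conditional feature contributions and TreeSHAP from the `shap` package; the dataset, regressor settings, and the use of a regression forest (to keep attribution shapes simple) are illustrative choices, not the paper's experimental setup.

```python
# Hedged sketch: correlate CFC (path-based) attributions with TreeSHAP values
# on the same random forest. Dataset and hyperparameters are illustrative.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from treeinterpreter import treeinterpreter as ti

X, y = load_diabetes(return_X_y=True)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Conditional feature contributions: prediction = bias + sum of per-feature
# contributions accumulated along each tree's decision path.
pred, bias, cfc = ti.predict(rf, X[:100])             # cfc: (n_samples, n_features)

# Exact TreeSHAP values for the same instances.
sv = shap.TreeExplainer(rf).shap_values(X[:100])      # same shape as cfc

corr = np.corrcoef(cfc.ravel(), sv.ravel())[0, 1]
print(f"Local CFC vs. SHAP correlation: {corr:.3f}")
```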
Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both theoretical and computational standpoints. We disentangle this complexity into two factors: (1) the approach to removing feature information, and (2) the tractable estimation strategy. These two factors provide a natural lens through which we can better understand and compare 24 distinct algorithms. Based on the various feature-removal approaches, we describe the multiple types of Shapley value feature attributions and the methods for calculating each one. Then, based on the tractable estimation strategies, we characterize two distinct families of approaches: model-agnostic and model-specific approximations. For model-agnostic approximations, we benchmark a wide class of estimation methods and tie them to alternative yet equivalent characterizations of the Shapley value. For model-specific approximations, we clarify the assumptions crucial to each method's tractability for linear, tree, and deep models. Finally, we identify gaps in the literature and promising future research directions.