Although Deep Neural Networks (DNNs) have great generalization and prediction power, their functioning does not allow a detailed explanation of their behavior. Opaque deep learning models are increasingly being used to make important predictions in critical environments, and the danger is that they make and use predictions that cannot be justified or legitimized. Several eXplainable Artificial Intelligence (XAI) methods that separate explanations from the machine learning model have emerged, but they have drawbacks in faithfulness to the model's actual functioning and in robustness. As a result, there is widespread agreement on the importance of endowing deep learning models with explanatory capabilities, so that they themselves can provide an answer to why a particular prediction was made. First, we address the problem of the lack of universal criteria for XAI by formalizing what an explanation is. We also introduce a set of axioms and definitions to clarify XAI from a mathematical perspective. Finally, we present the Greybox XAI, a framework that composes a DNN and a transparent model thanks to the use of a symbolic Knowledge Base (KB). We extract a KB from the dataset and use it to train a transparent model (i.e., a logistic regression). An encoder-decoder architecture is trained on RGB images to produce an output similar to the KB used by the transparent model. Once the two models are trained independently, they are used compositionally to form an explainable predictive model. We show how this new architecture is accurate and explainable in several datasets.
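To make the compositional idea concrete, here is a minimal sketch of how a perception module and a transparent classifier could be chained: a stub stands in for the trained encoder-decoder, a logistic regression plays the transparent model, and the active KB attributes double as the explanation. All names (ATTRIBUTES, perceive, the toy KB rows) are illustrative assumptions, not the paper's actual code.

```python
# A minimal sketch of the compositional Greybox idea (all names below are
# hypothetical stand-ins, not the paper's implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

ATTRIBUTES = ["has_wheels", "has_wings", "has_windows"]  # hypothetical KB predicates
CLASSES = ["car", "plane"]

def perceive(image):
    # Stand-in for the trained encoder-decoder: maps an image to predicted
    # KB attributes. Here we just fabricate an attribute vector.
    rng = np.random.default_rng(0)
    return (rng.random(len(ATTRIBUTES)) > 0.5).astype(int)

# Transparent model trained on KB attribute vectors extracted from the dataset.
X_kb = np.array([[1, 0, 1], [0, 1, 1], [1, 0, 0], [0, 1, 0]])  # toy KB rows
y_kb = np.array([0, 1, 0, 1])
clf = LogisticRegression().fit(X_kb, y_kb)

# Compositional, explainable prediction: the active attributes ARE the explanation.
attrs = perceive(image=None)
pred = CLASSES[clf.predict([attrs])[0]]
explanation = [a for a, v in zip(ATTRIBUTES, attrs) if v == 1]
print(pred, "because the image shows:", explanation)
```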
The last years have been characterized by the upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although they have outstanding generalization and prediction skills, their functioning does not allow obtaining detailed explanations of their behavior. As opaque machine learning models are increasingly being used to make important predictions in critical environments, the danger is that they create and use decisions that are not justifiable or legitimate. Therefore, there is a broad consensus on the importance of endowing machine learning models with explainability. eXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency, and fairness. This guide is meant to be a go-to handbook for any audience with a computer science background aiming to get intuitive insights into machine learning models, accompanied by straight, fast, and intuitive explanations. This paper aims to fill the lack of a compelling XAI guide by showing how to apply XAI techniques to the reader's own day-to-day models, datasets, and use cases. Figure 1 acts as a flowchart/map for the reader and should help them find the ideal method to use according to their type of data. In each chapter, the reader will find a description of the proposed method, as well as an example of its use on a biomedical application and a Python notebook that can easily be modified in order to be applied to specific applications.
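As a taste of the kind of recipe such a guide walks through, the sketch below applies the real shap library to a toy tabular model; the dataset and model are stand-ins chosen for brevity, not the tutorial's actual notebooks.

```python
# One illustrative XAI recipe: explaining a tabular model with SHAP.
# The dataset and model here are toy stand-ins, not the guide's examples.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model.predict, X)   # model-agnostic explainer
shap_values = explainer(X.iloc[:50])           # per-feature, per-sample attributions
shap.plots.bar(shap_values)                    # global importance summary
```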
In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
The automated synthesis of correct-by-construction Boolean functions from logical specifications is known as the Boolean Functional Synthesis (BFS) problem. BFS has many application areas that range from software engineering to circuit design. In this paper, we introduce a tool BNSynth, that is the first to solve the BFS problem under a given bound on the solution space. Bounding the solution space induces the synthesis of smaller functions that benefit resource constrained areas such as circuit design. BNSynth uses a counter-example guided, neural approach to solve the bounded BFS problem. Initial results show promise in synthesizing smaller solutions; we observe at least 3.2X (and up to 24X) improvement in the reduction of solution size on average, as compared to state-of-the-art tools on our benchmarks. BNSynth is available on GitHub under an open source license.
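The counter-example guided idea can be illustrated without any neural machinery: the toy loop below enumerates a bounded space of candidate functions (all truth tables over two inputs), filters candidates against counterexamples collected so far, and verifies survivors against a made-up relational specification. This is a sketch of the general CEGIS pattern under those assumptions, not BNSynth's actual algorithm.

```python
# A toy counter-example guided (CEGIS-style) loop for bounded Boolean
# functional synthesis. The spec and bound are illustrative assumptions.
from itertools import product

N = 2  # number of inputs
INPUTS = list(product([0, 1], repeat=N))

def spec(x, y):
    # Relational specification: the output must equal the AND of the inputs.
    return y == (x[0] & x[1])

def candidates():
    # Bounded solution space: every truth table over N inputs.
    for table in product([0, 1], repeat=len(INPUTS)):
        yield dict(zip(INPUTS, table))

counterexamples = []
solution = None
for f in candidates():
    # Synthesis step: keep only candidates consistent with past counterexamples.
    if any(not spec(x, f[x]) for x in counterexamples):
        continue
    # Verification step: search for a new counterexample.
    bad = next((x for x in INPUTS if not spec(x, f[x])), None)
    if bad is None:
        solution = f
        break
    counterexamples.append(bad)

print("synthesized truth table:", solution)
```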
In recent years, denoising diffusion models have demonstrated outstanding image generation performance. The information on natural images captured by these models is useful for many image reconstruction applications, where the task is to restore a clean image from its degraded observations. In this work, we propose a conditional sampling scheme that exploits the prior learned by diffusion models while retaining agreement with the observations. We then combine it with a novel approach for adapting pretrained diffusion denoising networks to their input. We examine two adaption strategies: the first uses only the degraded image, while the second, which we advocate, is performed using images that are "nearest neighbors" of the degraded image, retrieved from a diverse dataset using an off-the-shelf visual-language model. To evaluate our method, we test it on two state-of-the-art publicly available diffusion models, Stable Diffusion and Guided Diffusion. We show that our proposed "adaptive diffusion for image reconstruction" (ADIR) approach achieves a significant improvement in the super-resolution, deblurring, and text-based editing tasks.
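A rough sketch of the second adaptation strategy might look as follows: fine-tune the pretrained denoiser on the retrieved neighbor images with a standard noise-prediction objective. The tiny convolution, the toy data, and the simplified noising step are all placeholder assumptions; ADIR's real models are Stable Diffusion and Guided Diffusion.

```python
# A minimal sketch of "adapt the denoiser to retrieved neighbors".
# Everything here is a stand-in for the real diffusion pipeline.
import torch
import torch.nn as nn

denoiser = nn.Conv2d(3, 3, 3, padding=1)   # stand-in for a pretrained UNet
neighbors = torch.randn(8, 3, 64, 64)      # images retrieved as nearest
                                           # neighbors of the degraded input
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

for step in range(100):
    noise = torch.randn_like(neighbors)
    noisy = neighbors + 0.1 * noise        # simplified forward (noising) process
    pred = denoiser(noisy)                 # predict the injected noise
    loss = nn.functional.mse_loss(pred, noise)  # standard eps-prediction objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```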
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
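Since the weights are openly released, BLOOM can be loaded through the Hugging Face transformers library; the snippet below uses the small bigscience/bloom-560m sibling checkpoint, as the full 176B model requires far more memory than a single commodity GPU.

```python
# A hedged usage sketch: loading a small BLOOM checkpoint for local testing.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tok("The capital of France is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=10)
print(tok.decode(out[0], skip_special_tokens=True))
```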
Business documents come in a variety of structures, formats, and information needs, which makes information extraction a challenging task. Due to these variations, having a document-generic model which can work well across all types of documents and for all the use cases seems far-fetched. For document-specific models, we would need customized document-specific labels. We introduce DoSA (Document Specific Automated Annotations), which helps annotators in generating initial annotations automatically using our novel bootstrap approach by leveraging document-generic datasets and models. These initial annotations can further be reviewed by a human for correctness. An initial document-specific model can be trained and its inference can be used as feedback for generating more automated annotations. These automated annotations can be reviewed by a human-in-the-loop for correctness, and a new improved model can be trained using the current model as the pre-trained model before going for the next iteration (see the sketch below). In this paper, our scope is limited to form-like documents due to the limited availability of generic annotated datasets, but this idea can be extended to a variety of other documents as more datasets are built. An open-source ready-to-use implementation is made available on GitHub https://github.com/neeleshkshukla/DoSA.
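Schematically, the bootstrap loop could be written as below; every function is a trivial stand-in for the real training, inference, and human-review steps, and none of the names come from DoSA's actual code.

```python
# A schematic, runnable sketch of a DoSA-style bootstrap loop.
# All functions are hypothetical placeholders, not DoSA's API.
def auto_annotate(model, docs):
    return [{"doc": d, "labels": model(d)} for d in docs]

def human_review(annotations):
    return annotations  # stand-in: a person would correct labels here

def train(docs, annotations, pretrained):
    return pretrained   # stand-in: fine-tune starting from the current model

def bootstrap(docs, generic_model, rounds=3):
    model = generic_model
    annotations = auto_annotate(model, docs)      # initial automated annotations
    for _ in range(rounds):
        annotations = human_review(annotations)   # human-in-the-loop correction
        model = train(docs, annotations, model)   # warm-start from the current model
        annotations = auto_annotate(model, docs)  # feedback for the next round
    return model, annotations

model, anns = bootstrap(["form_a.pdf", "form_b.pdf"], generic_model=lambda d: [])
```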
Diffusion models are a class of generative models that, compared to other generative models, have shown great performance in creating realistic images when trained on natural image datasets. We introduce DISPR, a diffusion-based model for solving the inverse problem of predicting three-dimensional (3D) cell shapes from two-dimensional (2D) single-cell microscopy images. Using the 2D microscopy image as a prior, DISPR is conditioned to predict realistic 3D shape reconstructions. To demonstrate the applicability of DISPR as a data augmentation tool in a feature-based single-cell classification task, we extract morphological features from cells grouped into six highly imbalanced classes. Adding features predicted by DISPR to the three minority classes improved the macro F1 score from $F1_\text{macro} = 55.2 \pm 4.6\%$ to $F1_\text{macro} = 72.2 \pm 4.9\%$. As our method is the first to employ a diffusion-based model in this context, we demonstrate that diffusion models can be applied to inverse problems in 3D and that they learn to reconstruct 3D shapes with realistic morphological features from 2D microscopy images.
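The augmentation-and-evaluation idea can be mimicked on toy data: oversample the minority classes with perturbed copies (where DISPR would supply realistic synthetic features) and score with the same macro F1 metric. Everything below is a fabricated stand-in except the sklearn metric itself.

```python
# A toy sketch of minority-class augmentation scored with macro F1.
# The features, classifier, and "synthetic" samples are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
y = np.repeat(np.arange(6), [120, 90, 45, 21, 15, 9])  # six imbalanced classes
X = rng.normal(size=(len(y), 8))                       # toy morphological features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Pretend-synthetic minority-class samples (DISPR would supply realistic ones).
minority = [c for c in range(6) if (y_tr == c).sum() < 20]
X_syn = np.vstack([X_tr[y_tr == c] + 0.1 * rng.normal(size=(int((y_tr == c).sum()), 8))
                   for c in minority])
y_syn = np.concatenate([np.full(int((y_tr == c).sum()), c) for c in minority])

clf = RandomForestClassifier(random_state=0).fit(
    np.vstack([X_tr, X_syn]), np.concatenate([y_tr, y_syn]))
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```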
Considerable advances in synthetic image generation have made it possible to create facial images of high resolution and photorealism. In biometric applications, the main motivation for using synthetic data is to address the shortage of publicly available biometric data while reducing the privacy risks involved in processing such sensitive information. These advantages are exploited in this work by simulating recent face age modification algorithms to generate mated samples, thereby studying the impact of aging on the performance of open-source biometric recognition systems. In addition, a real dataset is used to evaluate the effects of short-term aging, comparing the biometric performance against the synthetic domain. The main findings show that short-term aging in the range of 1-5 years has only a minor effect on general recognition performance. However, the correct verification of mated faces with long-term age differences beyond 20 years remains a significant challenge and requires further investigation.
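For intuition, mated-pair verification under aging reduces to comparing two embeddings against a threshold, as in the fabricated example below; a real evaluation would embed the image pairs with an open-source face recognition model rather than simulate age drift with noise.

```python
# A fabricated sketch of mated-pair verification: the embeddings, the
# simulated age drift, and the threshold are all made-up assumptions.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
emb_young = rng.normal(size=512)                    # embedding of the younger image
emb_aged = emb_young + 0.3 * rng.normal(size=512)   # simulated age drift

THRESHOLD = 0.5  # operating point, normally chosen on a validation set
print("match" if cosine(emb_young, emb_aged) >= THRESHOLD else "non-match")
```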
This paper presents a summary of the Competition on Face Morphing Attack Detection Based on Privacy-aware Synthetic Training Data (SYN-MAD) held at the 2022 International Joint Conference on Biometrics (IJCB 2022). The competition attracted 12 participating teams from academia and industry, based in 11 different countries. In the end, seven valid submissions were made by the participating teams and evaluated by the organizers. The competition was held to present and attract solutions that deal with detecting face morphing attacks while protecting people's privacy for ethical and legal reasons. To ensure this, the training data was limited to synthetic data provided by the organizers. The submitted solutions presented innovations that led to outperforming the considered baseline in many experimental settings. The evaluation benchmark is now available at: https://github.com/marcohuber/syn-mad-2022.