The particle-flow (PF) algorithm is used in general-purpose particle detectors to reconstruct a comprehensive particle-level view of a collision by combining information from different subdetectors. A graph neural network (GNN) model, known as the machine-learned particle-flow (MLPF) algorithm, has been developed as an alternative to the rule-based PF algorithm. However, understanding the model's decision making is not straightforward, especially given the complexity of the set-to-set prediction task, the dynamic graph building, and the message-passing steps. In this paper, we adapt the layerwise relevance propagation technique for GNNs and apply it to the MLPF algorithm to gauge the relevant nodes and features for its predictions. Through this process, we gain insight into the model's decisions.
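Layerwise relevance propagation redistributes a model's output score backwards through the network, layer by layer, while approximately conserving total relevance. A minimal sketch of the LRP $\epsilon$-rule for a single dense layer (function name and epsilon default are illustrative, not taken from the paper):

```python
import numpy as np

# Minimal sketch of the LRP epsilon-rule for one dense layer z = a @ W + b.
# Relevance R_out assigned to the layer's outputs is redistributed to its
# inputs in proportion to each input's contribution to the pre-activations.
def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    z = a @ W + b                        # forward pre-activations, shape (batch, out)
    s = R_out / (z + eps * np.sign(z))   # stabilized ratio of relevance to activation
    return a * (s @ W.T)                 # relevance of the inputs, shape (batch, in)
```

Summed over a layer, the input relevance approximately equals the output relevance (exactly so for zero bias and vanishing epsilon), which is the conservation property LRP is built on.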
Neural networks are ubiquitous in high energy physics research. However, these highly nonlinear parameterized functions are treated as \textit{black boxes}, whose inner workings to convey information and build the desired input-output relationship are often intractable. Explainable AI (XAI) methods can be useful in determining a neural model's relationship with data toward making it \textit{interpretable}, by establishing a quantitative and tractable relationship between the inputs and the model's output. In this letter, we explore the potential of using XAI methods in the context of problems in high energy physics.
Recent developments in explainable AI (XAI) methods allow researchers to explore the inner workings of deep neural networks (DNNs), revealing crucial information about input-output relationships and how data connect with machine learning models. In this paper, we explore the interpretability of DNN models designed to identify jets coming from top quark decay in high energy proton-proton collisions at the Large Hadron Collider (LHC). We review a subset of existing top tagger models and explore different quantitative methods to identify which features play the most important roles in identifying the top jets. We also investigate how and why feature importance varies across different XAI metrics, how feature correlations impact their explainability, and how latent space representations encode information as well as correlate with physically meaningful quantities. Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretation of these models. We additionally illustrate the activity of hidden layers as Neural Activation Pattern (NAP) diagrams and demonstrate how they can be used to understand how DNNs relay information across the layers, and how this understanding can help to make such models significantly simpler by allowing effective model reoptimization and hyperparameter tuning. By incorporating observations from the interpretability studies, we obtain state-of-the-art top tagging performance from an augmented implementation of existing networks.
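One simple, model-agnostic way to quantify feature importance of the kind compared across XAI metrics above is permutation importance: shuffle one input feature and measure the resulting drop in a performance score. A hedged sketch (the function names and the higher-is-better score convention are our assumptions, not the paper's method):

```python
import numpy as np

# Permutation importance: the score drop when feature j is shuffled,
# averaged over repeats. A larger drop means the model relies on that
# feature more strongly.
def permutation_importance(predict, X, y, score, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    base = score(y, predict(X))
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        vals = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])        # break the feature-target relationship
            vals.append(score(y, predict(Xp)))
        drops[j] = base - np.mean(vals)
    return drops
```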
Recent work has demonstrated that geometric deep learning methods such as graph neural networks (GNNs) are well suited to addressing a variety of reconstruction problems in high energy particle physics. In particular, particle tracking data are naturally represented as a graph by identifying silicon tracker hits as nodes and particle trajectories as edges; given a set of hypothesized edges, edge-classifying GNNs identify those corresponding to real particle trajectories. In this work, we adapt the physics-motivated interaction network (IN) GNN to the problem of particle tracking in pileup conditions similar to those expected at the high-luminosity Large Hadron Collider. Assuming idealized hit filtering at various particle momentum thresholds, we demonstrate the IN's excellent edge-classification accuracy and tracking efficiency through a suite of measurements at each stage of GNN-based tracking: graph construction, edge classification, and track building. The proposed architecture is substantially smaller than previously studied GNN tracking architectures; this is particularly promising, since the reduction in size is critical for enabling GNN-based tracking in constrained computing environments. Furthermore, the IN may be represented either as a set of explicit matrix operations or as a message-passing GNN. Efforts are underway to accelerate each representation via heterogeneous computing resources toward both high-level and low-latency trigger applications.
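The core of an interaction network is a learned edge (relational) function applied to sender-receiver pairs, whose outputs are aggregated at each receiving node before a node update. A minimal NumPy sketch of one such message-passing step (function names and the concatenation layout are illustrative, not the paper's implementation):

```python
import numpy as np

# One interaction-network step: compute edge messages from sender/receiver
# features, sum incoming messages at each node, then update node features.
def interaction_step(x, edge_index, phi_edge, phi_node):
    src, dst = edge_index                               # (E,) sender and receiver ids
    messages = phi_edge(np.concatenate([x[src], x[dst]], axis=1))
    agg = np.zeros((x.shape[0], messages.shape[1]))
    np.add.at(agg, dst, messages)                       # unbuffered sum per receiver
    return phi_node(np.concatenate([x, agg], axis=1))   # updated node features
```

In an edge-classifying variant, each row of `messages` would additionally be scored as a true or false track segment.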
Autoencoders have useful applications in high energy physics in anomaly detection, particularly for jets - collimated showers of particles produced in collisions such as those at the CERN Large Hadron Collider. We explore the use of graph-based autoencoders, which operate on jets in their "particle cloud" representations and can leverage the interdependencies among the particles within a jet, for such tasks. Additionally, we develop a differentiable approximation to the energy mover's distance via a graph neural network, which may subsequently be used as a reconstruction loss function for autoencoders.
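For two equal-size, equal-weight particle clouds, the energy mover's distance reduces to an optimal one-to-one matching problem; the GNN described above learns a differentiable surrogate for this quantity. A brute-force reference implementation for tiny clouds (illustrative only, exponential in the number of particles):

```python
import numpy as np
from itertools import permutations

# Exact EMD between two equal-weight point clouds of the same size:
# minimize the mean ground distance over all one-to-one matchings.
def emd_equal_weights(p1, p2):
    best = float("inf")
    for perm in permutations(range(len(p2))):
        cost = np.mean([np.linalg.norm(p1[i] - p2[j]) for i, j in enumerate(perm)])
        best = min(best, cost)
    return best
```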
Graph neural networks (GNNs) have demonstrated significantly improved predictive performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models from perspectives such as GNNExplainer, XGNN, and PGExplainer. Although such works present systematic frameworks for explaining GNNs, a holistic review of explainable GNNs is not yet available. In this survey, we present a comprehensive review of the explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them by the explainable methods they use. We further provide common performance metrics for GNN explanations and point out several future research directions.
The findable, accessible, interoperable, and reusable (FAIR) data principles have provided a framework for examining, evaluating, and improving how we share data with the aim of facilitating scientific discovery. Efforts have been made to generalize these principles to research software and other digital products. Artificial intelligence (AI) models -- algorithms that have been trained on data rather than explicitly programmed -- are an important target for this because of the ever-increasing pace with which AI is transforming scientific and engineering domains. In this paper, we propose a practical definition of FAIR principles for AI models and create a FAIR AI project template that promotes adherence to these principles. We demonstrate how to implement these principles using a concrete example from experimental high energy physics: a graph neural network for identifying Higgs bosons decaying to bottom quarks. We study the robustness of these FAIR AI models and their portability across hardware architectures and software frameworks, and report new insights on the interpretability of AI predictions by studying the interplay between FAIR datasets and AI models. Enabled by publishing FAIR AI models, these studies pave the way toward reliable and automated AI-driven scientific discovery.
The determination of charged particle trajectories in collisions at the CERN Large Hadron Collider (LHC) is an important but challenging problem, especially in the high interaction density conditions expected during the future high-luminosity phase of the LHC (HL-LHC). Graph neural networks (GNNs) are a type of geometric deep learning algorithm that has been successfully applied to this task by embedding tracker data as a graph - nodes represent hits, while edges represent possible track segments - and classifying the edges as true or false track segments. However, their study in hardware- or software-based trigger applications has been limited due to their large computational cost. In this paper, we introduce an automated translation workflow, integrated into a broader tool called $\texttt{hls4ml}$, for converting GNNs into firmware for field-programmable gate arrays (FPGAs). We use this translation tool to implement GNNs for charged particle tracking, trained using the TrackML challenge dataset, on FPGAs, with designs targeting different graph sizes, task complexities, and latency/throughput requirements. This work could enable the inclusion of charged particle tracking GNNs at the trigger level for HL-LHC experiments.
In this work, we present a neural approach to reconstructing rooted tree graphs describing hierarchical interactions, using a novel representation we term the lowest common ancestor generations (LCAG) matrix. This compact formulation is equivalent to the adjacency matrix, but enables learning a tree's structure from its leaves alone, without the prior assumptions required if the adjacency matrix were used directly. Employing the LCAG therefore enables the first end-to-end trainable solutions that learn the hierarchical structure of varying tree sizes directly using only the terminal tree leaves. In the case of high energy particle physics, a particle decay forms a hierarchical tree structure of which only the final products can be observed experimentally, and the large combinatorial space of possible trees makes an analytic solution intractable. We demonstrate the use of the LCAG as a target in the task of predicting simulated particle physics decay structures using both a Transformer encoder and a neural relational inference encoder graph neural network. With this approach, we are able to correctly predict the LCAG purely from leaf features for a maximum tree depth of $8$ in $92.5\%$ of cases for trees with up to $6$ leaves (inclusive) and in $59.7\%$ of cases for trees with up to $10$ leaves in our simulated dataset.
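To make the LCAG concrete, here is one way to compute such a matrix for a small tree given parent pointers, taking a node's "generation" to be its height above the leaves (our assumed convention for illustration; the paper specifies the exact encoding):

```python
# Compute a lowest-common-ancestor generations (LCAG) matrix from parent
# pointers. Here, generation of a node = its height (leaves are generation 0),
# and entry (i, j) is the generation of the LCA of leaves i and j.
def lcag_matrix(parent, leaves):
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)

    def height(n):
        return 1 + max(map(height, children[n])) if n in children else 0

    def lca(u, v):
        anc = []
        while u is not None:
            anc.append(u)
            u = parent.get(u)
        while v not in anc:          # climb from v until we hit u's ancestor chain
            v = parent[v]
        return v

    n = len(leaves)
    return [[height(lca(leaves[i], leaves[j])) if i != j else 0
             for j in range(n)] for i in range(n)]
```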
Recently, methods of graph neural networks (GNNs) have been applied to solving problems in high energy physics (HEP) and have shown great potential for quark-gluon tagging with graph representations of jet events. In this paper, we introduce an approach that combines GNNs with a HaarPooling operation to analyze the events, called the HaarPooling Message Passing neural network (HMPNet). In HMPNet, HaarPooling not only extracts graph features but also embeds additional information obtained from k-means clustering of different particle observables. We construct HaarPooling from three different observables: absolute energy $\log E$, transverse momentum $\log p_T$, and relative coordinates $(\Delta\eta,\Delta\phi)$, then discuss their impacts on the tagging and compare the results with those obtained via MPNN and ParticleNet (PN). The results show that an appropriate selection of information for HaarPooling enhances the accuracy of quark-gluon tagging: adding the extra $\log p_T$ information to HMPNet outperforms all the others, whereas adding the relative coordinates $(\Delta\eta,\Delta\phi)$ is not very beneficial.
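The clustering side-input described above can be illustrated with a one-dimensional k-means on a single observable such as $\log p_T$ (a generic textbook sketch, not the HMPNet code):

```python
import numpy as np

# Plain 1-D k-means (Lloyd's algorithm): assign points to the nearest center,
# then move each center to the mean of its assigned points.
def kmeans_1d(x, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)   # initialize from the data
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):                  # skip empty clusters
                centers[j] = x[labels == j].mean()
    return labels, centers
```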
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analyses. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use a graph neural network (GNN) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum likelihood techniques used in IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to current IceCube methods. Alternatively, the GNN offers a reduction of the FPR by over a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by an average of 13%-20% compared to current maximum likelihood techniques. When running on a GPU, the GNN is capable of processing IceCube events at a rate nearly double that of the median IceCube trigger rate of 2.7 kHz, which opens up the possibility of using low-energy neutrinos in online searches for transient events.
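The point-cloud representation above starts from a graph over detector hits; a common construction connects each hit to its k nearest neighbours in space. A small sketch of that construction (an assumed, generic recipe; the actual IceCube graph-building details may differ):

```python
import numpy as np

# Build directed k-nearest-neighbour edges over a set of points (e.g. the
# positions of hit optical sensors). Returns (senders, receivers) id arrays.
def knn_edges(points, k):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]      # k closest neighbours per point
    senders = np.repeat(np.arange(len(points)), k)
    return senders, nbrs.ravel()
```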
In addition to the impressive predictive power of machine learning (ML) models, explanation methods have recently emerged that enable an interpretation of complex nonlinear learning models such as deep neural networks. Gaining a better understanding is especially important, e.g., for safety-critical ML applications or medical diagnostics. While such explainable AI (XAI) techniques have reached significant popularity for classifiers, so far little attention has been devoted to XAI for regression models (XAIR). In this review, we clarify the fundamental conceptual differences of XAI for regression and classification tasks, establish novel theoretical insights and analysis for XAIR, provide demonstrations of XAIR on genuine practical regression problems, and finally discuss the challenges remaining for the field.
Machine learning plays a crucial role in enhancing and accelerating the search for new fundamental physics. We review the state of machine learning methods and applications for new physics searches in the context of terrestrial high energy physics experiments, including the Large Hadron Collider, rare event searches, and neutrino experiments. While machine learning has a long history in these fields, the deep learning revolution (early 2010s) has yielded a qualitative shift in terms of the scope and ambition of research. These modern machine learning developments are the focus of the present review.
In this article, we use artificial intelligence algorithms to show how to enhance the resolution of the elementary particle track fitting in inhomogeneous dense detectors, such as plastic scintillators. We use deep learning to replace more traditional Bayesian filtering methods, drastically improving the reconstruction of the interacting particle kinematics. We show that a specific form of neural network, inherited from the field of natural language processing, is very close to the concept of a Bayesian filter that adopts a hyper-informative prior. Such a paradigm change can influence the design of future particle physics experiments and their data exploitation.
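As a reference point for the Bayesian-filtering baseline mentioned above, here is a minimal one-dimensional Kalman filter with a random-walk motion model (a textbook sketch under our assumed noise parameters, not the detector's actual track fitter):

```python
# 1-D Kalman filter: alternate a predict step (inflate uncertainty by process
# noise q) and an update step (blend in each measurement with variance r).
def kalman_1d(measurements, q=0.01, r=0.1, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: state uncertainty grows
        gain = p / (p + r)         # Kalman gain: how much to trust measurement z
        x = x + gain * (z - x)     # update the state estimate
        p = (1.0 - gain) * p       # updated (reduced) uncertainty
        estimates.append(x)
    return estimates
```

A neural sequence model, as the paper argues, can be viewed as replacing this hand-specified prior and noise model with one learned from data.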
Understanding the halo-galaxy connection is fundamental in order to improve our knowledge of the nature and properties of dark matter. In this work, we build a model that infers the mass of a halo given the positions, velocities, stellar masses, and radii of the galaxies it hosts. In order to capture information from the correlations among galaxy properties and their phase space, we make use of a graph neural network (GNN), which is designed to work with irregular and sparse data. We train our models on galaxies from the Cosmology and Astrophysics with MachinE Learning Simulations (CAMELS) project. Our model, which accounts for cosmological and astrophysical uncertainties, is able to constrain halo masses with an accuracy of $\sim 0.2$ dex. Furthermore, a GNN trained on a suite of simulations is able to preserve part of its accuracy when tested on simulations run with a different code. The PyTorch Geometric implementation of the GNN is publicly available on GitHub at https://github.com/PabloVD/HaloGraphNet
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI) due to their unique ability to ingest relatively unstructured data types as input. Although some elements of the GNN architecture are conceptually similar in operation to traditional neural networks (and neural network variants), other elements represent a departure from traditional deep learning techniques. This tutorial exposes the power and novelty of GNNs to AI practitioners by collating and presenting details regarding the motivations, concepts, mathematics, and applications of the most common and performant variants of GNNs. Importantly, we present this tutorial concisely, alongside practical examples, thereby providing a practical and accessible tutorial on the topic of GNNs.
The application of artificial intelligence (AI) and machine learning (ML) to cybersecurity challenges has gained traction in industry and academia, partially as a result of widespread malware attacks on critical systems such as cloud infrastructures and government institutions. Intrusion detection systems (IDS), which use some form of AI, have received widespread adoption due to their ability to handle vast amounts of data with a high prediction accuracy. These systems are hosted in the organizational cybersecurity operations center (CSOC) as a defense tool that monitors and detects malicious network flows that would otherwise impact confidentiality, integrity, and availability (CIA). CSOC analysts rely on these systems to make decisions about the detected threats. However, IDSs designed using deep learning (DL) techniques are often treated as black box models and do not provide a justification for their predictions. This creates a barrier for CSOC analysts, as they are unable to improve their decisions based on the models' predictions. One solution to this problem is to design explainable IDS (X-IDS). This survey reviews the state-of-the-art in explainable AI (XAI) for IDS, its current challenges, and discusses how these challenges span to the design of an X-IDS. In particular, we discuss black box and white box approaches comprehensively. We also present the trade-offs between these approaches in terms of their performance and ability to produce explanations. Furthermore, we propose a generic architecture that considers human-in-the-loop, which can be used as a guideline when designing an X-IDS. Research recommendations are given from three critical viewpoints: the need to define explainability for IDS, the need to create explanations tailored to various stakeholders, and the need to design metrics to evaluate explanations.
Between 2015 and 2019, the Horizon 2020-funded Innovative Training Network named "AMVA4NewPhysics" studied the customization and application of advanced multivariate analysis methods and statistical learning tools to high energy physics problems, as well as developing entirely new ones. Many of those methods were successfully used to improve the sensitivity of data analyses performed by the ATLAS and CMS experiments at the CERN Large Hadron Collider; several others, still in the testing phase, promise to further improve the precision of measurements of fundamental physics parameters and the reach of searches for new phenomena. In this paper, the most relevant new tools, among those studied and developed, are presented along with an evaluation of their performance.
There has been significant work recently in developing machine learning models in high energy physics (HEP), for tasks such as classification, simulation, and anomaly detection. Typically, these models are adapted from those designed for datasets in computer vision or natural language processing without necessarily incorporating inductive biases suited to HEP data, such as respecting its inherent symmetries. Such inductive biases can make the model more performant and interpretable, and reduce the amount of training data needed. To that end, we develop the Lorentz group autoencoder (LGAE), an autoencoder model equivariant with respect to the proper, orthochronous Lorentz group $\mathrm{SO}^+(3,1)$, with a latent space living in the representations of the group. We present our architecture and several experimental results on jets at the LHC and find it significantly outperforms a non-Lorentz-equivariant graph neural network baseline on compression and reconstruction, and anomaly detection. We also demonstrate the advantage of such an equivariant model in analyzing the latent space of the autoencoder, which can have a significant impact on the explainability of anomalies found by such black-box machine learning models.
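Lorentz equivariance is organized around invariants such as the Minkowski inner product: any architecture equivariant under $\mathrm{SO}^+(3,1)$ must leave such products unchanged. A small numerical check of that invariance under a boost (generic special relativity, not the LGAE code):

```python
import math

# Minkowski inner product of four-vectors with metric signature (+, -, -, -).
def minkowski_dot(p, q):
    return p[0] * q[0] - p[1] * q[1] - p[2] * q[2] - p[3] * q[3]

# Proper orthochronous Lorentz boost along the z-axis with velocity beta.
def boost_z(p, beta):
    g = 1.0 / math.sqrt(1.0 - beta ** 2)   # Lorentz factor
    e, px, py, pz = p
    return [g * (e - beta * pz), px, py, g * (pz - beta * e)]
```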
Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by recursively passing neural messages along edges of the input graph. However, incorporating both graph structure and feature information leads to complex models, and explaining predictions made by GNNs remains unsolved. Here we propose GNNEXPLAINER, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task. Given an instance, GNNEXPLAINER identifies a compact subgraph structure and a small subset of node features that have a crucial role in the GNN's prediction. Further, GNNEXPLAINER can generate consistent and concise explanations for an entire class of instances. We formulate GNNEXPLAINER as an optimization task that maximizes the mutual information between a GNN's prediction and the distribution of possible subgraph structures. Experiments on synthetic and real-world graphs show that our approach can identify important graph structures as well as node features, and outperforms alternative baseline approaches by up to 43.0% in explanation accuracy. GNNEXPLAINER provides a variety of benefits, from the ability to visualize semantically relevant structures to interpretability, to giving insights into errors of faulty GNNs.
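GNNEXPLAINER optimizes a soft edge mask against the mutual-information objective above; a much cruder but related idea, useful for intuition, is occlusion: score each edge by how much deleting it changes the model's output (our illustrative stand-in, not the paper's method):

```python
# Score each edge by the change in a scalar model output when that edge is
# removed. predict(nodes, edges) can be any black-box graph model.
def edge_occlusion_scores(predict, nodes, edges):
    base = predict(nodes, edges)
    return {e: abs(base - predict(nodes, [f for f in edges if f != e]))
            for e in edges}
```

Unlike the learned mask, this scores edges one at a time, so it misses interactions between edges; that gap is part of what motivates optimizing over subgraph distributions instead.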