Jet tagging is a critical yet challenging classification task in particle physics. While deep learning has transformed jet tagging and significantly improved performance, the lack of a large-scale public dataset impedes further enhancement. In this work, we present JetClass, a new comprehensive dataset for jet tagging. The JetClass dataset consists of 100 M jets, about two orders of magnitude larger than existing public datasets. A total of 10 types of jets are simulated, including several types unexplored for tagging so far. Based on the large dataset, we propose a new Transformer-based architecture for jet tagging, called Particle Transformer (ParT). By incorporating pairwise particle interactions in the attention mechanism, ParT achieves higher tagging performance than a plain Transformer and surpasses the previous state-of-the-art, ParticleNet, by a large margin. The pre-trained ParT models, once fine-tuned, also substantially enhance the performance on two widely adopted jet tagging benchmarks. The dataset, code and models are publicly available at https://github.com/jet-universe/particle_transformer.
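The central modification in ParT, adding pairwise interaction features as a bias on the attention logits, can be sketched in a few lines. This is an illustrative NumPy sketch, not the paper's implementation: in the real model the bias matrix U is computed from pairwise kinematic features (such as the angular distance and relative transverse momentum of each particle pair); here U is a random stand-in.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_pairwise_bias(Q, K, V, U):
    # Scaled dot-product attention whose logits are shifted by a
    # pairwise interaction matrix U before the softmax.
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d) + U, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
N, d = 4, 8                      # 4 particles, 8-dim embeddings (toy sizes)
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
# U would come from pairwise kinematics of particles i and j;
# a random matrix stands in for those features here.
U = rng.normal(size=(N, N))
out, weights = attention_with_pairwise_bias(Q, K, V, U)
```

Because U enters before the softmax, the attention weights remain a proper distribution over particles while being steered by physics-motivated pairwise features.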
Between 2015 and 2019, members of the Horizon 2020-funded innovative training network "AMVA4NewPhysics" studied the customization and application of advanced multivariate analysis methods and statistical learning tools to high-energy physics problems, and developed entirely new ones. Many of these methods have been successfully used to improve the sensitivity of data analyses performed by the ATLAS and CMS experiments at the CERN Large Hadron Collider; several others, still in the testing phase, promise to further improve the precision of measurements of fundamental physics parameters and the reach of searches for new phenomena. In this paper, the most relevant new tools, among those studied and developed, are presented along with an evaluation of their performance.
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, has been deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analyses. Reconstructing and classifying events is a challenge due to the irregular detector geometry, inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, it is possible to represent IceCube events as point cloud graphs and use graph neural networks (GNNs) as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum-likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to the current IceCube method. Alternatively, the GNN offers a more than factor-of-8 reduction in FPR (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves on average by 13%-20% compared to current maximum-likelihood techniques. When running on a GPU, the GNN is capable of processing IceCube events at a rate nearly double that of the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
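The point-cloud-graph idea can be illustrated with a minimal k-nearest-neighbour graph and one message-passing layer. This is a toy NumPy sketch under assumed shapes, not IceCube's actual network; positions, per-hit features, and weight matrices are all random stand-ins.

```python
import numpy as np

def knn_edges(pos, k):
    """Directed k-nearest-neighbour edges for a point cloud of sensor hits."""
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # no self-edges
    return np.argsort(d2, axis=1)[:, :k]    # (N, k) neighbour indices

def message_passing(x, nbrs, W_self, W_nbr):
    """One GNN layer: mean-aggregate neighbour features, mix, apply ReLU."""
    agg = x[nbrs].mean(axis=1)              # (N, F) aggregated messages
    return np.maximum(x @ W_self + agg @ W_nbr, 0.0)

rng = np.random.default_rng(1)
N, F = 6, 4
pos = rng.normal(size=(N, 3))   # toy stand-in for sensor positions
x = rng.normal(size=(N, F))     # toy per-hit features (e.g. charge, time)
nbrs = knn_edges(pos, k=3)
h = message_passing(x, nbrs, rng.normal(size=(F, F)), rng.normal(size=(F, F)))
```

Stacking such layers and pooling the node features yields a per-event representation for classification or regression heads.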
The creation of unstable heavy particles at the Large Hadron Collider is the most direct way to address some of the deepest open questions in physics. Collisions typically produce variable-size sets of observed particles which have inherent ambiguities complicating the assignment of observed particles to the decay products of the heavy particles. Current strategies for tackling these challenges in the physics community either ignore the physical symmetries of the decay products or consider all possible assignment permutations, and do not scale to complex configurations. The deep learning approach to sequence modelling based on attention has achieved state-of-the-art performance in natural language processing, but it lacks built-in mechanisms to deal with the unique symmetries found in physical set-assignment problems. We introduce a novel method for constructing symmetry-preserving attention networks which reflect the problem's natural invariances to efficiently find assignments without evaluating all permutations. This general approach is applicable to arbitrarily complex configurations and significantly outperforms current methods, improving reconstruction efficiency by between 19%-35% on typical benchmark problems while decreasing inference time by two to five orders of magnitude on the most complex events, making many important and previously intractable cases tractable. A full code repository containing a general library, the specific configurations used, and a complete dataset release, is available at https://github.com/Alexanders101/SPANet
In this work, we present a neural approach to reconstructing rooted tree graphs describing hierarchical interactions, using a novel representation we term the lowest common ancestor generations (LCAG) matrix. This compact formulation is equivalent to the adjacency matrix, but enables learning a tree's structure from its leaves alone, without the prior assumptions required if the adjacency matrix were used directly. Employing the LCAG therefore enables the first end-to-end trainable solution which learns the hierarchical structure of varying tree sizes directly, using only terminal tree leaves. In the case of high-energy particle physics, a particle decay forms a hierarchical tree structure of which only the final products can be observed experimentally, and the large combinatorial space of possible trees makes an analytic solution intractable. We demonstrate the use of the LCAG as a target in the task of predicting simulated particle physics decay structures using both a Transformer encoder and a neural relational-inference encoder graph neural network. With this approach, we are able to correctly predict the LCAG purely from leaf features for a maximum tree depth of 8 in 92.5% of cases for trees with up to 6 leaves (inclusive), and in 59.7% of cases for trees with up to 10 leaves (inclusive) in our simulated dataset.
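A minimal version of the LCAG construction: given a decay tree as a child-to-parent map, fill entry (i, j) with the generation of the lowest common ancestor of leaves i and j. This is only a sketch of the idea; the generation convention used here (height of the LCA above the leaves, symmetrized with a max) is a simplification and may differ in detail from the paper's definition.

```python
def lcag_matrix(parent, leaves):
    """LCAG matrix of a rooted tree given as a child -> parent map.
    Entry (i, j) is the generation of the lowest common ancestor of
    leaves i and j, counted as its height above the leaves (simplified
    convention, symmetrized with max); the diagonal stays 0."""
    def path_to_root(n):
        path = [n]
        while n in parent:
            n = parent[n]
            path.append(n)
        return path
    M = [[0] * len(leaves) for _ in leaves]
    for i, a in enumerate(leaves):
        for j, b in enumerate(leaves):
            if i == j:
                continue
            pa, pb = path_to_root(a), path_to_root(b)
            lca = next(n for n in pa if n in pb)  # deepest shared ancestor
            M[i][j] = max(pa.index(lca), pb.index(lca))
    return M

# toy decay: 0 -> (1, 2), 1 -> (3, 4); observable leaves are 3, 4, 2
M = lcag_matrix({1: 0, 2: 0, 3: 1, 4: 1}, [3, 4, 2])
```

The network's task is then the inverse direction: predict this matrix purely from the leaf features, from which the tree can be rebuilt.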
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
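The patch sequence that ViT feeds to a standard Transformer can be produced with a single reshape. A minimal sketch of that tokenization step only; the real model additionally applies a learned linear projection to each patch, prepends a class token, and adds position embeddings.

```python
import numpy as np

def image_to_patches(img, p):
    """Split an H x W x C image into non-overlapping p x p patches and
    flatten each one: the token sequence a Vision Transformer consumes."""
    H, W, C = img.shape
    assert H % p == 0 and W % p == 0
    patches = img.reshape(H // p, p, W // p, p, C).transpose(0, 2, 1, 3, 4)
    return patches.reshape(-1, p * p * C)

img = np.arange(32 * 32 * 3, dtype=float).reshape(32, 32, 3)  # toy image
seq = image_to_patches(img, p=8)   # 16 tokens of dimension 8*8*3 = 192
```

For a 224x224 image with 16x16 patches this yields 196 tokens, which is why attention cost grows quickly with resolution.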
Machine learning plays a crucial role in enhancing and accelerating the search for new fundamental physics. We review the current machine learning methods and applications for new physics searches in the context of terrestrial high-energy physics experiments, including the Large Hadron Collider, rare event searches, and neutrino experiments. While machine learning has a long history in these fields, the deep learning revolution (early 2010s) has yielded a qualitative shift in terms of the scope and ambition of research. These modern machine learning developments are the focus of the present review.
Many current approaches to machine learning in particle physics use generic architectures that require large numbers of parameters and disregard underlying physics principles, limiting their applicability as scientific modeling tools. In this work, we present a machine learning architecture that uses a set of inputs maximally reduced with respect to the full 6-dimensional Lorentz symmetry, and is fully permutation-equivariant throughout. We study the application of this network architecture to the standard task of top quark tagging and show that the resulting network outperforms all existing competitors despite much lower model complexity. In addition, we present a Lorentz-covariant variant of the same network applied to a 4-momentum regression task.
We investigate the sensitivity of new physics searches at the LHC using machine learning in cases where the background dominates and there is a high degree of overlap between the observables for signal and background. We use two different models, XGBoost and a deep neural network, to exploit correlations between observables, and compare this approach with traditional cut-based methods. We consider different methods for analyzing the models' output, and find that a template fit generally performs better than a simple cut. Through a decomposition of the model output, we additionally gain insight into the relationship between event kinematics and the machine learning model output. We consider a specific supersymmetric scenario as a concrete example, but the methodology can be applied to a much wider range of supersymmetric models.
We describe the outcome of a data challenge conducted as part of the Dark Machines initiative and the Les Houches 2019 physics workshop. The goal of the challenge was to detect signals of new physics at the LHC using unsupervised machine learning algorithms. First, we propose how an anomaly score could be implemented to define model-independent signal regions in LHC searches. We define and describe a large benchmark dataset, consisting of more than one billion simulated LHC proton-proton collision events. We then review a variety of anomaly detection and density estimation algorithms, developed in the context of the data challenge, and we measure their performance in a set of realistic analysis environments. We draw a number of useful conclusions that will aid the development of unsupervised new physics searches during the third run of the LHC, and provide our benchmark dataset for future studies at https://www.phenomldata.org. Code to reproduce the analysis is provided at https://github.com/bostdiek/darkmachines-unsupervisedChallenge.
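One simple way to realize the anomaly-score idea is to score events by the reconstruction error of a model fitted to background-dominated data and define a signal region by cutting at a fixed quantile of the score. This toy sketch uses a linear autoencoder (truncated PCA via SVD) on random stand-in data; it does not reproduce any of the challenge's actual algorithms.

```python
import numpy as np

def anomaly_scores(X, k):
    """Reconstruction-error anomaly score of a linear autoencoder:
    project onto the top-k principal components and measure how badly
    each event is reconstructed. Poorly modelled events score high."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[:k].T @ Vt[:k]          # rank-k reconstruction
    return np.linalg.norm(Xc - proj, axis=1)

rng = np.random.default_rng(6)
bkg = rng.normal(size=(500, 5))            # stand-in "background" events
scores = anomaly_scores(bkg, k=2)
threshold = np.quantile(scores, 0.99)      # keep the 1% most anomalous
```

A model-independent signal region is then simply the set of events with scores above the chosen background quantile.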
Recent developments in explainable AI (XAI) methods allow researchers to explore the inner workings of deep neural networks (DNNs), revealing crucial information about input-output relationships and realizing how data connects with machine learning models. In this paper we explore interpretability of DNN models designed to identify jets coming from top quark decay in high energy proton-proton collisions at the Large Hadron Collider (LHC). We review a subset of existing top tagger models and explore different quantitative methods to identify which features play the most important roles in identifying the top jets. We also investigate how and why feature importance varies across different XAI metrics, how feature correlations impact their explainability, and how latent space representations encode information as well as correlate with physically meaningful quantities. Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretation of these models. We additionally illustrate the activity of hidden layers as Neural Activation Pattern (NAP) diagrams and demonstrate how they can be used to understand how DNNs relay information across the layers and how this understanding can help to make such models significantly simpler by allowing effective model reoptimization and hyperparameter tuning. By incorporating observations from the interpretability studies, we obtain state-of-the-art top tagging performance from augmented implementation of existing networks.
We introduce the first algorithm to reconstruct multiple showers from the data collected with electromagnetic (EM) sampling calorimeters. Such detectors are widely used in high-energy physics to measure the energy and kinematics of incoming particles. In this work, we consider the case when many electrons pass through an Emulsion Cloud Chamber (ECC) brick, initiating electron-induced electromagnetic showers, which can occur with long exposure times or large incoming particle flux. For example, the SHiP experiment plans to use emulsion detectors for dark matter searches and neutrino physics investigations; its expected full flux is about 10^20 particles. To reduce the costs associated with replacing ECC bricks and with offline data taking (emulsion scanning), it was decided to increase the exposure time. We therefore expect to observe a large number of overlapping showers, which turns EM shower reconstruction into a challenging point cloud segmentation problem. Our reconstruction pipeline consists of a graph neural network that predicts an adjacency matrix, followed by a clustering algorithm. We propose a new layer type (EmulsionConv) that takes into account the geometrical properties of shower development in the ECC brick. For the clustering of overlapping showers, we use a modified hierarchical density-based clustering algorithm. Our method uses no prior information about the incoming particles and identifies up to 87% of electromagnetic showers in the emulsion detector. The main test bench for the shower reconstruction algorithm will be SND@LHC.
Vision transformers have become one of the most important models for computer vision tasks. While they outperform earlier work, they require heavy computational resources, with a complexity quadratic in the number of tokens $N$. This is a major drawback of the conventional self-attention (SA) algorithm. Here, we propose Unit Force Operated Vision Transformer (UFO-ViT), a novel SA mechanism with linear complexity. The main approach of this work is to eliminate the nonlinearity of the original SA. We factorize the matrix multiplication of the SA mechanism without complicated linear approximations. By modifying only a few lines of code relative to the original SA, the proposed models outperform most transformer-based models on image classification and dense prediction tasks across most capacity regimes.
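The key observation, that removing the softmax nonlinearity lets the attention product be regrouped from (QK^T)V into Q(K^T V), can be checked numerically. A sketch only: UFO-ViT actually replaces the softmax with a cross-normalization ("XNorm"), which is omitted here; a simple 1/N scaling stands in for any normalization.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Softmax-free attention: without the softmax, (Q K^T) V can be
    regrouped as Q (K^T V), reducing cost from O(N^2 d) to O(N d^2).
    UFO-ViT's XNorm is omitted; 1/N stands in for normalization."""
    N = Q.shape[0]
    return Q @ (K.T @ V) / N    # never materializes the N x N map

rng = np.random.default_rng(2)
N, d = 1024, 64
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
out = linear_attention(Q, K, V)
quad = (Q @ K.T) @ V / N        # same result via the quadratic grouping
```

The two groupings agree to floating-point precision, but only the linear one avoids the N x N attention map.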
The findable, accessible, interoperable, and reusable (FAIR) data principles have provided a framework for examining, evaluating, and improving how we share data with the aim of facilitating scientific discovery. Efforts have been made to generalize these principles to research software and other digital products. Artificial intelligence (AI) models -- algorithms that have been trained on data rather than explicitly programmed -- are an important target for this because of the ever-increasing pace with which AI is transforming scientific and engineering domains. In this paper, we propose a practical definition of FAIR principles for AI models and create a FAIR AI project template that promotes adherence to these principles. We demonstrate how to implement these principles using a concrete example from experimental high energy physics: a graph neural network for identifying Higgs bosons decaying to bottom quarks. We study the robustness of these FAIR AI models and their portability across hardware architectures and software frameworks, and report new insights on the interpretability of AI predictions by studying the interplay between FAIR datasets and AI models. Enabled by publishing FAIR AI models, these studies pave the way toward reliable and automated AI-driven scientific discovery.
In this article, we use artificial intelligence algorithms to show how to enhance the resolution of the elementary particle track fitting in inhomogeneous dense detectors, such as plastic scintillators. We use deep learning to replace more traditional Bayesian filtering methods, drastically improving the reconstruction of the interacting particle kinematics. We show that a specific form of neural network, inherited from the field of natural language processing, is very close to the concept of a Bayesian filter that adopts a hyper-informative prior. Such a paradigm change can influence the design of future particle physics experiments and their data exploitation.
In this paper, we ask whether Vision Transformers (ViTs) can serve as an underlying architecture for improving the adversarial robustness of machine learning models against evasion attacks. While earlier works have focused on improving convolutional neural networks, we show that ViTs are also highly suitable for adversarial training to achieve competitive performance. We achieve this objective using a custom adversarial training recipe, discovered via rigorous ablation studies on a subset of the ImageNet dataset. The canonical training recipe for ViTs recommends strong data augmentation, in part to compensate for the lack of vision inductive bias of attention modules compared to convolutions. We show that this recipe achieves suboptimal performance when used for adversarial training. In contrast, we find that omitting all heavy data augmentation and adding some additional components ($\varepsilon$-warmup and larger weight decay) significantly boosts the performance of robust ViTs. We show that our recipe generalizes to different classes of ViT architectures and to large-scale models on full ImageNet-1k. Additionally, investigating the reasons for the robustness of our models, we show that it is easier to generate strong attacks during training when using our recipe and that this leads to better robustness at test time. Finally, we further study one consequence of adversarial training by proposing a way to quantify the semantic nature of adversarial perturbations and highlight its correlation with the robustness of the model. Overall, we recommend that the community avoid translating the canonical training recipes of ViTs to robust training and rethink common training choices in the context of adversarial training.
Top quarks, produced in large numbers at the Large Hadron Collider, have a complex detector signature and require special reconstruction techniques. The most common decay mode, the "all-jet" channel, results in a 6-jet final state which is particularly difficult to reconstruct in $pp$ collisions due to the large number of possible permutations. We present a novel approach to this problem based on neural networks using a generalized attention mechanism, which we name Symmetry Preserving Attention Networks (SPA-Net). We train one such network to identify the decay products of each top quark unambiguously and without combinatorial explosion as an example of the power of this technique. This approach significantly outperforms existing state-of-the-art methods, correctly assigning all jets in 93.0% of 6-jet, 87.8% of 7-jet, and 82.6% of $\geq 8$-jet events.
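The flavour of symmetry preservation can be illustrated on the simplest sub-problem: the two interchangeable quarks from one W decay. Symmetrizing the joint jet-pair score builds the invariance into the output rather than enumerating permutations. A toy NumPy sketch with a random score matrix, not the paper's tensor-attention implementation.

```python
import numpy as np

def symmetrized_pair_scores(P):
    """P[i, j]: score that jets i and j are the two quarks of a W decay.
    The quarks are interchangeable, so the scores are symmetrized, and
    the diagonal is masked since one jet cannot fill both slots."""
    P = (P + P.T) / 2.0
    np.fill_diagonal(P, -np.inf)
    return P

rng = np.random.default_rng(3)
P = symmetrized_pair_scores(rng.normal(size=(6, 6)))   # toy 6-jet event
i, j = np.unravel_index(np.argmax(P), P.shape)         # best jet pair
```

SPA-Net generalizes this idea to the full event: each resonance gets a joint assignment tensor that respects the invariances of its decay, so the best assignment is read off directly instead of scoring every permutation.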
Here we present a machine learning framework and model implementation that can learn to simulate a wide variety of challenging physical domains, involving fluids, rigid solids, and deformable materials interacting with one another. Our framework-which we term "Graph Network-based Simulators" (GNS)-represents the state of a physical system with particles, expressed as nodes in a graph, and computes dynamics via learned message-passing. Our results show that our model can generalize from single-timestep predictions with thousands of particles during training, to different initial conditions, thousands of timesteps, and at least an order of magnitude more particles at test time. Our model was robust to hyperparameter choices across various evaluation metrics: the main determinants of long-term performance were the number of message-passing steps, and mitigating the accumulation of error by corrupting the training data with noise. Our GNS framework advances the state-of-the-art in learned physical simulation, and holds promise for solving a wide range of complex forward and inverse problems.
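One GNS-style update (build a connectivity-radius graph over particles, pass messages along edges, decode per-particle accelerations, integrate semi-implicitly) can be sketched as follows. The learned edge and node MLPs are replaced by a fixed toy message function here; only the overall structure follows the paper.

```python
import numpy as np

def gns_step(pos, vel, radius, dt):
    """One learned-simulator-style step: radius graph -> messages ->
    accelerations -> semi-implicit Euler integration. The toy message
    (mean displacement to neighbours) stands in for the learned MLPs."""
    diff = pos[None, :] - pos[:, None]                    # (N, N, D)
    dist = np.linalg.norm(diff, axis=-1)                  # (N, N)
    adj = (dist < radius) & ~np.eye(len(pos), dtype=bool)
    msgs = np.where(adj[..., None], diff, 0.0)            # masked edges
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    acc = msgs.sum(axis=1) / deg                          # toy "decoder"
    vel = vel + dt * acc                                  # semi-implicit Euler
    pos = pos + dt * vel
    return pos, vel

rng = np.random.default_rng(4)
pos = rng.uniform(size=(8, 2))     # toy 2-D particle positions
vel = np.zeros((8, 2))
pos2, vel2 = gns_step(pos, vel, radius=0.5, dt=0.1)
```

Rolling the step out over many iterations is what the paper evaluates; its noise-corrupted training is what keeps the rollout errors from accumulating.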
Future surveys such as the Legacy Survey of Space and Time (LSST) of the Vera C. Rubin Observatory will observe an order of magnitude more astrophysical transient events than any previous survey before. With this deluge of photometric data, it will be impossible for all such events to be classified by humans alone. Recent efforts have sought to leverage machine learning methods to tackle the challenge of astronomical transient classification, with ever improving success. Transformers are a recently developed deep learning architecture, first proposed for natural language processing, that have shown a great deal of recent success. In this work we develop a new transformer architecture, which uses multi-head self attention at its core, for general multi-variate time-series data. Furthermore, the proposed time-series transformer architecture supports the inclusion of an arbitrary number of additional features, while also offering interpretability. We apply the time-series transformer to the task of photometric classification, minimising the reliance of expert domain knowledge for feature selection, while achieving results comparable to state-of-the-art photometric classification methods. We achieve a logarithmic-loss of 0.507 on imbalanced data in a representative setting using data from the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC). Moreover, we achieve a micro-averaged receiver operating characteristic area under curve of 0.98 and micro-averaged precision-recall area under curve of 0.87.
The point cloud learning community has witnessed a model shift from CNNs to Transformers, with pure Transformer architectures attaining top accuracy on the major learning benchmarks. However, existing point Transformers are computationally expensive, since they need to generate a large attention map which has quadratic complexity (in both space and time) with respect to the input size. To solve this shortcoming, we introduce Patch ATtention (PAT) to adaptively learn a much smaller set of bases upon which the attention maps are computed. Through a weighted summation over these bases, PAT not only captures the global shape context but also achieves linear complexity in the input size. In addition, we propose a lightweight Multi-Scale aTtention (MST) block to build attention among features of different scales, providing the model with multi-scale features. Equipped with the PAT and MST, we construct our neural architecture, called PatchFormer, which integrates both modules into a joint framework for point cloud learning. Extensive experiments demonstrate that our network achieves comparable accuracy on general point cloud learning tasks with a 9.2x speed-up over previous point Transformers.
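The linear-complexity trick behind PAT, attending to a small set of M bases instead of all N points so the attention map is N x M, can be sketched as follows. This is simplified: in the paper the bases are produced adaptively from the input rather than given as a fixed matrix, and the multi-scale MST block is not shown.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patch_attention(X, B, Wq, Wk, Wv):
    """Attention against a small basis B (M x d, M << N): the attention
    map is N x M rather than N x N, so cost grows linearly with the
    number of input points N."""
    Q = X @ Wq                  # (N, d) queries from the points
    K, V = B @ Wk, B @ Wv       # (M, d) keys/values from the basis
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))   # (N, M) attention map
    return A @ V                # weighted sum over the bases

rng = np.random.default_rng(5)
N, M, d = 2048, 16, 32          # toy sizes: 2048 points, 16 bases
out = patch_attention(rng.normal(size=(N, d)), rng.normal(size=(M, d)),
                      *(rng.normal(size=(d, d)) for _ in range(3)))
```

Because M is a small constant, doubling the number of points only doubles the attention cost, instead of quadrupling it as in full self-attention.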