Deep neural networks (DNNs) are so over-parameterized that recent research has found they already contain highly accurate subnetworks at random initialization. Finding these subnetworks is a viable alternative training method to weight learning. In parallel, another line of work has hypothesized that deep residual networks (ResNets) approximate the behavior of shallow recurrent neural networks (RNNs), and a method for compressing them into recurrent models has been proposed. This paper proposes blending these lines of research into a highly compressed yet accurate model: the Hidden-Fold Network (HFN). By first folding a ResNet into a recurrent structure and then searching for an accurate subnetwork hidden within the randomly initialized model, a high-performing yet tiny HFN is obtained without ever updating the weights. As a result, HFN achieves performance equivalent to ResNet50 on CIFAR100 while occupying 38.5x less memory, and similar performance on ImageNet with a 26.8x smaller memory footprint. HFNs become even more attractive by remaining accurate when run on highly quantized and randomly weighted DNN inference accelerators. Code is available at https://github.com/lopez-angel/hidden-fold-networks
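The subnetwork search described above can be illustrated as score-based selection over frozen random weights: scores, not weights, are optimized, and only the top-scoring connections survive. This is a minimal toy sketch; the layer size and `keep_frac` value are illustrative, not the paper's implementation.

```python
import random

random.seed(0)

def topk_mask(scores, keep_frac):
    """Keep only the top-scoring fraction of connections."""
    k = max(1, int(len(scores) * keep_frac))
    threshold = sorted(scores, reverse=True)[k - 1]
    return [1.0 if s >= threshold else 0.0 for s in scores]

# Frozen random weights: never updated while searching for the subnetwork.
weights = [random.gauss(0.0, 1.0) for _ in range(10)]
# Per-weight scores: in this family of methods these are what gets optimized.
scores = [random.random() for _ in range(10)]

mask = topk_mask(scores, keep_frac=0.3)
subnet = [w * m for w, m in zip(weights, mask)]
```

In the real method the scores would be updated by gradient descent while the weights stay fixed at their random initialization.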
In recent years, reinforcement learning (RL) has become increasingly successful in its application to science and the process of scientific discovery in general. However, while RL algorithms learn to solve increasingly complex problems, interpreting the solutions they provide becomes ever more challenging. In this work, we gain insights into an RL agent's learned behavior through a post-hoc analysis based on sequence mining and clustering. Specifically, frequent and compact subroutines, used by the agent to solve a given task, are distilled as gadgets and then grouped by various metrics. This process of gadget discovery develops in three stages: first, an RL agent generates data; then, a mining algorithm extracts gadgets; and finally, the obtained gadgets are grouped by a density-based clustering algorithm. We demonstrate our method by applying it to two quantum-inspired RL environments. First, we consider simulated quantum optics experiments for the design of high-dimensional multipartite entangled states, where the algorithm finds gadgets that correspond to modern interferometer setups. Second, we consider a circuit-based quantum computing environment where the algorithm discovers various gadgets for quantum information processing, such as quantum teleportation. This approach to analyzing the policy of a learned agent is agent- and environment-agnostic and can yield interesting insights into any agent's policy.
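The mining stage can be illustrated with a toy frequency count of contiguous action subsequences; frequent n-grams become gadget candidates. The gate names, subsequence length, and count threshold below are illustrative, not the paper's algorithm.

```python
from collections import Counter

def frequent_subroutines(sequences, length, min_count):
    """Count contiguous action n-grams across episodes; the frequent
    ones are candidate gadgets for later clustering."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - length + 1):
            counts[tuple(seq[i:i + length])] += 1
    return {gadget: c for gadget, c in counts.items() if c >= min_count}

# Hypothetical agent action sequences from a circuit-based environment.
episodes = [
    ["H", "CNOT", "M"],
    ["X", "H", "CNOT", "M"],
    ["H", "CNOT", "Z"],
]
gadgets = frequent_subroutines(episodes, length=2, min_count=2)
print(gadgets)  # {('H', 'CNOT'): 3, ('CNOT', 'M'): 2}
```

A real sequence-mining algorithm would also handle gaps and variable-length patterns; this sketch only shows the frequency-filtering idea before the density-based clustering step.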
Deep learning models have shown promising results in recognizing depressive states using video-based facial expressions. While successful models typically leverage 3D-CNNs or video distillation techniques, the differing use of pretraining, data augmentation, preprocessing, and optimization techniques across experiments makes it difficult to make fair architectural comparisons. We propose instead to enhance two simple models based on ResNet-50, which use only static spatial information, by applying two specific face alignment methods and improved data augmentation, optimization, and scheduling techniques. Our extensive experiments on benchmark datasets obtain results similar to those of sophisticated spatio-temporal models for single streams, while the score-level fusion of two different streams outperforms state-of-the-art methods. Our findings suggest that specific modifications in the preprocessing and training process result in noticeable differences in the performance of the models and could hide effects originally attributed to the use of different neural network architectures.
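Score-level fusion, as used above to combine the two streams, amounts to mixing the per-class scores produced by each network after they have run independently. A minimal sketch; the equal weighting is an assumption, not the paper's reported configuration.

```python
def score_fusion(scores_a, scores_b, w=0.5):
    """Late (score-level) fusion: combine the output scores of two
    independently trained streams with a convex weight w."""
    return [w * a + (1.0 - w) * b for a, b in zip(scores_a, scores_b)]

# Hypothetical per-class scores from the two streams.
fused = score_fusion([0.8, 0.2], [0.6, 0.4])
```

In practice the weight could be tuned on a validation set rather than fixed at 0.5.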
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning. A federated learning architecture consists of a central server and a number of clients that have access to private, potentially sensitive data. Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server that manages the collaborative learning process. FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance. However, when the number of participating clients is large, the overhead of managing the clients slows down the learning. Thus, client selection has been introduced as a strategy to limit the number of communicating parties at every step of the process. Since the early naïve random selection of clients, several client selection methods have been proposed in the literature. Unfortunately, given that this is an emergent field, there is a lack of a taxonomy of client selection methods, making it hard to compare approaches. In this paper, we propose a taxonomy of client selection in Federated Learning that enables us to shed light on current progress in the field and identify potential areas of future research in this promising area of machine learning.
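The naïve random selection baseline mentioned above can be sketched in a few lines: each round, the server samples a fixed fraction of the registered clients to participate. The 10% fraction and client IDs are illustrative.

```python
import random

def select_clients(client_ids, fraction, rng):
    """Naive random client selection: sample a fixed fraction of
    clients (at least one) to participate in the current round."""
    k = max(1, int(len(client_ids) * fraction))
    return rng.sample(client_ids, k)

rng = random.Random(42)
clients = list(range(100))
round_participants = select_clients(clients, fraction=0.1, rng=rng)
print(len(round_participants))  # 10
```

More sophisticated selection methods surveyed by such a taxonomy would replace the uniform sample with criteria like client availability, data quality, or past contribution.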
We study the influence of different loss functions on lesion segmentation in medical images. Although the cross-entropy (CE) loss is the most popular choice when dealing with natural images, for biomedical image segmentation the soft Dice loss is often preferred due to its handling of imbalanced scenarios. The combination of these two functions has also been successfully applied to such tasks. A less studied question is the generalization ability of all these losses in the presence of out-of-distribution (OOD) data, i.e., samples that appear at test time and were drawn from a distribution different from that of the training images. In our case, we train models on images that always contain lesions, but at test time we also have lesion-free samples. We analyze the impact that minimizing the different loss functions has on in-distribution and out-of-distribution performance through comprehensive experiments on ulcer segmentation in endoscopic and diabetic foot images. Our findings are surprising: the CE-Dice loss combination, which excels at segmenting in-distribution images, performs poorly when dealing with OOD data, which leads us to recommend adopting the CE loss for this kind of problem, due to its robustness and ability to generalize to OOD samples. The code associated with our experiments can be found at https://github.com/agaldran/lesion_losses_ood
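As a rough illustration of the losses compared above, here is a minimal pixelwise sketch for binary masks. The `alpha` weighting and epsilon smoothing are assumptions for the sketch, not the paper's exact formulation; note how the Dice term depends on the total foreground mass, which is what makes lesion-free (all-zero) masks a delicate case.

```python
import math

def cross_entropy(p, y, eps=1e-7):
    """Binary cross-entropy averaged over pixels (p: probabilities, y: 0/1)."""
    return -sum(yi * math.log(pi + eps) + (1 - yi) * math.log(1 - pi + eps)
                for pi, yi in zip(p, y)) / len(p)

def soft_dice_loss(p, y, eps=1e-7):
    """1 - soft Dice coefficient; driven by overlap with the foreground mask."""
    inter = sum(pi * yi for pi, yi in zip(p, y))
    return 1.0 - (2.0 * inter + eps) / (sum(p) + sum(y) + eps)

def ce_dice(p, y, alpha=0.5):
    """Equal-weight CE-Dice combination (alpha is an assumed weighting)."""
    return alpha * cross_entropy(p, y) + (1 - alpha) * soft_dice_loss(p, y)

pred, mask = [0.9, 0.8, 0.1], [1, 1, 0]
print(round(ce_dice(pred, mask), 3))  # 0.125
```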
Drowsiness is a major concern for drivers and one of the leading causes of traffic accidents. Advances in cognitive neuroscience and computer science have enabled the detection of drivers' drowsiness using brain-computer interfaces (BCIs) and machine learning (ML). However, several challenges remain open and should be faced. First, a comprehensive evaluation of drowsiness detection performance using a heterogeneous set of ML algorithms is missing in the literature. Second, the detection performance of scalable ML models suitable for groups of subjects needs to be studied and compared with that of the individual models proposed in the literature. To address these limitations, this work presents an intelligent framework that employs BCIs and features based on electroencephalography (EEG) to detect drowsiness in driving scenarios. The SEED-VIG dataset is used to feed different ML regressors and three-class classifiers, and the best-performing models are then evaluated, analyzed, and compared for individual subjects and groups of them. Regarding individual models, Random Forest (RF) obtained a 78% F1-score, improving on the 58% obtained by models used in the literature, such as Support Vector Machines (SVMs). Regarding scalable models, RF reached a 79% F1-score, demonstrating the effectiveness of these approaches. The lessons learned can be summarized as follows: i) not only SVMs but also other models insufficiently explored in the literature are relevant for drowsiness detection, and ii) scalable approaches suitable for groups of subjects are effective at detecting drowsiness, even when evaluated on new subjects not included in the model training.
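The F1-scores reported above come from three-class classifiers; assuming a macro average over the classes (the averaging scheme and class labels below are hypothetical), the metric can be sketched as:

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: per-class F1 computed one-vs-rest, then averaged."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

labels = ["awake", "tired", "drowsy"]
```

In practice a library implementation (e.g. scikit-learn's `f1_score` with `average="macro"`) would be used instead of hand-rolling this.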
Networks of social interactions are the substrate upon which civilizations are built. Often, we create new bonds with people we like, or consider that our relationships are damaged through the intervention of third parties. Despite their importance and the huge impact these processes have on our lives, a quantitative scientific understanding of them is still in its infancy, mainly due to the difficulty of collecting large datasets of social networks that include individual attributes. In this work, we present a thorough study of real social networks from 13 schools, with more than 3,000 students and 60,000 declared positive and negative relationships, including tests on the personal traits of all students. We introduce a metric, the "triadic influence", that measures the influence of nearest neighbors on the relationships of their contacts. We use neural networks to predict the relationships and to extract the probability that two students are friends or enemies based on their personal attributes or the triadic influence. Alternatively, we can use a high-dimensional embedding of the network structure to predict the relationships. Remarkably, the triadic influence (a simple one-dimensional metric) achieves the highest accuracy in predicting the relationship between two students. We postulate that the probabilities extracted from the neural networks, as functions of the triadic influence and the students' personalities, control the evolution of real social networks, opening new avenues for the quantitative study of these systems.
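One plausible form of such a neighbor-based metric is the average signed agreement of the links two people share with their common contacts; the sketch below is illustrative and may differ from the paper's exact definition, and the names are invented.

```python
def triadic_influence(links, i, j):
    """Average signed product of the links i and j share with common
    neighbors: +1-leaning values suggest friendship, -1-leaning enmity.
    links maps frozenset({a, b}) -> +1 (friendship) or -1 (enmity)."""
    def neighbors(x):
        return {next(iter(e - {x})) for e in links if x in e}
    common = neighbors(i) & neighbors(j)
    if not common:
        return 0.0
    return sum(links[frozenset({i, k})] * links[frozenset({j, k})]
               for k in common) / len(common)

# A small signed triangle: ana likes bea and cal, but bea and cal clash.
friends = {frozenset({"ana", "bea"}): +1,
           frozenset({"ana", "cal"}): +1,
           frozenset({"bea", "cal"}): -1}
```

For the pair ("ana", "bea"), the only common neighbor is "cal", and the disagreement of their links to "cal" yields a negative influence value.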
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detector play a central role in IceCube data analyses. Reconstructing and classifying events is a challenge due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with graph neural networks (GNNs) serving as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the current state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared to current IceCube methods. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves on average by 13%-20% compared with current maximum likelihood techniques. When running on a GPU, the GNN is capable of processing IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
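As a schematic of message passing over such a point-cloud graph, the following toy step uses plain mean aggregation over a node's neighborhood; a real GNN layer would apply learned weights and nonlinearities, and the node features here (stand-ins for sensor-hit attributes) are invented.

```python
def gnn_step(features, edges):
    """One round of mean aggregation on an undirected graph:
    each node's new feature is the average of itself and its neighbors."""
    neighbors = {n: [] for n in features}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    out = {}
    for n, feat in features.items():
        msgs = [features[m] for m in neighbors[n]] + [feat]
        out[n] = [sum(vals) / len(msgs) for vals in zip(*msgs)]
    return out

# Two sensor hits connected by an edge; features could encode e.g. charge.
smoothed = gnn_step({"hit0": [0.0], "hit1": [2.0]}, [("hit0", "hit1")])
```

Stacking such steps lets information propagate across the detector graph, which is what enables the network to infer global quantities like direction and interaction vertex.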
Generative adversarial networks (GANs) are powerful machine learning models capable of generating fully synthetic samples of a desired phenomenon with high resolution. Despite their success, the training process of a GAN is highly unstable, and it is typically necessary to implement several ancillary heuristics on the networks to reach an acceptable convergence of the model. In this paper, we introduce a novel method to analyze the convergence and stability of generative adversarial network training. To this end, we propose decomposing the objective function of the adversarial min-max game that defines a periodic GAN into a Fourier series. By studying the dynamics of the truncated Fourier series under the continuous alternating gradient-descent algorithm, we are able to approximate the real flow and identify the main features of the convergence of the GAN. This approach is confirmed empirically by studying the training flow of a 2-parameter GAN aiming to generate an unknown exponential distribution. As a by-product, we show that convergent orbits in GANs are small perturbations of periodic orbits, so that the Nash equilibria are spiral attractors. This theoretically justifies the slow and unstable training observed in GANs.
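Schematically, the analysis replaces the periodic min-max objective with a truncated Fourier series and studies the continuous alternating gradient flow of the two players on it. The notation below (objective $V$, generator and discriminator parameters $\theta$ and $\phi$, coefficients $c_{nm}$, truncation order $N$) is illustrative, not the paper's.

```latex
% Truncated Fourier decomposition of a periodic min-max objective:
V(\theta, \phi) \;\approx\; \sum_{|n| \le N} \sum_{|m| \le N}
    c_{nm}\, e^{i (n\theta + m\phi)},
% with the continuous alternating gradient flow descending in \theta
% (the minimizing player) and ascending in \phi (the maximizing player):
\qquad
\dot{\theta} = -\,\partial_{\theta} V, \qquad
\dot{\phi} = +\,\partial_{\phi} V .
```

Periodic orbits of this flow are what the perturbation argument builds on when characterizing the Nash equilibria as spiral attractors.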