In human neuroscience, machine learning can help reveal lower-dimensional neural representations relevant to subjects' behavior. However, state-of-the-art models typically require large datasets for training and are therefore prone to overfitting on human neuroimaging data, which often have few samples but many input dimensions. Here, we exploit the fact that the features we seek in human neuroscience are precisely those relevant to subjects' behavior. We therefore developed a task-relevant autoencoder via classifier enhancement (TRACE) and tested its ability to extract behaviorally relevant, separable representations, compared with a standard autoencoder, on two severely truncated machine learning datasets. We then evaluated both models on fMRI data in which subjects observed animals and objects. TRACE outperformed the autoencoder and the raw inputs almost unilaterally, improving classification accuracy and showing up to a threefold improvement in discovering "cleaner", task-relevant representations. These results demonstrate TRACE's potential for a wide variety of data related to human behavior.
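The abstract does not spell out the architecture; a minimal sketch of the general idea behind a task-relevant autoencoder, i.e. an autoencoder whose latent code also feeds a classifier head and which is trained on a weighted sum of reconstruction and classification losses, might look as follows (layer sizes, the loss weight `alpha`, and all names are illustrative assumptions, not the paper's specification):

```python
import torch
import torch.nn as nn

class TaskRelevantAutoencoder(nn.Module):
    """Autoencoder with a classifier head on the latent code (illustrative sketch)."""
    def __init__(self, n_inputs, n_latent, n_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, 128), nn.ReLU(),
                                     nn.Linear(128, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                     nn.Linear(128, n_inputs))
        self.classifier = nn.Linear(n_latent, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

def trace_style_loss(x, y, x_hat, logits, alpha=1.0):
    # Reconstruction keeps the code faithful to the input; the classification
    # term pushes the latent space toward behaviorally relevant directions.
    recon = nn.functional.mse_loss(x_hat, x)
    clf = nn.functional.cross_entropy(logits, y)
    return recon + alpha * clf
```

The classification term is what gives the latent space a reason to separate behaviorally relevant classes, which a plain autoencoder trained on reconstruction alone does not have.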
The optimal design of federated learning (FL) algorithms for solving general machine learning (ML) problems in practical edge computing systems with quantized message passing remains an open problem. This paper considers an edge computing system in which the server and the workers may have different computing and communication capabilities and employ quantization before transmitting messages. To explore the full potential of FL in such an edge computing system, we first present a general FL algorithm, namely GenQSGD, parameterized by the numbers of global and local iterations, the mini-batch size, and the step size sequence. We then analyze its convergence for an arbitrary step size sequence and specify the convergence results under three commonly adopted step size rules, namely the constant, exponential, and diminishing step size rules. Next, we optimize the algorithm parameters to minimize the energy cost under a time constraint and a convergence error constraint, focusing on the overall implementation process of FL. Specifically, for any given step size sequence under each considered step size rule, we optimize the numbers of global and local iterations and the mini-batch size to optimally implement FL for applications with a preset step size sequence. We also optimize the step size sequence together with these algorithm parameters to explore the full potential of FL. The resulting optimization problems are challenging non-convex problems with non-differentiable constraint functions. We propose iterative algorithms to obtain KKT points, using general inner approximation (GIA) and techniques for solving complementary geometric programming (CGP). Finally, we numerically demonstrate the remarkable gains of GenQSGD with optimized algorithm parameters over existing FL algorithms and reveal the importance of optimally designing general FL algorithms.
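As a rough illustration of the algorithm family described above, the following single-process sketch simulates one global round of quantized parallel mini-batch SGD with per-worker local iteration counts; the uniform quantizer, the averaging rule, and all names are assumptions rather than the paper's exact specification:

```python
import numpy as np

def quantize(v, levels=16):
    """Simple uniform quantizer (illustrative stand-in for the paper's scheme)."""
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * levels) / levels * scale

def global_round(theta, workers, local_iters, batch_size, step):
    """One global iteration: each worker runs local mini-batch SGD from the
    current global model, quantizes its model update, and the server averages
    the quantized updates (hypothetical aggregation rule)."""
    updates = []
    for n, (grad_fn, data) in enumerate(workers):
        local = theta.copy()
        for _ in range(local_iters[n]):
            idx = np.random.choice(len(data), batch_size, replace=False)
            local -= step * grad_fn(local, data[idx])
        updates.append(quantize(local - theta))
    return theta + np.mean(updates, axis=0)
```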
The optimal algorithm design for federated learning (FL) remains an open problem. This paper explores the full potential of FL in practical edge computing systems, where workers may have different computing and communication capabilities and quantized intermediate model updates are sent between the server and the workers. First, we present a general quantized parallel mini-batch stochastic gradient descent (SGD) algorithm for FL, namely GenQSGD, which is parameterized by the number of global iterations, the numbers of local iterations at all workers, and the mini-batch size. We also analyze its convergence error for any choice of the algorithm parameters. Then, we optimize the algorithm parameters to minimize the energy cost under a time constraint and a convergence error constraint. The optimization problem is a challenging non-convex problem with non-differentiable constraint functions. We propose an iterative algorithm to obtain a KKT point using advanced optimization techniques. Numerical results demonstrate the significant gains of GenQSGD over existing FL algorithms and reveal the importance of optimally designing FL algorithms.
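Schematically, the parameter optimization described above can be written as below, with $K_0$ the number of global iterations, $\mathbf{K}=(K_1,\dots,K_N)$ the per-worker local iteration numbers, $B$ the mini-batch size, and $E$, $T$, $C$ standing for the energy, completion-time, and convergence-error models; the symbols are illustrative and not the paper's exact notation:

```latex
\min_{K_0,\,\mathbf{K},\,B} \; E(K_0,\mathbf{K},B)
\quad \text{s.t.} \quad
T(K_0,\mathbf{K},B) \le T_{\max},
\qquad
C(K_0,\mathbf{K},B) \le C_{\max}.
```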
Dictionary learning is a widely used unsupervised learning method in signal processing and machine learning. Most existing works on dictionary learning operate in an offline manner, and there are mainly two offline approaches. One alternately optimizes the dictionary and the sparse codes; the other optimizes the dictionary by restricting it to the orthogonal group. The latter, called orthogonal dictionary learning, has a lower-complexity implementation and is therefore more favorable for low-cost devices. However, existing schemes for orthogonal dictionary learning only work with batch data and cannot be implemented online, which makes them unsuitable for real-time applications. This paper proposes a novel online orthogonal dictionary learning scheme that dynamically learns the dictionary from streaming data without storing any historical data. The proposed scheme comprises a new problem formulation and an efficient online algorithm design with convergence analysis. In the problem formulation, we relax the orthogonality constraint to enable an efficient online algorithm. In the algorithm design, we propose a new Frank-Wolfe-based online algorithm with a convergence rate of O(ln t / t^(1/4)). The convergence rates with respect to key system parameters are also derived. Experiments with synthetic data and real-world sensor readings demonstrate the effectiveness and efficiency of the proposed online orthogonal dictionary learning scheme.
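The abstract does not state which relaxation of the orthogonality constraint is used; one common convex relaxation of the orthogonal group is its convex hull, the unit spectral-norm ball, over which the Frank-Wolfe linear minimization step reduces to an SVD. A hypothetical per-sample update along those lines (the soft-thresholding sparse coder and the step size rule are likewise assumptions) could look like:

```python
import numpy as np

def frank_wolfe_step(D, grad, t):
    """One Frank-Wolfe update over the unit spectral-norm ball, the convex hull
    of (square) orthogonal matrices; an illustrative relaxation, not necessarily
    the paper's exact formulation."""
    U, _, Vt = np.linalg.svd(grad, full_matrices=False)
    S = -U @ Vt                  # LMO: argmin <S, grad> subject to ||S||_2 <= 1
    gamma = 2.0 / (t + 2.0)      # standard diminishing Frank-Wolfe step size
    return (1 - gamma) * D + gamma * S

def online_update(D, x, t, thresh=0.1):
    """Process one streaming sample: sparse-code it against D by soft-thresholding,
    then take a Frank-Wolfe step on the reconstruction loss."""
    code = D.T @ x
    code = np.sign(code) * np.maximum(np.abs(code) - thresh, 0.0)
    grad = -(x - D @ code)[:, None] @ code[None, :]  # d/dD of 0.5*||x - D c||^2
    return frank_wolfe_step(D, grad, t)
```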
New-architecture GPUs such as the A100 are now equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but utilizing it efficiently can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released at https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
Charisma is considered one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence (AI) perspective to provide it with such skill. Beyond that, a plethora of use cases opens up for computational measurement of human charisma, such as tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Beyond that, automatic measurement also appears quite feasible given the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can both appear charismatic and analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
There are two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions. Our artificial scientists not only learn to answer given questions, but also continually invent new questions, by proposing hypotheses to be verified or falsified through potentially complex and time-consuming experiments, including thought experiments akin to those of mathematicians. While an artificial scientist expands its knowledge, it remains biased towards the simplest, least costly experiments that still have surprising outcomes, until they become boring. We present an empirical analysis of the automatic generation of interesting experiments. In the first setting, we investigate self-invented experiments in a reinforcement-providing environment and show that they lead to effective exploration. In the second setting, pure thought experiments are implemented as the weights of recurrent neural networks generated by a neural experiment generator. Initially interesting thought experiments may become boring over time.
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches to addressing the COD has led us to solving high-dimensional PDEs. This has opened doors to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as a classical Dense Neural Network (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. Besides TNN, we also introduce Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
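The abstract does not detail the tensor factorization used; as a minimal sketch of the general idea, the layer below replaces a dense weight matrix with a two-core tensor-train-style factorization, so that an (m1*m2)-to-(n1*n2) map costs m1*n1*r + r*m2*n2 parameters instead of m1*m2*n1*n2. Shapes, rank, and initialization are illustrative assumptions, and TNN Init is not covered here.

```python
import torch
import torch.nn as nn

class TTLinear(nn.Module):
    """Linear layer with a two-core tensor-train factorized weight (illustrative).
    Maps an input of size m1*m2 to an output of size n1*n2 using
    m1*n1*rank + rank*m2*n2 parameters instead of m1*m2*n1*n2."""
    def __init__(self, m1, m2, n1, n2, rank):
        super().__init__()
        self.shapes = (m1, m2, n1, n2)
        self.core1 = nn.Parameter(0.1 * torch.randn(m1, n1, rank))
        self.core2 = nn.Parameter(0.1 * torch.randn(rank, m2, n2))

    def forward(self, x):
        m1, m2, n1, n2 = self.shapes
        x = x.view(-1, m1, m2)
        # Contract input with both cores; equivalent to a structured dense matmul.
        y = torch.einsum('bjk,jia,akl->bil', x, self.core1, self.core2)
        return y.reshape(-1, n1 * n2)
```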
A statistical ensemble of neural networks can be described in terms of a quantum field theory (NN-QFT correspondence). The infinite-width limit is mapped to a free field theory, while finite N corrections are mapped to interactions. After reviewing the correspondence, we will describe how to implement renormalization in this context and discuss preliminary numerical results for translation-invariant kernels. A major outcome is that changing the standard deviation of the neural network weight distribution corresponds to a renormalization flow in the space of networks.
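In standard NN-QFT notation (not necessarily the paper's exact conventions), the dictionary can be summarized as follows: the infinite-width ensemble of network outputs f(x) is a free Gaussian field whose two-point function is the kernel, while at finite N the connected higher-point correlators are suppressed by powers of 1/N and play the role of interaction vertices.

```latex
% Free (infinite-width) theory: Gaussian statistics fixed by the kernel
\langle f(x_1) f(x_2) \rangle_{N\to\infty} = K(x_1, x_2), \qquad
\langle f(x_1)\cdots f(x_{2k}) \rangle_{\mathrm{conn}} \xrightarrow{\,N\to\infty\,} 0 \quad (k \ge 2),
% Finite-N corrections: connected correlators suppressed by 1/N act as interactions
\langle f(x_1) f(x_2) f(x_3) f(x_4) \rangle_{\mathrm{conn}} = \mathcal{O}\!\left(1/N\right).
```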
We present an automatic method for annotating images of indoor scenes with the CAD models of the objects by relying on RGB-D scans. Through a visual evaluation by 3D experts, we show that our method retrieves annotations that are at least as accurate as manual annotations, and can thus be used as ground truth without the burden of manually annotating 3D data. We do this using an analysis-by-synthesis approach, which compares renderings of the CAD models with the captured scene. We introduce a 'cloning procedure' that identifies objects that have the same geometry, to annotate these objects with the same CAD models. This allows us to obtain complete annotations for the ScanNet dataset and the recent ARKitScenes dataset.
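As a toy illustration of the analysis-by-synthesis idea (not the paper's actual pipeline), one can score a candidate CAD model and pose by how well its rendered depth explains the captured RGB-D depth; the functions, threshold, and scoring rule below are assumptions for illustration only.

```python
import numpy as np

def render_fit_score(rendered_depth, captured_depth, valid_mask, tau=0.05):
    """Fraction of valid pixels where the rendered CAD depth agrees with the
    captured depth within tau meters (hypothetical agreement criterion)."""
    agree = np.abs(rendered_depth - captured_depth) < tau
    return (agree & valid_mask).sum() / max(valid_mask.sum(), 1)

def best_candidate(candidate_renders, captured_depth, valid_mask):
    """Pick the CAD model / pose whose rendering best explains the scan."""
    scores = [render_fit_score(r, captured_depth, valid_mask) for r in candidate_renders]
    return int(np.argmax(scores)), scores
```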