In this book chapter, we briefly describe the main components that constitute the gradient descent method and its accelerated and stochastic variants. We aim to explain these components from a mathematical point of view, covering both theoretical and practical aspects, but at an elementary level. We first focus on basic variants of the gradient descent method and then extend our view to recent variants, especially variance-reduced stochastic gradient descent (SGD) schemes. Our approach relies on revealing the structures present inside the problem and the assumptions imposed on the objective function. Our convergence analysis unifies several known results and relies on a general, but elementary, recursive expression. We illustrate this analysis on several common schemes.
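As a minimal illustration of the basic schemes discussed above (the least-squares example and step sizes are our own choices, not taken from the chapter):

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, iters=200):
    """Basic scheme: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def sgd(comp_grad, x0, n_samples, step=0.01, epochs=200, seed=0):
    """Stochastic variant: replace the full gradient by the gradient
    of one randomly sampled component at each step."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(epochs):
        for i in rng.permutation(n_samples):
            x = x - step * comp_grad(x, i)
    return x

# Least-squares example: f(x) = 0.5 * ||A x - b||^2 = sum_i 0.5 * (a_i x - b_i)^2
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
full_grad = lambda x: A.T @ (A @ x - b)
comp_grad = lambda x, i: len(b) * A[i] * (A[i] @ x - b[i])  # unbiased estimator
x_gd = gradient_descent(full_grad, np.zeros(2))
x_sgd = sgd(comp_grad, np.zeros(2), n_samples=len(b))
```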
In this paper, we develop a new type of accelerated algorithm to solve some classes of maximally monotone equations as well as monotone inclusions. Instead of using Nesterov's acceleration approach, our methods rely on a so-called Halpern-type fixed-point iteration from [32], which has recently been exploited by a number of researchers, including [24, 70]. First, we derive a new variant of the anchored gradient scheme in [70] based on Popov's past extragradient method to solve a maximally monotone equation $G(x) = 0$. We show that our method achieves the same convergence rate as the anchored gradient algorithm in terms of the operator norm $\Vert G(x_k)\Vert$, but requires only one evaluation of $G$ per iteration, where $k$ is the iteration counter. Next, we develop two splitting algorithms to approximate a zero point of the sum of two maximally monotone operators. The first algorithm originates from the anchored gradient method combined with a splitting technique, while the second is its Popov variant, which can reduce the per-iteration complexity. Both algorithms appear to be new and can be viewed as accelerated variants of the Douglas-Rachford (DR) splitting method. They both achieve $\mathcal{O}(1/k)$ rates on the norm $\Vert G_{\gamma}(x_k)\Vert$ of the forward-backward residual operator $G_{\gamma}(\cdot)$ associated with the problem. We also propose a new accelerated Douglas-Rachford splitting scheme for solving this problem that achieves an $\mathcal{O}(1/k)$ convergence rate on $\Vert G_{\gamma}(x_k)\Vert$ under only maximally monotone assumptions. Finally, we specialize our first algorithm to solve convex-concave minimax problems and apply our accelerated DR scheme to derive a new variant of the alternating direction method of multipliers (ADMM).
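A minimal sketch of the anchoring idea behind such schemes (the test operator, step size, and exact update order below are our illustrative assumptions, not the paper's precise method):

```python
import numpy as np

def popov_anchored(G, x0, gamma=0.2, iters=2000):
    """Popov-type anchored (Halpern) iteration for G(x) = 0:
    one evaluation of G per iteration, reusing the previous
    half-step value, plus the anchor term beta_k * (x0 - x)."""
    x, g = x0.copy(), G(x0)
    for k in range(iters):
        beta = 1.0 / (k + 2)                 # standard Halpern anchor weights
        y = x + beta * (x0 - x) - gamma * g  # half step with the old value
        g = G(y)                             # the single evaluation per iteration
        x = x + beta * (x0 - x) - gamma * g  # full step with the new value
    return x

# Monotone Lipschitz example: G(x) = M x with M = 0.1*I plus a skew part
M = np.array([[0.1, 1.0], [-1.0, 0.1]])
x_k = popov_anchored(lambda x: M @ x, np.array([1.0, 1.0]))
print(np.linalg.norm(M @ x_k))  # ||G(x_k)|| decays roughly like O(1/k)
```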
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
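For context on the cross-validation and ensembling practices reported above, a minimal sketch (scikit-learn based, our own illustration rather than any participant's pipeline):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.integers(0, 2, size=100)  # stand-in training set

# K-fold cross-validation: K identical models, each validated on a held-out fold.
models = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    print("fold accuracy:", model.score(X[val_idx], y[val_idx]))
    models.append(model)

# Ensembling over the fold models: average the predicted probabilities.
def ensemble_predict(models, X_new):
    return np.mean([m.predict_proba(X_new) for m in models], axis=0).argmax(axis=1)
```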
Although scaling language models improves performance on a range of tasks, there are apparently some scenarios where scaling hurts performance. For instance, the Inverse Scaling Prize Round 1 identified four "inverse scaling" tasks, for which performance gets worse for larger models. These tasks were evaluated on models of up to 280B parameters, trained with up to 500 zettaFLOPs of compute. This paper takes a closer look at these four tasks. We evaluate models of up to 540B parameters, trained on five times more compute than those evaluated in the Inverse Scaling Prize. With this increased range of model sizes and training compute, three out of the four tasks exhibit what we call "U-shaped scaling": performance decreases up to a certain model size and then increases again up to the largest model evaluated. One hypothesis is that U-shaped scaling occurs when a task comprises a "true task" and a "distractor task". Medium-size models can do the distractor task, which hurts performance, while only large-enough models can ignore the distractor task and perform the true task. The existence of U-shaped scaling implies that inverse scaling may not hold for larger models. Additionally, we evaluate the inverse scaling tasks using chain-of-thought (CoT) prompting, alongside basic prompting without CoT. With CoT prompting, all four tasks show either U-shaped or positive scaling, achieving perfect solve rates on two tasks and several sub-tasks. This suggests that the term "inverse scaling task" is under-specified: a given task may exhibit inverse scaling for one prompt but positive or U-shaped scaling for a different prompt.
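To make the two prompting regimes concrete, a schematic sketch (the question is an invented distractor-style example in the spirit of the hypothesis above, not an actual Inverse Scaling Prize task):

```python
# A task with a tempting "distractor": the gambler's fallacy pulls toward a
# wrong answer, while the "true task" is elementary probability.
question = ("A fair coin has landed heads three times in a row. "
            "What is the probability that the next flip is heads?")

# Basic prompting: ask for the answer directly.
basic_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompting: elicit intermediate reasoning before the answer
# (zero-shot CoT trigger phrase).
cot_prompt = f"Q: {question}\nA: Let's think step by step."
```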
Graph neural networks (GNNs) have demonstrated excellent performance in a wide range of applications. However, the enormous size of large-scale graphs hinders their application in real-time inference scenarios. Although existing scalable GNNs leverage linear propagation to preprocess the features and accelerate training and inference, these methods still suffer from scalability issues when making inferences on unseen nodes, as the feature preprocessing requires the graph to be known and fixed. To speed up inference in the inductive setting, we propose a novel adaptive propagation order approach that generates a personalized propagation order for each node based on its topological information. This successfully avoids redundant computation in feature propagation. Moreover, the trade-off between accuracy and inference latency can be flexibly controlled by simple hyperparameters to match the latency constraints of different application scenarios. To compensate for the potential loss in inference accuracy, we further propose Inception Distillation, which exploits multi-scale receptive information to improve inference performance. Extensive experiments are conducted on four public datasets with different scales and characteristics, and the results show that our proposed inference acceleration framework outperforms the SOTA graph inference acceleration baselines in terms of both accuracy and efficiency. In particular, the advantage of our method is more significant on larger-scale datasets, and our framework achieves a $75\times$ inference speedup on the largest Ogbn-products dataset.
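A minimal sketch of topology-dependent propagation depth (the per-node stopping rule and threshold below are our illustrative assumptions, not the paper's exact criterion):

```python
import numpy as np

def adaptive_propagate(adj_norm, features, max_hops=8, tol=1e-3):
    """Propagate features hop by hop; each node stops once its
    representation changes by less than `tol`, so the effective
    propagation order is personalized by local topology."""
    h = features.copy()
    active = np.ones(features.shape[0], dtype=bool)  # nodes still propagating
    for _ in range(max_hops):
        h_new = adj_norm @ h
        delta = np.linalg.norm(h_new - h, axis=1)
        active &= delta > tol          # freeze nodes whose features converged
        h[active] = h_new[active]
        if not active.any():
            break
    # A real implementation would restrict the sparse product to active rows
    # to actually skip the redundant computation this sketch still performs.
    return h
```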
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning along these dimensions dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B, instruction-finetuned on 1.8K tasks, outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
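The core data transformation, phrasing a labeled example as a natural-language instruction, can be sketched as follows (the template is our own illustration, not one of the Flan templates):

```python
def to_instruction(example):
    """Phrase a sentiment-classification example as an instruction,
    yielding an (input, target) text pair for language-model finetuning."""
    prompt = (
        "Classify the sentiment of the following review as positive or negative.\n"
        f"Review: {example['text']}\nSentiment:"
    )
    return {"input": prompt, "target": example["label"]}

print(to_instruction({"text": "A delightful, sharply written film.", "label": "positive"}))
```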
We present the first empirical study investigating the influence of disfluency detection on the downstream tasks of intent detection and slot filling. We conduct this study for Vietnamese, a low-resource language for which there is no prior work and no public dataset available to explore. First, we extend the fluent Vietnamese intent detection and slot filling dataset PhoATIS by manually adding contextual disfluencies and annotating them. Then, we conduct experiments with strong baselines for disfluency detection and joint intent detection and slot filling, which are based on pre-trained language models. We find that: (i) disfluencies negatively affect the performance of the downstream intent detection and slot filling tasks, and (ii) in the disfluency context, the pre-trained multilingual language model XLM-R helps produce better intent detection and slot filling performance than the pre-trained monolingual model PhoBERT, which is the opposite of what is usually found in the fluent context.
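A minimal sketch of a joint intent detection and slot filling model on top of a pre-trained encoder such as XLM-R (the head sizes and architecture details here are our assumptions; the paper's baselines differ in detail):

```python
import torch.nn as nn
from transformers import AutoModel

class JointIntentSlot(nn.Module):
    """Joint model: one shared encoder, an intent head on the first
    token and a slot-labeling head applied to every token."""
    def __init__(self, encoder_name="xlm-roberta-base", n_intents=26, n_slots=120):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.intent_head = nn.Linear(hidden, n_intents)  # sentence-level
        self.slot_head = nn.Linear(hidden, n_slots)      # token-level

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        h = out.last_hidden_state                  # (batch, seq, hidden)
        intent_logits = self.intent_head(h[:, 0])  # intent from the <s> token
        slot_logits = self.slot_head(h)            # one label per token
        return intent_logits, slot_logits
```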
Tracking the ball and the players in team sports is key to analyzing performance or enhancing the game-viewing experience. When the only source of such data is broadcast video, a sports-field registration system is required to estimate the homography and reproject the ball or the players from image space onto the field. This paper describes a new basketball court registration framework developed in the context of the MMSports 2022 Camera Calibration Challenge. The method relies on an encoder-decoder network that estimates the positions of keypoints sampled with perspective-aware constraints. Regression of the basket positions and heavy data-augmentation techniques make the model robust to different arenas. Ablation studies show the positive contribution of each of our components on the challenge test set. Compared to the challenge baseline, our method divides the mean squared error by 4.7.
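The reprojection step that such a registration system performs can be sketched with OpenCV (the keypoint correspondences below are placeholders; in the paper a network predicts the court keypoints):

```python
import cv2
import numpy as np

# Placeholder correspondences between image keypoints (pixels) and
# court-model keypoints (meters, 28 x 15 m court).
img_pts = np.array([[120, 400], [820, 390], [760, 90], [180, 100]], dtype=np.float32)
court_pts = np.array([[0, 0], [28, 0], [28, 15], [0, 15]], dtype=np.float32)

H, _ = cv2.findHomography(img_pts, court_pts)  # estimate the homography

# Reproject a detected player position from image space onto the court.
player_img = np.array([[[450.0, 300.0]]], dtype=np.float32)
player_court = cv2.perspectiveTransform(player_img, H)
print(player_court)  # (x, y) in court coordinates (meters)
```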
For decades, computer systems have held large amounts of personal data. On the one hand, this abundance of data enables breakthroughs in artificial intelligence (AI), especially machine learning (ML) models. On the other hand, it can threaten users' privacy and weaken the trust between humans and AI. Recent regulations require that private information about a user be removable from computer systems in general, and from ML models in particular, upon request (e.g., the "right to be forgotten"). While removing data from back-end databases should be straightforward, it is not sufficient in the AI context, as ML models often "remember" the old data. Existing adversarial attacks have proven that private membership or attributes of the training data can be learned from a trained model. This phenomenon calls for a new paradigm, namely machine unlearning, to make ML models forget particular data. It turns out that recent work on machine unlearning has not been able to completely solve the problem, owing to the lack of common frameworks and resources. In this survey paper, we seek to provide a thorough investigation of machine unlearning in terms of its definitions, scenarios, mechanisms, and applications. Specifically, as a categorized collection of state-of-the-art research, we hope to serve as a broad reference for those seeking a primer on machine unlearning and its various formulations, design requirements, removal requests, algorithms, and uses in a variety of ML applications. Furthermore, we hope to outline the key findings and trends in the paradigm and highlight new research areas that have not yet seen the application of machine unlearning but could nonetheless benefit greatly from it. We hope this survey provides a valuable reference for ML researchers as well as those seeking to innovate privacy technologies. Our resources are available at https://github.com/tamlhp/awesome-machine-unlearning.
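The simplest exact instantiation of the paradigm, retraining from scratch without the records to be forgotten, can be sketched as follows (a naive baseline that the surveyed methods aim to outperform; the model and data are our placeholders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def unlearn_by_retraining(X, y, forget_idx):
    """Exact unlearning baseline: drop the records to be forgotten and
    retrain. Guaranteed to forget, but costly; approximate unlearning
    methods try to match this result without full retraining."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    return LogisticRegression().fit(X[keep], y[keep])

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), rng.integers(0, 2, size=200)
model = unlearn_by_retraining(X, y, forget_idx=[3, 17, 42])  # deletion request
```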
Current technologies in quantum-based communications bring a new integration of quantum data with classical data for hybrid processing. However, the frameworks of these technologies are restricted to a single classical or quantum task, which limits their flexibility in near-term applications. We propose a quantum reservoir processor to harness quantum dynamics in computational tasks requiring both classical and quantum inputs. This analog processor comprises a network of quantum dots, where quantum data is incident on the network and classical data is encoded via a coherent field that excites the network. We perform a multitasking application of quantum tomography and nonlinear equalization of classical channels. Interestingly, the tomography can be performed in a closed-loop manner via feedback control of the classical data. Therefore, if the classical input comes from a dynamical system, embedding this system in the closed loop allows hybrid processing to continue even when access to the external classical input is interrupted. Finally, we demonstrate that preparing quantum depolarizing channels is a novel quantum machine learning technique for quantum data processing.
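For intuition about the reservoir-computing principle behind the processor, a classical echo-state sketch of our own on the channel-equalization task (the paper's reservoir is a quantum-dot network driven by quantum and classical inputs, not this toy recurrent network):

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 50, 500
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state property
w_in = rng.normal(size=n_res)

u = rng.normal(size=T)   # transmitted classical signal
d = np.tanh(0.8 * u)     # nonlinear channel distortion of the signal

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + w_in * d[t])  # fixed, untrained reservoir dynamics
    states[t] = x

# Only the linear readout is trained: equalize the channel by recovering u.
w_out = np.linalg.lstsq(states, u, rcond=None)[0]
```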