The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
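As an illustration of the patch-based strategy respondents reported most often, below is a minimal sketch of sliding-window patch extraction; the helper and sizes are hypothetical, not taken from any surveyed solution.

```python
import numpy as np

def extract_patches(volume, patch_size=(64, 64, 64), stride=(32, 32, 32)):
    """Yield overlapping 3D patches from a volume too large to process at once."""
    D, H, W = volume.shape
    pd, ph, pw = patch_size
    sd, sh, sw = stride
    for z in range(0, D - pd + 1, sd):
        for y in range(0, H - ph + 1, sh):
            for x in range(0, W - pw + 1, sw):
                yield volume[z:z + pd, y:y + ph, x:x + pw]

# Example: a medical volume that would not fit on the GPU in one piece.
volume = np.random.rand(128, 256, 256).astype(np.float32)
print(sum(1 for _ in extract_patches(volume)))  # number of training patches
```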
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
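For context, a minimal sketch of loading a released BLOOM checkpoint with the Hugging Face transformers library (assumed installed); the full 176B model requires hundreds of GB of memory, so the small bloom-560m variant is used here for illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small member of the BLOOM family from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is an open-access language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```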
Single-cell technologies are revolutionizing the entire field of biology. The large volumes of data generated by single-cell technologies are high-dimensional, sparse, and heterogeneous, and have complicated dependency structures, making analyses based on conventional machine learning approaches challenging and impractical. In tackling these challenges, deep learning often demonstrates superior performance compared to traditional machine learning methods. In this work, we give a comprehensive survey of deep learning in single-cell analysis. We first introduce background on single-cell technologies and their development, as well as fundamental concepts of deep learning, including the most popular deep architectures. We present an overview of the single-cell analytic pipeline pursued in research applications, noting divergences due to data sources or specific applications. We then review seven popular tasks spanning different stages of the single-cell analysis pipeline: multimodal integration, imputation, clustering, spatial domain identification, cell-type deconvolution, cell segmentation, and cell-type annotation. For each task, we describe the most recent developments in classical and deep learning methods and discuss their advantages and disadvantages. Deep learning tools and benchmark datasets are also summarized for each task. Finally, we discuss future directions and the most recent challenges. This survey will serve as a reference for biologists and computer scientists and encourage collaborations.
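As a concrete instance of one surveyed task (clustering), a minimal sketch using the scanpy toolkit (assumed installed, together with its leiden dependency); the deep learning methods reviewed in the survey would replace the PCA embedding with a learned one.

```python
import scanpy as sc

adata = sc.datasets.pbmc3k()                      # small public scRNA-seq dataset
sc.pp.normalize_total(adata, target_sum=1e4)      # library-size normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]
sc.pp.pca(adata, n_comps=50)                      # linear embedding
sc.pp.neighbors(adata, n_neighbors=15)            # k-NN graph on the embedding
sc.tl.leiden(adata)                               # graph-based clustering
print(adata.obs["leiden"].value_counts())
```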
Specialized motions on quadruped robots are typically achieved by solving a trajectory optimization problem and executing the trajectory with a tracking controller. This approach parallels model predictive control (MPC) strategies, which typically regulate regular gaits via online re-planning. In this work, we present a nonlinear MPC (NMPC) technique that naturally re-plans both specialized motion skills and regular locomotion within a unified framework. The NMPC reasons about a hybrid dynamics model and is solved with a variant of a constrained differential dynamic programming (DDP) solver. The proposed NMPC enables the robot to perform a variety of agile skills, such as jumping, bounding, and trotting, as well as fast transitions between these skills. We evaluated the proposed algorithm with three challenging motion sequences that combine multiple agile skills, on two quadruped platforms, the Unitree A1 and the MIT Mini Cheetah, demonstrating its effectiveness and generality.
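The receding-horizon pattern underlying the approach can be sketched as follows; solve_ddp and step_dynamics are hypothetical stand-ins for the constrained DDP solver and the hybrid dynamics model, not the authors' implementation.

```python
import numpy as np

def solve_ddp(x0, x_ref, horizon):
    """Stand-in for the constrained DDP variant: return a control sequence."""
    return [np.zeros(12) for _ in range(horizon)]  # 12 joint torques

def step_dynamics(x, u, dt=0.01):
    """Stand-in for integrating the hybrid dynamics model one step."""
    return x

x = np.zeros(36)              # robot state (placeholder dimension)
x_ref = np.zeros(36)          # reference from the skill being executed
for t in range(100):          # re-plan at every control step
    u_seq = solve_ddp(x, x_ref, horizon=20)
    x = step_dynamics(x, u_seq[0])   # apply only the first control, then re-plan
```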
Reinforcement learning (RL) has witnessed great strides in quadrupedal locomotion, with sustained progress in reliable sim-to-real policy transfer. However, reusing a policy trained for another robot, which could save the time of retraining, remains a challenge. In this work, we present a framework for zero-shot policy retargeting, in which diverse motor skills can be transferred between robots of different shapes and sizes. The new framework centers on a planning-and-control pipeline that systematically integrates RL and model predictive control (MPC). The planning stage employs RL to generate dynamically plausible trajectories along with contact schedules, avoiding the combinatorial complexity of contact-sequence optimization. This information is then used to seed the MPC, which stably and robustly rolls out the policy via a new hybrid kino-dynamics (HKD) model that implicitly optimizes foothold locations. Hardware results demonstrate the ability to transfer policies from the A1 and Laikago robots to the MIT Mini Cheetah robot without re-tuning the policies.
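Schematically, the plan-then-track pipeline looks as follows; rl_planner and hkd_mpc are hypothetical stand-ins for the learned planner and the hybrid kino-dynamics MPC described in the abstract.

```python
def rl_planner(state):
    """Stand-in: return a reference trajectory and a contact schedule."""
    trajectory = [state] * 20                # placeholder rollout
    contact_schedule = [(1, 1, 1, 1)] * 20   # four feet, 1 = in contact
    return trajectory, contact_schedule

def hkd_mpc(state, trajectory, contact_schedule):
    """Stand-in: stabilize the plan; footholds are optimized implicitly."""
    return [0.0] * 12                        # placeholder joint torques

state = [0.0] * 36                           # target robot's state
trajectory, contacts = rl_planner(state)     # morphology-agnostic plan
torques = hkd_mpc(state, trajectory, contacts)  # robot-specific tracking
```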
Large language models have shown impressive few-shot results on a wide variety of tasks. However, when knowledge is key to such results, as is the case for tasks such as question answering and fact checking, massive parameter counts seem to be needed to store that knowledge. Retrieval-augmented models are known to excel at knowledge-intensive tasks without requiring as many parameters, but it is unclear whether they work in few-shot settings. In this work, we present Atlas, a carefully designed and pre-trained retrieval-augmented language model able to learn knowledge-intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT, and NaturalQuestions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on NaturalQuestions using only 64 examples, outperforming a 540B-parameter model despite having 50x fewer parameters.
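The retrieval-augmented pattern that Atlas builds on can be sketched as below; the scorer, retriever, and toy index are hypothetical illustrations (Atlas itself pairs a learned dense retriever with a sequence-to-sequence reader), but they show why the document index can be updated without retraining.

```python
def score(query, doc):
    """Toy lexical-overlap scorer; Atlas uses a learned dense retriever."""
    return len(set(query.split()) & set(doc.split()))

def retrieve(query, index, k=5):
    return sorted(index, key=lambda d: -score(query, d))[:k]

def answer(query, index, reader):
    docs = retrieve(query, index)
    prompt = "\n".join(docs) + "\nquestion: " + query
    return reader(prompt)    # reader = any seq2seq language model

# Updating knowledge = editing the index, with no parameter updates.
index = ["Atlas is a retrieval-augmented language model.",
         "BLOOM is a 176B-parameter open-access language model."]
print(answer("what is Atlas", index, reader=lambda p: p.splitlines()[0]))
```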
Recently, it has been observed that a transfer learning solution might be all we need to solve many few-shot learning benchmarks, raising important questions about when and how meta-learning algorithms should be deployed. In this paper, we seek to clarify these questions by 1. proposing a novel metric, the diversity coefficient, to measure the diversity of tasks in a few-shot learning benchmark, and 2. comparing transfer learning against meta-learning under fair conditions (same architecture, same optimizer, and all models trained to convergence). Using the diversity coefficient, we show that the popular MiniImageNet and CIFAR-FS few-shot learning benchmarks have low diversity. This novel insight contextualizes claims that transfer learning solutions are better than meta-learned solutions in the low-diversity regime under a fair comparison. Specifically, we empirically find that a low diversity coefficient correlates with a high similarity between transfer learning and MAML-learned solutions, in terms of both meta-test accuracy and classification-layer similarity (using feature-based distance metrics such as SVCCA, PWCCA, CKA, and OPD). To further support our claim, we find that this meta-test accuracy parity holds even as model size changes. Therefore, we conclude that in the low-diversity regime, MAML and transfer learning have equivalent meta-test performance under a fair comparison. We also hope our work inspires more thoughtful constructions and quantitative evaluations of meta-learning benchmarks.
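A sketch of the diversity-coefficient idea, under the assumption that it is the expected feature-based distance between pairs of tasks; the paper's metrics include SVCCA, PWCCA, CKA, and OPD, approximated here by 1 minus linear CKA.

```python
import numpy as np
from itertools import combinations

def linear_cka(X, Y):
    """Linear CKA similarity between two (n_examples x dim) feature matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

def diversity_coefficient(task_features):
    """Mean pairwise task distance; low values indicate a homogeneous benchmark."""
    return float(np.mean([1 - linear_cka(A, B)
                          for A, B in combinations(task_features, 2)]))

# One embedding matrix per sampled task (random stand-ins here).
tasks = [np.random.randn(50, 32) for _ in range(5)]
print(diversity_coefficient(tasks))
```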
Although more layers and more parameters generally improve the accuracy of a model, such big models typically have high computational complexity and require large memory, which exceeds the inference capacity of small devices and incurs long training times. Moreover, the long training and inference times of big models are hard to afford even on high-performance servers. As an efficient approach to compressing a large deep model (the teacher model) into a compact one (the student model), knowledge distillation is a promising way to deal with big models. Existing knowledge distillation methods cannot exploit elastically available computing resources and thus suffer from low efficiency. In this paper, we propose an elastic deep learning framework for knowledge distillation, EDL-Dist. The advantages of EDL-Dist are threefold. First, the inference and training processes are separated. Second, elastically available computing resources can be utilized to improve efficiency. Third, fault tolerance of the training and inference processes is supported. We conduct extensive experiments to show that the throughput of EDL-Dist is up to 3.125 times higher than that of the baseline method (online knowledge distillation), while the accuracy is similar or higher.
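For reference, the distillation loss at the core of any such framework; this is the standard temperature-softened formulation, not EDL-Dist's system design, whose contribution lies in how teacher inference and student training are scheduled across elastic resources.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend softened teacher matching with the usual hard-label loss."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)   # in EDL-Dist, produced by a separate
labels = torch.randint(0, 10, (8,))   # inference process, decoupled from training
print(kd_loss(student_logits, teacher_logits, labels))
```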
Approximate K-nearest-neighbor search (AKNNS) has become ubiquitous in modern applications, for example as a fast search procedure paired with two-tower deep learning models. Graph-based AKNNS methods in particular have received great attention due to their superior performance. These methods rely on greedy graph search to traverse the vectors in a database. Under this greedy search scheme, we make a key observation: many distance computations do not influence the search updates, so these computations can be approximated without hurting performance. As a result, we propose FINGER, a fast inference method for efficient graph search. FINGER approximates the distance function by estimating angles between neighboring residual vectors with low-rank bases and distribution matching. The approximate distances can be used to bypass unnecessary computations, leading to faster searches. Empirically, accelerating a popular graph-based method, HNSW, with FINGER outperforms existing graph-based methods by 20%-60% across different benchmark datasets.
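The bypass idea can be illustrated with a toy greedy search in which a cheap lower bound on the distance filters out unpromising neighbors; the norm-difference bound below is an illustrative stand-in, not FINGER's low-rank angle estimate.

```python
import numpy as np

def greedy_search(query, graph, vectors, entry):
    """Greedy graph search with a cheap lower-bound filter on distances."""
    q_norm = np.linalg.norm(query)
    best, best_d = entry, np.linalg.norm(vectors[entry] - query)
    improved = True
    while improved:
        improved = False
        for nb in graph[best]:
            # The reverse triangle inequality lower-bounds the true distance;
            # if even the bound exceeds the current best, skip the exact computation.
            if abs(np.linalg.norm(vectors[nb]) - q_norm) > best_d:
                continue
            d = np.linalg.norm(vectors[nb] - query)
            if d < best_d:
                best, best_d, improved = nb, d, True
    return best, best_d

vectors = np.random.randn(100, 16)
graph = {i: [(i + j) % 100 for j in range(1, 6)] for i in range(100)}
print(greedy_search(np.random.randn(16), graph, vectors, entry=0))
```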
Bayesian optimization (BO) has become popular for the sequential optimization of black-box functions. When BO is used to optimize a target function, we often have access to previous evaluations of potentially related functions. This raises the question of whether we can leverage these previous experiences to accelerate the current BO task through meta-learning (meta-BO), while ensuring robustness against potentially harmful dissimilar tasks that could sabotage the convergence of BO. This paper introduces two scalable and provably robust meta-BO algorithms: robust meta-Gaussian-process upper confidence bound (RM-GP-UCB) and RM-GP-Thompson sampling (RM-GP-TS). We prove that both algorithms are asymptotically no-regret even when some or all previous tasks are dissimilar to the current task, and show that RM-GP-UCB enjoys better theoretical robustness than RM-GP-TS. We also exploit the theoretical guarantees to optimize the weights assigned to individual tasks through regret minimization via online learning, which diminishes the impact of dissimilar tasks and thus further enhances robustness. Empirical evaluations show that (a) RM-GP-UCB performs effectively and consistently across various applications, and (b) RM-GP-TS, despite being less robust than RM-GP-UCB both in theory and in practice, performs competitively in some scenarios with fewer dissimilar tasks and is more computationally efficient.
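A minimal GP-UCB step, the backbone both algorithms extend; the task weighting shown is an assumed, simplified form of the paper's construction (posterior means of previous-task GPs added with learned weights), using scikit-learn's Gaussian process (assumed available).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def ucb_choose(gp_current, prior_gps, weights, candidates, beta=2.0):
    """Pick the next query point by a meta-weighted UCB score."""
    mu, sigma = gp_current.predict(candidates, return_std=True)
    score = mu + beta * sigma
    for w, gp_prev in zip(weights, prior_gps):
        score += w * gp_prev.predict(candidates)  # borrow from previous tasks
    return candidates[int(np.argmax(score))]

X = np.random.rand(5, 1)
y = np.sin(6 * X).ravel()                  # toy black-box observations
gp = GaussianProcessRegressor().fit(X, y)
cands = np.linspace(0, 1, 100).reshape(-1, 1)
print(ucb_choose(gp, prior_gps=[], weights=[], candidates=cands))
```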