In this paper, we introduce a novel variation of model-agnostic meta-learning in which an extra multiplicative parameter is introduced into the inner-loop adaptation. Our variation creates a shortcut in the parameter space for the inner-loop adaptation and increases model expressivity in a highly controllable manner. We show both theoretically and numerically that our variation alleviates the problem of conflicting gradients and improves training dynamics. We conduct experiments on three distinct problems: a toy classification problem for threshold comparison, a regression problem for wavelet transform, and a classification problem on MNIST. We also discuss ways to generalize our method to a broader class of problems.
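The abstract above only states that a multiplicative parameter enters the inner-loop update; the exact placement is not specified. A minimal sketch of one plausible form, assuming the multiplier rescales the adapted weights after a standard MAML gradient step (the names `inner_adapt`, `m`, and the toy quadratic task are illustrative, not the paper's):

```python
import numpy as np

def inner_adapt(theta, grad, lr=0.1, m=1.0):
    """One inner-loop MAML step with an extra multiplicative parameter m.

    The placement of m (rescaling the adapted weights) is an assumption
    made for illustration; the paper may position it differently.
    """
    return m * (theta - lr * grad)

# Toy quadratic task: L(theta) = 0.5 * ||theta - target||^2,
# whose gradient is simply (theta - target).
def grad_quadratic(theta, target):
    return theta - target

theta = np.zeros(3)
target = np.array([1.0, 2.0, 3.0])
adapted = inner_adapt(theta, grad_quadratic(theta, target), lr=0.5, m=1.2)
# With m = 1.0 this reduces to the vanilla MAML inner step, so the
# multiplier adds expressivity without changing the baseline behavior.
```

Setting `m=1.0` recovers standard MAML, which is one way the abstract's claim of "highly controllable" extra expressivity can be read.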
This paper proposes a new method, OFA-OCR, to transfer multimodal pretrained models to text recognition. Specifically, we recast text recognition as image captioning and directly transfer a unified vision-language pretrained model to the end task. Without pretraining on large-scale annotated or synthetic text recognition data, OFA-OCR outperforms the baselines and achieves state-of-the-art performance in the Chinese text recognition benchmark. Additionally, we construct an OCR pipeline with OFA-OCR, and we demonstrate that it can achieve competitive performance with the product-level API. The code (https://github.com/OFA-Sys/OFA) and demo (https://modelscope.cn/studios/damo/ofa_ocr_pipeline/summary) are publicly available.
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
In this paper, we present a detailed description of our system for the CAMRP-2022 evaluation. We propose a two-stage method for Chinese AMR parsing with alignment generation, consisting of a concept-prediction stage and a relation-prediction stage. Our model achieves alignment F1 scores of 0.7756 and 0.7074 on the CAMR 2.0 test set and the blind test set of CAMRP-2022, respectively. We also analyze the results and the limitations of the current method that we identify, such as error propagation and the class-imbalance problem. The code and trained models will be released at https://github.com/pkunlp-icler/two-stage-camrp for reproduction.
In most real-world recommendation scenarios, users exhibit multiple types of behavior (e.g., click, add-to-cart, purchase, etc.), which are beneficial for learning users' multifaceted preferences. Since the multiple types of behavior explicitly exhibit dependencies, effectively modeling the complex behavior dependencies is crucial for multi-behavior prediction. State-of-the-art multi-behavior models learn behavior dependencies indiscriminately, taking all historical interactions as input. However, different behaviors may reflect different aspects of user preference, which means some irrelevant interactions may act as noise for predicting the target behavior. To address the above limitations, we introduce multi-interest learning to multi-behavior recommendation. More specifically, we propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning (CKML) framework to learn shared and behavior-specific interests for different behaviors. CKML introduces two advanced modules, namely Coarse-grained Interest Extracting (CIE) and Fine-grained Behavioral Correlation (FBC), which work jointly to capture fine-grained behavioral dependencies. CIE uses knowledge-aware information to extract an initial representation for each interest. FBC incorporates a dynamic routing scheme to further allocate each behavior among the interests. Additionally, we use a self-attention mechanism to correlate different behavioral information at the interest level. Empirical results on three real-world datasets verify the effectiveness and efficiency of our model in leveraging multi-behavior data. Further experiments demonstrate the effectiveness of each module, as well as the robustness and superiority of the shared-and-specific modeling paradigm for multi-behavior data.
Recent developments in deep learning, combined with compressed sensing, enable fast reconstruction of undersampled MR images and have achieved state-of-the-art performance for Cartesian k-space trajectories. However, non-Cartesian trajectories (e.g., radial trajectories) need to be transformed onto a Cartesian grid in each iteration of network training, which slows down the training process and introduces inconvenience and delay. The multiple iterations of the non-uniform Fourier transform within the network offset the deep-learning advantage of fast inference. Current approaches typically either work with image-to-image networks or grid the non-Cartesian trajectories before network training to avoid the repeated gridding process. However, image-to-image networks cannot ensure k-space data consistency in the reconstructed images, and pre-processing of the non-Cartesian k-space introduces gridding errors that network training cannot compensate for. Inspired by the ability of Transformer networks to handle long-range dependencies in sequence transduction tasks, we propose to rearrange the radial spokes into sequential data according to the chronological order of acquisition and to use a Transformer to predict unacquired radial spokes from the acquired ones. We propose new data augmentation methods to generate a large amount of training data from a limited number of subjects, and the network can generalize to different anatomical structures. Experimental results show the superior performance of the proposed framework compared with state-of-the-art deep neural networks.
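A minimal sketch of the data rearrangement step described above: radial k-space spokes are reordered by acquisition time and split into an "acquired" prefix and the "unacquired" spokes a Transformer would be asked to predict. The shapes, the complex-to-real stacking, and the function name `spokes_to_sequence` are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def spokes_to_sequence(kspace, acq_order, n_acquired):
    """Rearrange radial spokes into a chronological sequence.

    kspace:     (n_spokes, n_readout) complex array, one row per spoke.
    acq_order:  permutation giving the chronological acquisition order.
    n_acquired: number of spokes actually measured; the rest are targets.

    Returns (acquired, targets) as real-valued tensors with a trailing
    real/imaginary channel axis, a common sequence format for Transformers.
    """
    seq = kspace[acq_order]                        # chronological order
    seq = np.stack([seq.real, seq.imag], axis=-1)  # (n_spokes, n_readout, 2)
    return seq[:n_acquired], seq[n_acquired:]

# Toy example: 8 spokes of 16 readout points, 6 acquired, 2 to predict.
rng = np.random.default_rng(0)
k = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
order = rng.permutation(8)
acquired, targets = spokes_to_sequence(k, order, n_acquired=6)
```

The point of the chronological ordering is that consecutive spokes become adjacent sequence positions, letting the Transformer's attention exploit temporal correlations between acquisitions.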
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Although recurrent neural network (RNN)-based video prediction methods have achieved significant success, their performance on datasets with high resolution remains far from satisfactory, due to the information loss problem and the perception-insensitive mean squared error (MSE) loss function. In this paper, we propose a Spatiotemporal Information-Preserving and Perception-Augmented model (STIP) to solve these two problems. To address the information loss problem, the proposed model aims to preserve the spatiotemporal information of the video during feature extraction and state transitions, respectively. First, a Multi-Grained Spatiotemporal Auto-Encoder (MGST-AE) is designed based on the X-Net structure. The proposed MGST-AE helps the decoders recall multi-grained information from the encoders in both the temporal and spatial domains, so that more spatiotemporal information can be preserved during feature extraction for high-resolution videos. Second, a Spatiotemporal Gated Recurrent Unit (STGRU) is designed based on the standard Gated Recurrent Unit (GRU) structure, which can efficiently preserve spatiotemporal information during state transitions. Compared with the popular Long Short-Term Memory (LSTM)-based predictive memories, the proposed STGRU achieves more satisfactory performance with a much lower computational load. Furthermore, to improve on the traditional MSE loss function, a Learned Perceptual Loss (LP-loss) is designed based on Generative Adversarial Networks (GANs), which helps obtain a satisfactory trade-off between objective quality and perceptual quality. Experimental results show that the proposed STIP predicts videos with more satisfactory visual quality than a variety of state-of-the-art methods. The source code is available at \url{https://github.com/zhengchang467/stiphr}.
Self-supervised learning (SSL) in speech involves training a speech representation network on a large-scale unannotated speech corpus, and then applying the learned representations to downstream tasks. Since the majority of downstream tasks of SSL in speech focus largely on the content information in speech, the most desirable speech representations should be able to disentangle unwanted variations, such as speaker variations, from the content. However, disentangling speakers is very challenging, because removing the speaker information can easily result in a loss of content as well, and the damage of the latter usually far outweighs the benefit of the former. In this paper, we propose a new SSL method that achieves speaker disentanglement without severe loss of content. Our approach is adapted from the HuBERT framework and incorporates disentangling mechanisms to regularize both the teacher labels and the learned representations. We evaluate the benefit of speaker disentanglement on a set of content-related downstream tasks, and observe a consistent and notable performance advantage of our speaker-disentangled representations.