Modern generative models fall roughly into two main categories: (1) models that can produce high-quality random samples but cannot estimate the exact density of new data points, and (2) models that provide exact density estimation, at the cost of sample quality and compactness of the latent space. In this work we propose LED, a new generative model closely related to GANs, which allows not only efficient sampling but also efficient density estimation. By maximizing the log-likelihood of the discriminator outputs, we derive an alternative adversarial optimization objective that encourages diversity in the generated data. This formulation offers insights into the relationships among several popular generative models. In addition, we construct a flow-based generator that can compute the exact probability of generated samples while accepting low-dimensional latent variables as input. Our experimental results on various datasets show that our density estimator produces accurate estimates while preserving good quality in the generated samples.
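The abstract leaves the exact objective unspecified; purely as a rough illustration, a non-saturating, log-likelihood-style generator update of the kind alluded to might look like the following sketch (the loss form and all names are assumptions, not LED's actual formulation):

```python
import torch
import torch.nn.functional as F

def generator_step(generator, discriminator, opt_g, z_dim=32, batch=64):
    """One generator update that maximizes log D(G(z)), i.e. the
    log-likelihood of the discriminator scoring fakes as real.
    This is the standard non-saturating form, shown here only to
    illustrate the kind of objective the abstract alludes to."""
    z = torch.randn(batch, z_dim)          # low-dimensional latent input
    fake = generator(z)
    logits = discriminator(fake)           # raw scores, one per sample
    # Maximizing log sigmoid(logits) == minimizing softplus(-logits)
    loss = F.softplus(-logits).mean()
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()
```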
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about common practice, as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Answering complex questions that require making latent decisions is a challenging task, especially when limited supervision is available. Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting by demonstrating how to output intermediate rationalizations while solving the complex question in a single pass. We introduce ``Successive Prompting'', where we iteratively break down a complex task into a simple task, solve it, and then repeat the process until we get the final solution. Successive prompting decouples the supervision for decomposing complex questions from the supervision for answering simple questions, allowing us to (1) have multiple opportunities to query in-context examples at each reasoning step, (2) learn question decomposition separately from question answering, including using synthetic data, and (3) use bespoke (fine-tuned) components for reasoning steps where a large LM does not perform well. The intermediate supervision is typically manually written, which can be expensive to collect. We introduce a way to generate a synthetic dataset that can be used to bootstrap a model's ability to decompose and answer intermediate questions. Our best model (with successive prompting) achieves an improvement of ~5% absolute F1 on a few-shot version of the DROP dataset compared with a state-of-the-art model with the same supervision.
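As a rough illustration of the loop described above, a minimal sketch of successive prompting might look as follows (the prompt templates, the `[DONE]` marker, and the `lm` interface are illustrative assumptions, not the paper's exact prompts):

```python
def successive_prompting(question, lm, max_steps=8):
    """Minimal sketch of the successive-prompting loop: repeatedly ask
    for the next simple sub-question, answer it in isolation, and feed
    the (question, answer) pair back into the context. `lm` is any
    text-completion function."""
    context = f"Complex question: {question}\n"
    for _ in range(max_steps):
        # Step 1: ask the model for the next simple sub-question.
        sub_q = lm(context + "Next simple question:").strip()
        if sub_q == "[DONE]":              # model signals decomposition is done
            break
        # Step 2: answer the simple question on its own.
        sub_a = lm(f"Question: {sub_q}\nAnswer:").strip()
        # Step 3: append the pair and repeat.
        context += f"Q: {sub_q}\nA: {sub_a}\n"
    # Final pass: read the answer off the accumulated context.
    return lm(context + "Final answer:").strip()
```

Because each step is a separate call, the in-context examples and even the underlying model can differ between the decomposition and answering steps, which is the decoupling the abstract highlights.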
Multi-Task Learning (MTL) has shown its importance in user-facing products through fast training, data efficiency, reduced overfitting, etc. MTL achieves this by sharing network parameters and training a single network for multiple tasks simultaneously. However, MTL does not provide a solution if each task needs to be trained on a different dataset. To solve this problem, we propose an architecture named TreeDNN along with its training methodology. TreeDNN helps train the model on multiple datasets simultaneously, where each branch of the tree may need a different training dataset. Our results show that TreeDNN provides competitive performance with the advantages of a reduced ROM requirement for parameter storage and increased system responsiveness, since only the specific branch is loaded at inference time.
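A minimal sketch of the shared-trunk, per-task-branch idea might look like the following (layer sizes and module names are assumptions; the paper's actual architecture and training procedure are not reproduced here):

```python
import torch.nn as nn

class TreeDNN(nn.Module):
    """Illustrative sketch: a trunk shared by every task plus one
    lightweight branch per task/dataset."""
    def __init__(self, branch_names, in_dim=128, hidden=64, out_dim=10):
        super().__init__()
        # Trunk parameters are shared across tasks, so they are
        # stored only once.
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One branch per task; each branch can be trained on its own dataset.
        self.branches = nn.ModuleDict(
            {name: nn.Linear(hidden, out_dim) for name in branch_names}
        )

    def forward(self, x, branch):
        # At inference only the requested branch needs to be resident,
        # which is the responsiveness benefit the abstract mentions.
        return self.branches[branch](self.trunk(x))
```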
To expand training data, researchers often wish to merge two or more datasets that were created under different labeling schemes. This paper considers two datasets labeled with part-of-speech (POS) tags under different labeling schemes and leverages the supervised labels of one dataset to help generate labels for the other. It further discusses the theoretical difficulties of this approach and proposes a novel supervised architecture that employs Transformers to tackle the problem of two completely disjoint datasets. The results differ from the initial expectations and motivate further exploration of merging datasets with different label sets.
Many business scenarios require automatically generating descriptive, human-readable text from structured input data. Hence, fact-to-text systems have been developed for various downstream tasks, driven mainly by the high availability of relevant datasets. Only recently was the problem of cross-lingual fact-to-text (XF2T) generation proposed, targeting generation in multiple languages, along with a dataset, XAlign, covering eight languages. However, there has been no rigorous work on the actual XF2T generation problem. We extend the XAlign dataset with annotated data for four more languages: Punjabi, Malayalam, Assamese, and Oriya. We conduct an extensive study using popular Transformer-based text generation models on the extended multilingual dataset, which we call XAlignV2. Furthermore, we investigate the performance of different text generation strategies: multiple variations of pretraining, fact-aware embeddings, and structure-aware input encoding. Our extensive experiments show that a multilingual mT5 model with fact-aware embeddings and structure-aware input encoding achieves the best results on average across the twelve languages. We make our code, dataset, and models publicly available and hope this will help further research in this critical area.
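As a rough illustration of what a structure-aware input encoding for an mT5-style model can look like, consider the following sketch (the separator tokens, language tag, and example facts are illustrative assumptions, not XAlignV2's exact scheme):

```python
def linearize_facts(entity, facts, lang):
    """Flatten (relation, object) facts about an entity into a tagged
    string for an encoder-decoder model. Separator tokens mark triple
    boundaries so the encoder can recover the structure."""
    parts = [f"<lang> {lang} <S> {entity}"]
    for relation, obj in facts:
        parts.append(f"<R> {relation} <O> {obj}")
    return " ".join(parts)

# Example: build the source sequence for generating a Hindi sentence.
src = linearize_facts(
    "Narendra Modi",
    [("birthPlace", "Vadnagar"), ("office", "Prime Minister of India")],
    lang="hi",
)
# `src` would then be fed to a fine-tuned mT5 encoder-decoder.
```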
Many commodity sensors that measure the state of a robot and of dynamic obstacles have non-Gaussian noise characteristics. Yet many current approaches treat the underlying uncertainty in motion and perception as Gaussian, primarily to ensure computational tractability. On the other hand, existing planners that work with non-Gaussian uncertainty do not exploit distributional characteristics of the motion and perception noise, such as bias, for efficient collision avoidance. This paper fills this gap by interpreting reactive collision avoidance as a distribution matching problem between the collision constraint violations and a Dirac delta distribution. To ensure fast reactivity of the planner, we embed each distribution in a reproducing kernel Hilbert space and reformulate the distribution matching as minimizing the maximum mean discrepancy (MMD) between the two distributions. We show that evaluating the MMD for a given control input reduces to just matrix-matrix products. We leverage this insight to develop a simple control sampling approach for avoiding dynamic and uncertain obstacles. We advance the state of the art in two respects. First, we conduct an extensive empirical study showing that our planner can infer distributional bias from sample-level information and use this insight to guide the robot to good homotopies. We also highlight how a Gaussian approximation of the underlying uncertainty loses the bias estimates and guides the robot to unfavorable states with a high collision probability. Second, we show tangible comparative advantages of the proposed distribution matching approach over previous non-parametric and Gaussian-approximated methods for reactive collision avoidance.
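The reduction of the MMD estimate to matrix products can be made concrete with a small sketch (the RBF kernel choice and the zero-sample stand-in for the Dirac delta are illustrative assumptions):

```python
import numpy as np

def mmd_squared(X, Y, gamma=1.0):
    """Empirical (biased) squared MMD between sample sets X and Y
    under an RBF kernel:
        MMD^2 = mean(K_XX) - 2*mean(K_XY) + mean(K_YY).
    Everything reduces to dense matrix products over pairwise
    kernel evaluations, as the abstract notes."""
    def rbf(A, B):
        # Pairwise squared distances via one matrix product.
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-gamma * d2)
    return rbf(X, X).mean() - 2.0 * rbf(X, Y).mean() + rbf(Y, Y).mean()

# Toy usage: constraint-violation samples vs. a Dirac delta at zero.
violations = np.random.gamma(2.0, 1.0, size=(500, 1))  # biased, non-Gaussian
target = np.zeros((500, 1))                             # delta approximated by zeros
print(mmd_squared(violations, target))  # smaller = closer to "no violation"
```

A control sampler can evaluate this score for each candidate control input and pick the one with the lowest MMD.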
Deep learning methods have achieved impressive performance for multi-class medical image segmentation. However, their ability to encode topological interactions among different classes (e.g., containment and exclusion) is limited. These constraints naturally arise in biomedical images and are crucial for improving segmentation quality. In this paper, we introduce a novel topological interaction module that encodes topological interactions into a deep neural network. The implementation is entirely based on convolutions and is thus very efficient. This empowers us to incorporate the constraints into end-to-end training and enrich the feature representation of neural networks. The efficacy of the method is validated on different types of interactions. We also demonstrate the generalizability of the method on proprietary and public challenge datasets, in both 2D and 3D settings, and across different modalities such as CT and ultrasound. Code is available at: https://github.com/topoxlab/topointeraction
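As a rough illustration of how a topological constraint can be checked with sliding-window operations alone, consider the following sketch for an exclusion constraint (the kernel size and the use of max-pooling as dilation are assumptions, not the paper's code):

```python
import torch
import torch.nn.functional as F

def exclusion_violation_map(prob_a, prob_b, k=3):
    """Sketch of a purely convolution-style check for an exclusion
    constraint between two predicted class maps: pixels of class A
    should never touch pixels of class B. Dilation is realized with
    a max-pool (sliding-window ops only, no connected-component
    analysis), so the check is cheap and batchable."""
    a = (prob_a > 0.5).float()             # binarize class-A prediction
    b = (prob_b > 0.5).float()
    # Grow A by one kernel radius; any overlap with B marks pixels
    # where the exclusion constraint is violated.
    a_dilated = F.max_pool2d(a, kernel_size=k, stride=1, padding=k // 2)
    return a_dilated * b                   # 1 at violating pixels, 0 elsewhere

# Toy usage on a batch of single-channel probability maps:
pa, pb = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
violations = exclusion_violation_map(pa, pb)
```

A map like this can then be used to up-weight the loss at critical pixels during end-to-end training.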
Extreme classification (XC) seeks to tag data points with the most relevant subset of labels from an extremely large label set. Deep XC, which learns dense representations of data points and labels, has attracted much attention by outperforming XC methods that use sparse, hand-crafted features. Negative mining techniques have become a critical component of all deep XC methods, allowing them to scale to millions of labels. However, despite recent advances, training deep XC models with large encoder architectures such as Transformers remains challenging. This paper identifies that the memory overhead of popular negative mining techniques often forces mini-batch sizes to remain small, slowing training. In response, this paper introduces NGAME, a lightweight mini-batch creation technique that provably provides accurate in-batch negative samples. This allows training with larger mini-batches than existing negative sampling techniques, offering faster convergence and higher accuracy. NGAME was found to be up to 16% more accurate than state-of-the-art methods on a variety of benchmark datasets for extreme classification, and up to 3% more accurate at retrieving search engine queries in response to user webpages for displaying personalized ads. In live A/B tests on a popular search engine, NGAME yielded gains of up to 23% in click-through rate.
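A minimal sketch of negative-mining-aware mini-batch creation of the kind described might look as follows (the use of k-means is an illustrative assumption, not NGAME's exact clustering algorithm):

```python
import numpy as np
from sklearn.cluster import KMeans

def clustered_minibatches(embeddings, batch_size, n_clusters):
    """Cluster points by embedding similarity and draw each mini-batch
    from a single cluster, so the other in-batch points double as hard
    negatives at no extra memory cost."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        np.random.shuffle(idx)
        for start in range(0, len(idx), batch_size):
            yield idx[start:start + batch_size]

# Toy usage: 10k points in a 64-d embedding space.
emb = np.random.randn(10_000, 64).astype(np.float32)
for batch_idx in clustered_minibatches(emb, batch_size=256, n_clusters=40):
    pass  # train on emb[batch_idx], using in-batch items as negatives
```

Because the hard negatives come for free from within the batch, no per-point shortlist of mined negatives needs to be kept in memory, which is what permits the larger mini-batches.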
Language models demonstrate both quantitative improvements and new qualitative capabilities as they increase in scale. Despite their potentially transformative impact, these new capabilities remain poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and mitigate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity. Tasks that improve gradually and predictably typically involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.