Traditional automated metrics for evaluating conditional natural language generation rely on pairwise comparisons between a single generated text and the best-matching gold-standard ground-truth text. When multiple ground truths are available, scores are aggregated using an average or max operation across references. While this approach works well when diversity in the ground-truth data (i.e., dispersion of the distribution of conditional texts) can be attributed to noise, as in automatic speech recognition, it does not allow for robust evaluation in cases where diversity in the ground truths represents signal for the model. In this work we argue that existing metrics are not appropriate for domains such as visual description or summarization, where ground truths are semantically diverse and where the diversity of those captions captures useful additional information about the context. We propose a new paradigm for multi-candidate evaluation of conditional language generation models, together with a new family of metrics that compare the distributions of reference and model-generated caption sets using small sample sets of each. We demonstrate the utility of our approach through a case study in visual description, in which we show that existing models optimize for single-description quality over diversity, and we gain insights into how sampling methods and temperature impact description quality and diversity.
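As a rough illustration of set-level, distribution-aware scoring (a sketch under assumptions, not the paper's actual metric family): embed each caption with any sentence encoder, then measure the gap between the reference set and the generated set as a Fréchet distance between Gaussians fit to the two embedding clouds. The function name and the Fréchet choice here are hypothetical.

```python
# Hypothetical sketch: compare reference vs. model caption *sets* as
# distributions of sentence embeddings, rather than via pairwise max/avg.
# The Frechet distance is one illustrative choice of set-level comparison.
import numpy as np
from scipy import linalg

def frechet_set_distance(ref_emb: np.ndarray, gen_emb: np.ndarray) -> float:
    """Frechet distance between Gaussians fit to two embedding sets,
    each of shape (num_captions, dim); assumes enough captions per set
    for a usable covariance estimate."""
    mu_r, mu_g = ref_emb.mean(axis=0), gen_emb.mean(axis=0)
    cov_r = np.cov(ref_emb, rowvar=False)
    cov_g = np.cov(gen_emb, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g).real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```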
Recent work in machine learning and cognitive science has suggested that understanding causal information is essential to the development of intelligence. An extensive literature in cognitive science using the "blicket detector" environment shows that children are adept at many kinds of causal inference and learning. We propose to adapt this environment for machine learning agents. One of the key challenges for current machine learning algorithms is modeling and understanding causal overhypotheses: transferable abstract hypotheses about sets of causal relationships. In contrast, even young children spontaneously learn and use causal overhypotheses. In this work, we present a new benchmark, a flexible environment that allows evaluation of existing techniques under variable causal overhypotheses, and demonstrate that many existing state-of-the-art methods have trouble generalizing in this environment. The code and resources for this benchmark are available at https://github.com/cannylab/casual_overhypothess.
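For intuition, here is a toy sketch of the kind of environment described, assuming a hidden disjunctive-vs-conjunctive activation rule as the overhypothesis; this is not the benchmark's actual API, and all names are hypothetical.

```python
# Toy sketch (hypothetical, not the benchmark's actual API): a minimal
# blicket-detector world where the *overhypothesis* -- whether the detector
# fires disjunctively (any blicket placed) or conjunctively (all blickets
# placed) -- is a hidden variable the agent must infer from interventions.
import random

class ToyBlicketEnv:
    def __init__(self, n_objects=3, n_blickets=2):
        self.blickets = set(random.sample(range(n_objects), n_blickets))
        self.rule = random.choice(["disjunctive", "conjunctive"])

    def step(self, placed: set) -> bool:
        """Place a set of objects on the detector; returns whether it lights up."""
        if self.rule == "disjunctive":
            return len(placed & self.blickets) >= 1
        return self.blickets <= placed  # conjunctive: all blickets present

env = ToyBlicketEnv()
print(env.step({0}), env.step({0, 1, 2}))  # probe single vs. joint placements
```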
Deep neural networks have largely demonstrated their ability to perform automatic speech recognition (ASR) by extracting meaningful features from input audio frames. Such features, however, may encode not only the spoken content but also unnecessary context, such as background noise and sounds, or speaker identity, accent, or protected attributes. This information can directly harm generalization performance by introducing spurious correlations between the spoken words and the context in which they were spoken. In this work, we introduce an unsupervised, encoder-agnostic method for factoring speech encoder representations into explicit content-encoding representations and spurious context-encoding representations. By doing so, we demonstrate improved performance on standard ASR benchmarks, as well as in both real-world and artificially noisy ASR scenarios.
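A minimal sketch of what such a factorization could look like structurally, assuming a frozen encoder and two linear projection heads; this is illustrative only, not the paper's method.

```python
# Illustrative sketch only (PyTorch), not the paper's method: split a frozen
# speech encoder's frame features into a "content" branch fed to the ASR head
# and a "context" branch, so downstream decoding can ignore the latter.
import torch
import torch.nn as nn

class FactorizedHead(nn.Module):
    def __init__(self, feat_dim=768, content_dim=512, context_dim=128):
        super().__init__()
        self.content = nn.Linear(feat_dim, content_dim)  # spoken-content code
        self.context = nn.Linear(feat_dim, context_dim)  # noise/speaker code

    def forward(self, frames):  # frames: (batch, time, feat_dim)
        return self.content(frames), self.context(frames)
```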
Traditionally, research in automated speech recognition has focused on local-first encoding of audio representations to predict the phonemes spoken in an utterance. Unfortunately, approaches relying on such hyper-local information tend to be vulnerable both to local-level corruption (such as audio-frame drops or loud noises) and to global-level noise (such as environmental or background noise) unseen during training. In this work, we introduce a novel approach that leverages a self-supervised learning technique based on masked language modeling to compute a global, multi-modal encoding of the environment in which an utterance occurs. We then use a new deep-fusion framework to integrate this global context into traditional ASR methods, and demonstrate that the resulting approach can outperform baseline methods by up to 7% on LibriSpeech; gains on internal datasets range from 6% (on larger models) to 45% (on smaller models).
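One possible shape for such a fusion step, sketched in PyTorch under the assumption of a gated, per-frame injection of the global context vector; the actual deep-fusion framework may differ.

```python
# Hedged sketch of one way to "deep-fuse" a global multi-modal context vector
# into per-frame ASR features; the gating design is an assumption.
import torch
import torch.nn as nn

class GlobalContextFusion(nn.Module):
    def __init__(self, frame_dim=512, ctx_dim=256):
        super().__init__()
        self.gate = nn.Linear(frame_dim + ctx_dim, frame_dim)

    def forward(self, frames, ctx):
        # frames: (B, T, frame_dim); ctx: (B, ctx_dim) from the global encoder
        ctx_t = ctx.unsqueeze(1).expand(-1, frames.size(1), -1)
        g = torch.sigmoid(self.gate(torch.cat([frames, ctx_t], dim=-1)))
        return frames * g  # gated frames passed on to the ASR decoder
```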
Urban areas consume over two-thirds of the world's energy and account for more than 70% of global CO2 emissions. As stated in the IPCC's Global Warming of 1.5°C report, achieving carbon neutrality by 2050 requires a clear understanding of urban geometry. High-quality building footprints generated from satellite imagery can accelerate this modeling process and empower municipal decision-making at scale. However, previous deep learning-based approaches face consequential issues such as scale invariance and defective footprints, partly due to ever-present class-wise imbalance. Additionally, most approaches require supplementary data such as point cloud data, building height information, and multi-band imagery, which have limited availability and are tedious to produce. In this paper, we propose a modified DeepLabV3+ module with a Dilated ResNet backbone that generates masks of building footprints from only three-channel RGB satellite imagery. Furthermore, we introduce an F-Beta measure in our objective function to help the model account for the skewed class distribution and prevent false-positive footprints. In addition to F-Beta, we incorporate an exponentially weighted boundary loss and use a cross-dataset training strategy to further increase the quality of predictions. As a result, we achieve state-of-the-art performance across three public benchmarks and demonstrate that our RGB-only method produces higher-quality visual results and is agnostic to the scale, resolution, and urban density of satellite imagery.
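A differentiable F-beta term of the kind described might look as follows (a hedged sketch; the paper's exact formulation and weighting may differ). With beta < 1 the score emphasizes precision, which penalizes false-positive footprints.

```python
# Sketch of a soft (differentiable) F-beta term for a binary footprint mask,
# assuming `pred` holds per-pixel probabilities and `target` is {0,1}.
import torch

def soft_fbeta_loss(pred, target, beta=0.5, eps=1e-7):
    tp = (pred * target).sum()          # soft true positives
    fp = (pred * (1 - target)).sum()    # soft false positives
    fn = ((1 - pred) * target).sum()    # soft false negatives
    b2 = beta * beta
    fbeta = (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp + eps)
    return 1.0 - fbeta  # minimize to push F-beta toward 1
```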
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
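A quick diagnostic sketch of what posterior collapse looks like in a Gaussian VAE: if q(z|x) matches the N(0, I) prior, the per-dimension KL term sits near zero. Tensor shapes and the threshold below are illustrative assumptions.

```python
# Minimal diagnostic sketch: posterior collapse means q(z|x) ~ p(z) for all x.
# For a Gaussian encoder with an N(0, I) prior, the per-dimension KL should
# hover near zero on collapsed dimensions.
import torch

def kl_per_dim(mu, logvar):
    # mu, logvar: (batch, latent_dim) outputs of the encoder
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar)
    return kl.mean(dim=0)  # average KL contribution of each latent dimension

mu, logvar = torch.randn(64, 16), torch.randn(64, 16)  # stand-in encoder outputs
collapsed = kl_per_dim(mu, logvar) < 1e-2  # heuristic threshold per dimension
```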
Artificial Intelligence (AI) has become commonplace to solve routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to expand, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical Radiology workflow. We also present a taxonomy of Radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
Wearable sensors for measuring head kinematics can be noisy due to imperfect interfaces with the body. Mouthguards are used to measure head kinematics during impacts in traumatic brain injury (TBI) studies, but deviations from reference kinematics can still occur due to potential looseness. In this study, deep learning is used to compensate for the imperfect interface and improve measurement accuracy. A set of one-dimensional convolutional neural network (1D-CNN) models was developed to denoise mouthguard kinematics measurements along three spatial axes of linear acceleration and angular velocity. The denoised kinematics had significantly reduced errors compared to reference kinematics, and reduced errors in brain injury criteria and tissue strain and strain rate calculated via finite element modeling. The 1D-CNN models were also tested on an on-field dataset of college football impacts and a post-mortem human subject dataset, with similar denoising effects observed. The models can be used to improve detection of head impacts and TBI risk evaluation, and potentially extended to other sensors measuring kinematics.
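The general shape of such a per-axis denoiser, sketched with illustrative layer sizes (not the paper's architecture): one noisy kinematics trace in, one denoised trace of the same length out.

```python
# Hedged sketch of a per-axis 1D-CNN denoiser in the spirit of the paper:
# a single-channel convolutional stack mapping a noisy kinematics trace to
# a denoised trace of the same length. Layer sizes are illustrative.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(32, 1, kernel_size=5, padding=2),  # same-length output trace
)

noisy = torch.randn(8, 1, 200)   # (batch, axis channel, time samples)
clean_est = denoiser(noisy)      # train with MSE against reference kinematics
```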
The BLOOM model is a large open-source multilingual language model capable of zero-shot learning, but its pretraining was limited to 46 languages. To improve its zero-shot performance on unseen languages, it is desirable to adapt BLOOM, but previous works have only explored adapting small language models. In this work, we apply existing language adaptation strategies to BLOOM and benchmark its zero-shot prompting performance on eight new languages. We find language adaptation to be effective at improving zero-shot performance in new languages. Surprisingly, adapter-based finetuning is more effective than continued pretraining for large models. In addition, we discover that prompting performance is not significantly affected by language specifics, such as the writing system. It is primarily determined by the size of the language adaptation data. We also add new languages to BLOOMZ, which is a multitask finetuned version of BLOOM capable of following task instructions zero-shot. We find including a new language in the multitask fine-tuning mixture to be the most effective method to teach BLOOMZ a new language. We conclude that with sufficient training data language adaptation can generalize well to diverse languages. Our code is available at \url{https://github.com/bigscience-workshop/multilingual-modeling/}.
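For readers unfamiliar with adapter-based finetuning, a minimal bottleneck adapter looks roughly like this (dimensions and placement are assumptions, not BLOOM's actual configuration): a small residual MLP trained inside otherwise frozen transformer layers.

```python
# Illustrative bottleneck adapter (PyTorch): a small residual MLP inserted
# into a frozen transformer layer -- the general shape of adapter-based
# finetuning the abstract refers to; sizes and placement are assumptions.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden=1024, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        # residual connection keeps the frozen base model's computation intact
        return x + self.up(torch.relu(self.down(x)))

# During adaptation, only the adapters' parameters receive gradient updates;
# the pretrained weights stay frozen.
```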