Transformers have become increasingly popular in a wide range of applications, including natural language processing (NLP), computer vision and speech recognition, because of their powerful representational capacity. However, harnessing this representational capacity effectively requires a large amount of data, strong regularization, or both, to mitigate overfitting. Recently, the power of Transformers has been unlocked by self-supervised pretraining strategies based on masked autoencoders, which rely on reconstructing masked inputs, either directly or contrastively from unmasked content. Such pretraining strategies, which have been used in BERT models in NLP, Wav2Vec models in speech and, recently, in MAE models in vision, force the model to learn relationships between the content in different parts of the input using autoencoding-related objectives. In this paper, we propose a novel but surprisingly simple alternative to content reconstruction: predicting the positions of content without providing positional information for it. Doing so requires the Transformer to understand the positional relationships between different parts of the input from their content alone. This admits an efficient implementation in which the pretext task is a classification problem among all possible positions for each input token. We experiment on both vision and speech benchmarks, where our approach brings improvements over strongly supervised baselines and is comparable to modern unsupervised/self-supervised pretraining methods. Our method also allows Transformers trained without position embeddings to outperform those trained with full position information.
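As a rough illustration of this pretext task, the sketch below (assumed details, not the authors' exact implementation) feeds patch tokens to a Transformer encoder without positional embeddings and trains a linear head to classify each token's true position:

import torch
import torch.nn as nn

num_patches, dim = 196, 384  # e.g. 14x14 patches of a 224x224 image
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=6, batch_first=True),
    num_layers=12,
)
position_head = nn.Linear(dim, num_patches)  # one class per possible position

def position_prediction_loss(patch_tokens):
    # patch_tokens: (batch, num_patches, dim); no positional embedding is added
    features = encoder(patch_tokens)
    logits = position_head(features)  # (batch, num_patches, num_patches)
    targets = torch.arange(num_patches).expand(patch_tokens.size(0), -1)
    return nn.functional.cross_entropy(logits.flatten(0, 1), targets.flatten())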
Image augmentations applied during training are crucial to the generalization performance of image classifiers. Consequently, a large body of research has focused on finding the optimal augmentation policy for a given task. Yet RandAugment [2], a simple random augmentation policy, has recently been shown to outperform existing sophisticated policies. Only Adversarial AutoAugment (AdvAA) [11], an approach based on the idea of adversarial training, has been shown to do better than RandAugment. In this paper, we show that random augmentations are still competitive compared to an optimal adversarial approach, as well as to simple curricula, and we conjecture that the success of AdvAA is due to the stochasticity of the policy controller network, which introduces a mild form of curriculum.
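For reference, the random policy at the heart of RandAugment can be sketched in a few lines; the op list below is illustrative only (the actual method draws from a fixed set of 14 transforms sharing one magnitude):

import random
from PIL import Image, ImageEnhance, ImageOps

OPS = [
    lambda img, m: img.rotate(30 * m),
    lambda img, m: ImageOps.solarize(img, int(256 * (1 - m))),
    lambda img, m: ImageEnhance.Contrast(img).enhance(1 + m),
    lambda img, m: ImageEnhance.Sharpness(img).enhance(1 + m),
]

def rand_augment(img: Image.Image, n: int = 2, magnitude: float = 0.5) -> Image.Image:
    # Apply n transforms chosen uniformly at random, each at the same fixed magnitude.
    for op in random.choices(OPS, k=n):
        img = op(img, magnitude)
    return img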
While state-of-the-art contrastive self-supervised learning (SSL) models produce results competitive with their supervised counterparts, they lack the ability to infer latent variables. In contrast, prescribed latent variable (LV) models enable attributing uncertainty, inducing task-specific compression, and in general allow for more interpretable representations. In this work, we introduce LV approximations to large-scale contrastive SSL models. We demonstrate that this addition improves downstream performance (yielding 96.42% and 77.49% test top-1 fine-tuned accuracy on CIFAR10 and ImageNet, respectively, with a ResNet50) and produces highly compressed representations (588x reduction) that are useful for interpretability as well as for downstream classification and regression tasks.
Despite the success of a number of recent techniques for self-supervised deep learning, there has been limited investigation into the representations that are ultimately learned. By leveraging recent advances in comparing neural representations, we explore this direction by comparing a contrastive self-supervised algorithm with supervision on simple image data in a common architecture. We find that the methods learn similar intermediate representations through dissimilar means, and that the representations diverge rapidly in the final few layers. We investigate this divergence, finding that these final layers strongly fit their distinct learning objectives. We also find that the contrastive objective implicitly fits the supervised objective in intermediate layers, but that the reverse is not true. Our work particularly highlights the importance of the learned intermediate representations and raises critical questions for auxiliary task design.
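One widely used tool for such layer-wise comparisons is linear centered kernel alignment (CKA); the abstract does not name the exact metric used, so the sketch below is only meant to make the comparison procedure concrete:

import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    # x, y: (num_examples, num_features) activations from two layers or two models
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    similarity = np.linalg.norm(y.T @ x, "fro") ** 2
    return similarity / (np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro"))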
Videos are a rich source of multi-modal supervision. In this work, we learn representations using self-supervision by leveraging three modalities naturally present in videos: visual, audio and language streams. To this end, we introduce the notion of a multimodal versatile network - a network that can ingest multiple modalities and whose representations enable downstream tasks in multiple modalities. In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and audio modalities can be maintained, whilst also integrating text into a common embedding. Driven by versatility, we also introduce a novel process of deflation, so that the networks can be effortlessly applied to the visual data in the form of video or a static image. We demonstrate how such networks trained on large collections of unlabelled video data can be applied on video, video-text, image and audio tasks. Equipped with these representations, we obtain state-of-the-art performance on multiple challenging benchmarks including UCF101, HMDB51, Kinetics600, Audioset and ESC-50 when compared to previous self-supervised work. Our models are publicly available [1, 2, 3].
While inferring common actor states (such as position or velocity) is an important and well-explored task of the perception system aboard a self-driving vehicle (SDV), it may not always provide sufficient information to the SDV. This is especially true in the case of active emergency vehicles (EVs), where light-based signals also need to be captured to provide a full context. We consider this problem and propose a sequential methodology for the detection of active EVs, using an off-the-shelf CNN model operating at a frame level and a downstream smoother that accounts for the temporal aspect of flashing EV lights. We also explore model improvements through data augmentation and training with additional hard samples.
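A minimal sketch of this two-stage idea follows; the exact smoother used in the paper is not specified in the abstract, and the moving average here only illustrates how temporal pooling bridges the off-phases of a flashing light:

import numpy as np

def smooth_frame_scores(frame_probs: np.ndarray, window: int = 15) -> np.ndarray:
    # frame_probs: per-frame P(active EV) produced by the frame-level CNN
    kernel = np.ones(window) / window
    return np.convolve(frame_probs, kernel, mode="same")

def detect_active_ev(frame_probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    return smooth_frame_scores(frame_probs) > threshold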
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLM models for clinical applications.
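For intuition, instruction prompt tuning builds on soft prompt tuning, in which a small set of learned prompt vectors is prepended to the input embeddings while the LLM itself stays frozen; the sketch below shows that core mechanism only (prompt length and how medical exemplars are interleaved are assumptions, not Med-PaLM's exact recipe):

import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_len: int, embed_dim: int):
        super().__init__()
        # The only trainable parameters; the LLM's weights remain frozen.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim) from the frozen model's embedder
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeddings], dim=1)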
A canonical algorithm for log-concave sampling is the Langevin Algorithm, aka the Langevin Diffusion run with some discretization stepsize $\eta > 0$. This discretization leads the Langevin Algorithm to have a stationary distribution $\pi_{\eta}$ which differs from the stationary distribution $\pi$ of the Langevin Diffusion, and it is an important challenge to understand whether the well-known properties of $\pi$ extend to $\pi_{\eta}$. In particular, while concentration properties such as isoperimetry and rapidly decaying tails are classically known for $\pi$, the analogous properties for $\pi_{\eta}$ are open questions with direct algorithmic implications. This note provides a first step in this direction by establishing concentration results for $\pi_{\eta}$ that mirror classical results for $\pi$. Specifically, we show that for any nontrivial stepsize $\eta > 0$, $\pi_{\eta}$ is sub-exponential (respectively, sub-Gaussian) when the potential is convex (respectively, strongly convex). Moreover, the concentration bounds we show are essentially tight. Key to our analysis is the use of a rotation-invariant moment generating function (aka Bessel function) to study the stationary dynamics of the Langevin Algorithm. This technique may be of independent interest because it enables directly analyzing the discrete-time stationary distribution $\pi_{\eta}$ without going through the continuous-time stationary distribution $\pi$ as an intermediary.
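For concreteness, with target density $\pi \propto e^{-f}$ the Langevin Algorithm iterates
$$x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\,\xi_k, \qquad \xi_k \sim \mathcal{N}(0, I_d),$$
and $\pi_{\eta}$ denotes the stationary distribution of this Markov chain.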
We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. Language models are generally trained on the publicly available text and code and cannot be expected to directly generalize to domain-specific parsing tasks in a zero-shot setting. In this work, we propose ZEROTOP, a zero-shot task-oriented parsing method that decomposes a semantic parsing problem into a set of abstractive and extractive question-answering (QA) problems, enabling us to leverage the ability of LLMs to zero-shot answer reading comprehension questions. For each utterance, we prompt the LLM with questions corresponding to its top-level intent and a set of slots and use the LLM generations to construct the target meaning representation. We observe that current LLMs fail to detect unanswerable questions; and as a result, cannot handle questions corresponding to missing slots. To address this problem, we fine-tune a language model on public QA datasets using synthetic negative samples. Experimental results show that our QA-based decomposition paired with the fine-tuned LLM can correctly parse ~16% of utterances in the MTOP dataset without requiring any annotated data.
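A hedged sketch of the decomposition follows (prompt wording and helper names are illustrative, not the paper's exact prompts, and the top-level intent is taken as given for brevity): each slot becomes an extractive question over the utterance, unanswerable questions are treated as missing slots, and the answers are assembled into the meaning representation.

def slot_question(utterance: str, slot: str) -> str:
    return f'Passage: "{utterance}"\nQuestion: What is the {slot}? Answer:'

def parse_utterance(utterance: str, intent: str, slots: list[str], llm) -> dict:
    # llm: any callable mapping a prompt string to a generated answer string
    parse = {"intent": intent, "slots": {}}
    for slot in slots:
        answer = llm(slot_question(utterance, slot)).strip()
        if answer.lower() not in {"", "none", "unanswerable"}:  # skip missing slots
            parse["slots"][slot] = answer
    return parse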
Task-oriented dialogue systems often assist users with personal or confidential matters. For this reason, the developers of such a system are generally prohibited from observing actual usage. So how can they know where the system is failing and needs more training data or new functionality? In this work, we study ways in which realistic user utterances can be generated synthetically, to help increase the linguistic and functional coverage of the system, without compromising the privacy of actual users. To this end, we propose a two-stage Differentially Private (DP) generation method which first generates latent semantic parses, and then generates utterances based on the parses. Our proposed approach improves MAUVE by 3.8$\times$ and parse tree node-type overlap by 1.4$\times$ relative to current approaches for private synthetic data generation, improving both on fluency and semantic coverage. We further validate our approach on a realistic domain adaptation task of adding new functionality from private user data to a semantic parser, and show gains of 1.3$\times$ on its accuracy with the new feature.
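The two-stage structure can be sketched as below; the model interfaces are placeholders, and the differential-privacy guarantees come from how the two generators are trained, which is outside this sketch:

def generate_private_utterances(parse_model, utterance_model, n: int) -> list[str]:
    utterances = []
    for _ in range(n):
        parse = parse_model.sample()  # stage 1: sample a latent semantic parse
        utterances.append(utterance_model.sample(condition=parse))  # stage 2: utterance given parse
    return utterances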