Diffusion models (DMs) have shown great potential for high-quality image synthesis. However, when it comes to images with complex scenes, how to properly describe both the global image structure and object details remains a challenging task. In this paper, we present Frido, a Feature Pyramid Diffusion model that performs a multi-scale coarse-to-fine denoising process for image synthesis. Our model decomposes an input image into scale-dependent vector-quantized features, followed by a coarse-to-fine gating for producing the image output. During the above multi-scale representation stage, additional input conditions such as text, scene graphs, or image layouts can further be exploited. Thus, Frido can also be applied to conditional or cross-modality image synthesis. We conduct extensive experiments over various unconditional and conditional image generation tasks, ranging from text-to-image synthesis, layout-to-image, and scene-graph-to-image, to label-to-image. More specifically, we achieve state-of-the-art FID scores on five benchmarks, namely layout-to-image on COCO and OpenImages, scene-graph-to-image on COCO and Visual Genome, and label-to-image on COCO. Code is available at https://github.com/davidhalladay/frido.
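The abstract describes the generation pipeline only at a high level; as a purely conceptual sketch (the function names, latent shapes, and conditioning mechanism below are assumptions, not Frido's actual implementation), a multi-scale coarse-to-fine denoising loop could be organized as follows:

```python
import torch

def coarse_to_fine_sampling(denoisers, decode, latent_shapes, n_steps=50):
    """Conceptual sketch of multi-scale coarse-to-fine generation: denoise the
    coarsest scale-dependent latent first, then let each finer scale condition
    on the already-generated coarser latents before decoding to an image."""
    generated = []
    for denoise, shape in zip(denoisers, latent_shapes):   # coarsest -> finest
        z = torch.randn(shape)                             # start from noise at this scale
        for t in reversed(range(n_steps)):
            z = denoise(z, t, context=generated)           # condition on coarser scales
        generated.append(z)
    return decode(generated)                               # map VQ latents back to pixels
```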
We present GLIPv2, a grounded VL understanding model that serves both localization tasks (e.g., object detection, instance segmentation) and Vision-Language (VL) understanding tasks (e.g., VQA, image captioning). GLIPv2 elegantly unifies localization pre-training and Vision-Language Pre-training (VLP) with three pre-training tasks: phrase grounding as a VL reformulation of the detection task, region-word contrastive learning as a novel region-word-level contrastive learning task, and masked language modeling. This unification not only simplifies the previous multi-stage VLP procedure but also achieves mutual benefit between localization and understanding tasks. Experimental results show that a single GLIPv2 model (with all model weights shared) achieves near-SOTA performance on various localization and understanding tasks. The model also shows (1) strong zero-shot and few-shot adaptation performance on open-vocabulary object detection tasks and (2) superior grounding capability on VL understanding tasks. Code will be released at https://github.com/microsoft/glip.
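Of the three pre-training tasks, the abstract only names region-word contrastive learning without giving its form; the snippet below is a generic region-word InfoNCE sketch, not GLIPv2's exact formulation, and the one-to-one pairing and temperature are assumptions:

```python
import torch
import torch.nn.functional as F

def region_word_contrastive_loss(region_feats, word_feats, temperature=0.07):
    """Illustrative region-word-level contrastive loss: align each image region
    with its matching word/phrase embedding across a batch.
    region_feats, word_feats: (N, D), with row i of each assumed to be a match."""
    region_feats = F.normalize(region_feats, dim=-1)
    word_feats = F.normalize(word_feats, dim=-1)
    logits = region_feats @ word_feats.t() / temperature      # (N, N) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: regions -> words and words -> regions.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```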
Contrastive language-image pretraining (CLIP) links vision and language modalities into a unified embedding space, yielding tremendous potential for vision-language (VL) tasks. While early concurrent works have begun to study this potential on a subset of tasks, important questions remain: 1) What is the benefit of CLIP on unstudied VL tasks? 2) Does CLIP provide benefit in low-shot or domain-shifted scenarios? 3) Can CLIP improve existing approaches without impacting inference or pretraining complexity? In this work, we seek to answer these questions through two key contributions. First, we introduce an evaluation protocol that includes Visual Commonsense Reasoning (VCR), Visual Entailment (SNLI-VE), and Visual Question Answering (VQA), across a variety of data availability constraints and conditions of domain shift. Second, we propose an approach, named CLIP Targeted Distillation (CLIP-TD), to intelligently distill knowledge from CLIP into existing architectures using a dynamically weighted objective applied to adaptively selected tokens per instance. Experiments demonstrate that our proposed CLIP-TD leads to exceptional gains in the low-shot (up to 51.9%) and domain-shifted (up to 71.3%) conditions of VCR, while simultaneously improving performance under standard fully-supervised conditions (up to 2%), achieving state-of-the-art performance on VCR compared to other single models that are pretrained with image-text data only. On SNLI-VE, CLIP-TD produces significant gains in low-shot conditions (up to 6.6%) as well as fully supervised (up to 3%). On VQA, CLIP-TD provides improvement in low-shot (up to 9%), and in fully-supervised (up to 1.3%). Finally, CLIP-TD outperforms concurrent works utilizing CLIP for finetuning, as well as baseline naive distillation approaches. Code will be made available.
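The abstract does not give implementation details for the "dynamically weighted objective applied to adaptively selected tokens"; the following is a minimal sketch of what such a token-selective distillation term could look like, with the selection rule and weighting scheme being assumptions rather than the paper's method:

```python
import torch
import torch.nn.functional as F

def token_selective_distillation(student_tokens, teacher_tokens, task_loss, top_k=16):
    """Hypothetical CLIP-TD-style objective: distill teacher (CLIP) token features
    into the student on an adaptively selected subset of tokens, weighting each
    selected token by how far the student currently deviates from the teacher.
    student_tokens, teacher_tokens: (B, T, D); task_loss: scalar supervised loss."""
    dist = 1.0 - F.cosine_similarity(student_tokens, teacher_tokens, dim=-1)  # (B, T)
    sel = dist.topk(top_k, dim=-1).values          # per-instance token selection
    weights = torch.softmax(sel, dim=-1)           # dynamic per-token weights
    distill = (weights * sel).sum(dim=-1).mean()
    return task_loss + distill
```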
Large-scale pre-training has recently revolutionized vision-and-language (VL) research. Models such as LXMERT and UNITER have significantly lifted the state of the art across a wide range of VL tasks. However, the large number of parameters in such models hinders their application in practice. In parallel, work on the Lottery Ticket Hypothesis (LTH) has shown that deep neural networks contain small matching subnetworks that can achieve performance on par with or better than the dense networks when trained in isolation. In this work, we perform the first empirical study to assess whether such trainable subnetworks also exist in pre-trained VL models. We use UNITER as the main testbed (and also test LXMERT and ViLT), and consolidate 7 representative VL tasks for experiments, including visual question answering, visual commonsense reasoning, visual entailment, referring expression comprehension, image-text retrieval, GQA, and NLVR$^2$. Through comprehensive analysis, we summarize our main findings as follows. ($i$) It is difficult to find subnetworks that strictly match the full model's performance. However, we can find "relaxed" winning tickets at 50%-70% sparsity that maintain 99% of the full accuracy. ($ii$) Subnetworks found by task-specific pruning transfer to other tasks, while those found on the pre-training task transfer universally, matching 98%/96% of the full accuracy on average over all tasks. ($iii$) Besides UNITER, other models such as LXMERT and ViLT can also play lottery tickets. However, the highest sparsity we achieve for ViLT is far lower than that for LXMERT and UNITER (30% vs. 70%). ($iv$) LTH also remains relevant when using other training methods (e.g., adversarial training).
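Since the abstract does not spell out the pruning procedure, the following is a minimal magnitude-pruning-and-rewind sketch in the spirit of the Lottery Ticket Hypothesis; the global threshold and the rewinding point are assumptions, not necessarily the paper's exact recipe:

```python
import torch

def magnitude_prune(model, sparsity):
    """One round of global magnitude pruning: zero out the smallest-magnitude
    weights and return a binary mask per parameter (a candidate 'ticket')."""
    scores = torch.cat([p.abs().flatten() for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(scores, sparsity)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:  # prune weight matrices; leave biases/LayerNorm intact
            masks[name] = (p.abs() > threshold).float()
            p.data.mul_(masks[name])
    return masks

def rewind(model, pretrained_state, masks):
    """Rewind surviving weights to their pre-trained values (the 'winning ticket');
    the resulting subnetwork would then be fine-tuned on the downstream VL task."""
    model.load_state_dict(pretrained_state)
    for name, p in model.named_parameters():
        if name in masks:
            p.data.mul_(masks[name])
```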
In many clinical scenarios, a reliable, remote, and continuous real-time respiratory sound monitor with automated respiratory sound analysis ability is urgently needed, for example when monitoring the disease progression of coronavirus disease 2019, to replace conventional auscultation with a handheld stethoscope. However, robust computerized respiratory sound analysis algorithms have not yet been validated in practical applications. In this study, we developed a lung sound database (HF_Lung_V1) comprising 9,765 audio files of lung sounds (duration of 15 s each), 34,095 inhalation labels, 18,349 exhalation labels, 13,883 continuous adventitious sound (CAS) labels (comprising 8,457 wheeze labels, 686 stridor labels, and 4,740 rhonchi labels), and 15,606 discontinuous adventitious sound labels (all crackles). We conducted benchmark tests of long short-term memory (LSTM), gated recurrent unit (GRU), bidirectional LSTM (BiLSTM), bidirectional GRU (BiGRU), convolutional neural network (CNN)-LSTM, CNN-GRU, CNN-BiLSTM, and CNN-BiGRU models for breath phase detection and adventitious sound detection. We also compared performance between the LSTM-based and GRU-based models, between unidirectional and bidirectional models, and between models with and without a CNN. The results showed that these models exhibited adequate performance in lung sound analysis. The GRU-based models outperformed the LSTM-based models in terms of F1 scores and areas under the receiver operating characteristic curves for most of the defined tasks. Furthermore, all bidirectional models outperformed their unidirectional counterparts. Finally, adding a CNN improved the accuracy of lung sound analysis, especially in the CAS detection tasks.
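As an illustration of the benchmarked architecture family, a compact CNN-BiGRU for frame-wise detection on spectrogram input might look like the sketch below; the layer sizes and the 4-class output head are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    """Illustrative CNN-BiGRU for frame-wise lung sound event detection
    (e.g., inhalation / exhalation / CAS / DAS present in each time frame).
    Input: log-mel spectrogram of shape (batch, 1, n_mels, n_frames)."""
    def __init__(self, n_mels=64, hidden=128, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                        # pool frequency only, keep time resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.gru = nn.GRU(64 * (n_mels // 4), hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):
        f = self.cnn(x)                                  # (B, C, n_mels/4, T)
        f = f.permute(0, 3, 1, 2).flatten(2)             # (B, T, C * n_mels/4)
        out, _ = self.gru(f)
        return self.head(out)                            # per-frame logits (B, T, n_classes)
```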
We present a large, tunable neural conversational response generation model, DIALOGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems.
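For reference, the released checkpoints are commonly used through the Hugging Face transformers library; the snippet below assumes the microsoft/DialoGPT-medium checkpoint on the Hugging Face hub and follows the usual single-turn usage pattern:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode a user turn, appending the EOS token that separates dialogue turns.
history = tokenizer.encode("Does money buy happiness?" + tokenizer.eos_token,
                           return_tensors="pt")
reply_ids = model.generate(history, max_length=200,
                           pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated response tokens.
print(tokenizer.decode(reply_ids[:, history.shape[-1]:][0], skip_special_tokens=True))
```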
Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering.
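The conditional-masking idea can be made concrete with a small sketch: only one modality is corrupted per example so the other remains fully observed (the masking rate and helper names below are illustrative, not UNITER's code):

```python
import torch

def conditional_mask(text_ids, region_feats, mask_id, p=0.15, mask_text=True):
    """Sketch of conditional masking: corrupt only one modality per pre-training
    example, so masked language (or region) modeling is conditioned on a fully
    observed image (or text), unlike joint random masking of both modalities."""
    text_ids = text_ids.clone()
    region_feats = region_feats.clone()
    if mask_text:
        m = torch.rand(text_ids.shape) < p          # choose text tokens to mask
        text_ids[m] = mask_id                       # image regions stay untouched
    else:
        m = torch.rand(region_feats.shape[:2]) < p  # choose image regions to mask
        region_feats[m] = 0.0                       # text stays untouched
    return text_ids, region_feats, m
```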
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
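At a high level, generative appearance replay can be pictured as the following loop; all function names are placeholders, and this is a schematic of the idea rather than GarDA's implementation:

```python
def continual_domain_adaptation(segmenter, generator, domains, adapt_step, replay_step):
    """High-level sketch of generative replay for continual domain adaptation:
    while adapting to each new (unlabeled) domain, mix in images sampled from a
    generator trained on earlier domains, so knowledge is consolidated without
    retaining any previously seen data."""
    for i, domain in enumerate(domains):
        for batch in domain:                        # unlabeled target-domain images
            adapt_step(segmenter, batch)            # unsupervised adaptation update
            if i > 0:                               # replay appearance of past domains
                replayed = generator.sample(len(batch))
                replay_step(segmenter, replayed)    # keep predictions stable on replayed images
        generator.update(domain)                    # extend the generator to the new domain
```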
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built based on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain and user tweet features as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing experiment results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
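A typical graph-based baseline on such a benchmark is a relational GCN over the user graph; the sketch below uses PyTorch Geometric's RGCNConv with MGTAB's 7 relation types, while the feature and hidden dimensions are assumptions:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

class AccountRGCN(torch.nn.Module):
    """Illustrative multi-relational GCN for account detection on a graph such as
    MGTAB (7 relation types; node features built from user properties and tweets).
    Dimensions here are placeholders, not the benchmark's exact setup."""
    def __init__(self, in_dim, hidden=128, num_relations=7, num_classes=2):
        super().__init__()
        self.conv1 = RGCNConv(in_dim, hidden, num_relations)
        self.conv2 = RGCNConv(hidden, num_classes, num_relations)

    def forward(self, x, edge_index, edge_type):
        x = F.relu(self.conv1(x, edge_index, edge_type))
        return self.conv2(x, edge_index, edge_type)   # per-user logits (bot / human, or stance)
```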
As one of the prevalent methods to achieve automation systems, Imitation Learning (IL) has demonstrated promising performance across a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explaining framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model on randomly masked demonstrations and uses the conventional evaluation outcome, environment returns, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions concerning frames' importance equality, the effectiveness of the importance map, and connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
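The masking-and-retraining idea can be summarized in a short RISE-style sketch; the training and evaluation routines are placeholders, and the normalization choice is an assumption rather than R2RISE's exact scheme:

```python
import numpy as np

def masked_retraining_importance(demo_frames, train_il, evaluate, n_rounds=50, keep_prob=0.5):
    """RISE-style importance map over demonstration frames: repeatedly retrain the
    (black-box) IL policy on randomly masked demonstrations and weight each mask
    by the resulting environment return, accumulating a per-frame importance score."""
    n = len(demo_frames)
    importance = np.zeros(n)
    total_return = 0.0
    for _ in range(n_rounds):
        mask = np.random.rand(n) < keep_prob          # random subset of frames to keep
        policy = train_il([f for f, keep in zip(demo_frames, mask) if keep])
        ret = evaluate(policy)                        # environment return as the coefficient
        importance += ret * mask
        total_return += ret
    return importance / max(total_return, 1e-8)       # normalized per-frame importance
```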