We present RAVEn, a self-supervised multi-modal approach to jointly learn visual and auditory speech representations. Our pre-training objective involves encoding masked inputs, and then predicting contextualised targets generated by slowly-evolving momentum encoders. Driven by the inherent differences between video and audio, our design is asymmetric w.r.t. the two modalities' pretext tasks: Whereas the auditory stream predicts both the visual and auditory targets, the visual one predicts only the auditory targets. We observe strong results in low- and high-resource labelled data settings when fine-tuning the visual and auditory encoders resulting from a single pre-training stage, in which the encoders are jointly trained. Notably, RAVEn surpasses all self-supervised methods on visual speech recognition (VSR) on LRS3, and combining RAVEn with self-training using only 30 hours of labelled data even outperforms a recent semi-supervised method trained on 90,000 hours of non-public data. At the same time, we achieve state-of-the-art results in the LRS3 low-resource setting for auditory speech recognition (as well as for VSR). Our findings point to the viability of learning powerful speech representations entirely from raw video and audio, i.e., without relying on handcrafted features. Code and models will be made public.
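To make the pretext task concrete, the sketch below shows the two ingredients the abstract describes: a slowly-evolving momentum (teacher) encoder maintained as an EMA of the student, and a masked-prediction loss in which the auditory student regresses both modalities' teacher targets while the visual student regresses only the auditory targets. The tiny encoders, the masking scheme, the absence of predictor heads, and the loss weighting are simplified placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoders standing in for the visual/auditory Transformer backbones.
class TinyEncoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):          # x: (batch, time, dim)
        return self.net(x)

def ema_update(student: nn.Module, teacher: nn.Module, momentum: float = 0.999):
    """Slowly-evolving momentum (teacher) encoder: EMA of the student's weights."""
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def masked_prediction_loss(pred, target, mask):
    """Regress contextualised teacher targets at the masked positions only."""
    pred, target = F.normalize(pred, dim=-1), F.normalize(target, dim=-1)
    per_frame = 1.0 - (pred * target).sum(-1)          # cosine-style distance
    return (per_frame * mask).sum() / mask.sum().clamp(min=1)

# One illustrative training step with random tensors in place of real video/audio.
B, T, D = 2, 50, 256
video, audio = torch.randn(B, T, D), torch.randn(B, T, D)
mask = (torch.rand(B, T) < 0.5).float()                 # positions masked in the student input

vis_student, aud_student = TinyEncoder(D), TinyEncoder(D)
vis_teacher, aud_teacher = TinyEncoder(D), TinyEncoder(D)
for teacher in (vis_teacher, aud_teacher):
    for p in teacher.parameters():
        p.requires_grad_(False)

with torch.no_grad():                                    # teachers see the unmasked inputs
    vis_tgt, aud_tgt = vis_teacher(video), aud_teacher(audio)

vis_out = vis_student(video * (1 - mask).unsqueeze(-1))  # crude masking of student inputs
aud_out = aud_student(audio * (1 - mask).unsqueeze(-1))

# Asymmetric pretext tasks: audio predicts both targets, video predicts only audio targets.
loss = (masked_prediction_loss(aud_out, aud_tgt, mask)
        + masked_prediction_loss(aud_out, vis_tgt, mask)
        + masked_prediction_loss(vis_out, aud_tgt, mask))
loss.backward()
ema_update(vis_student, vis_teacher)
ema_update(aud_student, aud_teacher)
```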
Recognizing a word shortly after it is spoken is an important requirement for automatic speech recognition (ASR) systems in real-world scenarios. As a result, a large body of work on streaming audio-only ASR models has been presented in the literature. However, streaming audio-visual automatic speech recognition (AV-ASR) has received little attention in earlier works. In this work, we propose a streaming AV-ASR system based on a hybrid connectionist temporal classification (CTC)/attention neural network architecture. The audio and visual encoder neural networks are both based on the conformer architecture, which is made streamable using chunk-wise self-attention (CSA) and causal convolution. Streaming recognition with a decoder neural network is realized by using the triggered attention technique, which performs time-synchronous decoding with joint CTC/attention scoring. For frame-level ASR criteria such as CTC, a synchronized response from the audio and visual encoders is critical for a joint AV decision-making process. In this work, we propose a novel alignment regularization technique that promotes synchronization of the audio and visual encoders, which in turn results in better word error rates (WERs) at all SNR levels for streaming and offline AV-ASR models. The proposed AV-ASR model achieves WERs of 2.0% and 2.6% on the Lip Reading Sentences 3 (LRS3) dataset in an offline and an online setup, respectively, both of which are state-of-the-art results when no external training data are used.
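The chunk-wise self-attention that makes the conformer encoders streamable can be illustrated with the attention mask alone: each frame may attend within its own chunk and to a limited number of past chunks, but never to future chunks. The helper below is a generic sketch of such a mask; the chunk size and left-context settings are illustrative, not the paper's configuration.

```python
import torch

def chunkwise_attention_mask(seq_len: int, chunk_size: int, left_chunks: int = 1) -> torch.Tensor:
    """Boolean mask for chunk-wise self-attention: each frame attends to every frame in
    its own chunk and to `left_chunks` full chunks to the left, never to future chunks.
    True = attention allowed."""
    chunk_idx = torch.arange(seq_len) // chunk_size   # chunk id of every frame
    q = chunk_idx.unsqueeze(1)                        # query chunk ids (rows)
    k = chunk_idx.unsqueeze(0)                        # key chunk ids (cols)
    return (k <= q) & (k >= q - left_chunks)

mask = chunkwise_attention_mask(seq_len=8, chunk_size=2, left_chunks=1)
print(mask.int())
# Row i shows which positions frame i may attend to; the block structure limits
# look-ahead to the current chunk, which is what makes the encoder streamable.
# (For nn.MultiheadAttention, pass ~mask, since True there means "disallowed".)
```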
Several training strategies and temporal models have recently been proposed for isolated-word lip reading in a series of independent works. However, the potential of combining the best strategies, and the impact of each of them, has not been explored. In this paper, we systematically investigate the performance of state-of-the-art data augmentation approaches, temporal models, and other training strategies such as self-distillation and the use of word boundary indicators. Our results show that time masking (TM) is the most important augmentation, followed by mixup, and that densely-connected temporal convolutional networks (DC-TCN) are the best temporal model for isolated-word lip reading. Using self-distillation and word boundary indicators is also beneficial, but to a lesser extent. A combination of all of the above methods results in a classification accuracy of 93.4%, an absolute improvement of 4.6% over the current state-of-the-art performance on the LRW dataset. The performance can be further improved to 94.1% by pre-training on additional datasets. An error analysis of the various training strategies shows that the gains come from improving the classification accuracy of hard-to-recognise words.
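As a rough illustration of the augmentation found to matter most, the snippet below sketches temporal masking for a video clip: a few contiguous runs of frames are replaced by the clip mean so the model cannot rely on any single frame. The mask length, the replacement value, and the clip shape are assumptions made for illustration, not the paper's exact recipe.

```python
import torch

def time_masking(clip: torch.Tensor, max_mask_len: int = 15, n_masks: int = 1) -> torch.Tensor:
    """Time Masking (TM) for a video clip of shape (frames, H, W): replace a few
    contiguous runs of frames with the clip mean, forcing the model to rely on
    temporal context rather than any single frame."""
    clip = clip.clone()
    T = clip.shape[0]
    for _ in range(n_masks):
        mask_len = int(torch.randint(0, max_mask_len + 1, (1,)))
        if mask_len == 0 or mask_len >= T:
            continue
        start = int(torch.randint(0, T - mask_len, (1,)))
        clip[start:start + mask_len] = clip.mean()   # overwrite the masked frames
    return clip

clip = torch.rand(29, 88, 88)        # a 29-frame mouth-region crop, as commonly used for LRW
augmented = time_masking(clip)
```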
Given the increasing popularity of exploratory data analysis (EDA), understanding the underlying causes of the knowledge acquired through EDA is crucial but remains under-studied. This work promotes, for the first time, a transparent and explicable perspective on data analysis, called explainable data analysis (XDA). XDA provides data analysis with qualitative and quantitative explanations of causal and non-causal semantics. In this way, XDA significantly improves human understanding of, and confidence in, the outputs of data analysis, facilitating accurate data interpretation and decision making in the real world. To this end, we propose XInsight, a general framework for XDA. XInsight is a three-module, end-to-end pipeline designed to extract causal graphs, translate causal primitives into XDA semantics, and quantify the contribution of each explanation to a data fact. XInsight uses a set of design concepts and optimizations to address the inherent difficulties associated with integrating causality into XDA. Experiments on synthetic and real-world datasets, along with human evaluations, demonstrate the highly promising capabilities of XInsight.
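Purely to make the three-module structure tangible, here is a hypothetical skeleton of an XDA-style pipeline: causal-graph extraction, translation of causal primitives into explanations, and quantification of each explanation's contribution. Every function body, name, and data structure below is a placeholder, not XInsight's actual interface or algorithms.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Explanation:
    variable: str
    kind: str          # "causal" or "non-causal"
    contribution: float

def extract_causal_graph(data: List[Dict[str, float]]) -> Dict[str, List[str]]:
    """Module 1: causal discovery. Placeholder returning a toy parent map."""
    return {"sales": ["price", "season"], "price": [], "season": []}

def primitives_to_semantics(graph: Dict[str, List[str]], target: str) -> List[Explanation]:
    """Module 2: map causal primitives (edges into the target) to XDA semantics."""
    return [Explanation(v, "causal", 0.0) for v in graph.get(target, [])]

def quantify(explanations: List[Explanation], data: List[Dict[str, float]],
             target: str) -> List[Explanation]:
    """Module 3: attach a quantitative contribution to each explanation
    (here a naive covariance-like score, only for illustration)."""
    n = len(data)
    mean_t = sum(row[target] for row in data) / n
    for e in explanations:
        mean_v = sum(row[e.variable] for row in data) / n
        e.contribution = sum((row[e.variable] - mean_v) * (row[target] - mean_t)
                             for row in data) / n
    return explanations

data = [{"sales": 10.0, "price": 2.0, "season": 1.0},
        {"sales": 14.0, "price": 1.5, "season": 2.0},
        {"sales": 9.0,  "price": 2.5, "season": 1.0}]
print(quantify(primitives_to_semantics(extract_causal_graph(data), "sales"), data, "sales"))
```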
Aquatic locomotion is a classic fluid-structure interaction (FSI) problem of interest to biologists and engineers. Solving the fully coupled FSI equations for incompressible Navier-Stokes flow and finite elasticity is computationally expensive. In such systems, optimizing robotic swimmer designs typically involves a cumbersome, gradient-free procedure on top of the already expensive simulation. To address this challenge, we present a novel, fully differentiable hybrid approach to FSI that combines a 2D direct numerical simulation of the swimmer's deformable solid structure with a physics-constrained neural-network surrogate that captures the hydrodynamic effects of the fluid. For the deformable-solid simulation of the swimmer's body, we use recent techniques from computer graphics to speed up the finite element method (FEM). For the fluid simulation, we use a U-Net architecture trained with a physics-based loss function to predict the flow field at each time step. The pressure and velocity fields output by the neural network are sampled around the boundary of the swimmer via an immersed boundary method (IBM) to compute its swimming motion accurately and efficiently. We demonstrate the computational efficiency and differentiability of our hybrid simulator on a 2D carangiform swimmer. Thanks to this differentiability, the simulator can be used for control design of soft bodies immersed in fluids through direct gradient-based optimization.
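A minimal sketch of the "physics-constrained" idea behind the fluid surrogate: alongside a data term, the loss penalises the divergence of the predicted velocity field, which an incompressible flow should keep at zero. The channel layout, finite-difference stencil, and loss weight are assumptions; the actual physics-based loss used for the U-Net may include further terms.

```python
import torch
import torch.nn.functional as F

def divergence_2d(u: torch.Tensor, v: torch.Tensor, dx: float = 1.0) -> torch.Tensor:
    """Central-difference divergence du/dx + dv/dy of a velocity field on a 2D grid.
    u, v: (batch, H, W)."""
    dudx = (u[:, :, 2:] - u[:, :, :-2]) / (2 * dx)
    dvdy = (v[:, 2:, :] - v[:, :-2, :]) / (2 * dx)
    return dudx[:, 1:-1, :] + dvdy[:, :, 1:-1]        # crop to the common interior

def physics_constrained_loss(pred, target, lambda_div: float = 0.1):
    """Data term plus an incompressibility penalty on the predicted velocity.
    pred/target: (batch, 3, H, W), channels = (u, v, pressure)."""
    data_term = F.mse_loss(pred, target)
    div = divergence_2d(pred[:, 0], pred[:, 1])
    return data_term + lambda_div * div.pow(2).mean()

pred = torch.randn(4, 3, 64, 64, requires_grad=True)   # stands in for the U-Net output
target = torch.randn(4, 3, 64, 64)                      # reference flow field
loss = physics_constrained_loss(pred, target)
loss.backward()
```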
This paper presents a novel method for face clustering in videos using a video-centralised transformer. Previous works often employ contrastive learning to learn frame-level representations and use average pooling to aggregate features along the temporal dimension. Such approaches may not fully capture complicated video dynamics. Moreover, despite recent progress in video-based contrastive learning, few works have attempted to learn a self-supervised, clustering-friendly face representation that benefits the video face clustering task. To overcome these limitations, our method employs a transformer to directly learn video-level representations that better reflect the temporally varying properties of faces in videos, and we also propose a video-centralised self-supervised framework to train the transformer model. We further investigate face clustering in egocentric videos, a fast-emerging field that has not yet been studied in works on face clustering. To this end, we present and release the first large-scale egocentric video face clustering dataset, named EasyCom-Clustering. We evaluate our proposed method on both the widely used Big Bang Theory (BBT) dataset and the new EasyCom-Clustering dataset. Results show that the performance of our video-centralised transformer surpasses all previous state-of-the-art methods on both benchmarks, exhibiting a self-attentive understanding of face videos.
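The core architectural idea, replacing average pooling over frames with a transformer that produces a video-level embedding for each face track, can be sketched as follows. The layer sizes, the [CLS]-style summary token, and the backbone producing the frame features are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class VideoLevelFaceEncoder(nn.Module):
    """Aggregate per-frame face features into one video-level embedding with a small
    Transformer encoder plus a learnable summary token, instead of average pooling."""
    def __init__(self, dim: int = 256, heads: int = 4, layers: int = 2):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, frames, dim), e.g. face-crop features from a CNN backbone
        cls = self.cls.expand(frame_feats.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, frame_feats], dim=1))
        return out[:, 0]                       # the token summarising the whole face track

tracks = torch.randn(8, 16, 256)               # 8 face tracks, 16 frames each
video_embeddings = VideoLevelFaceEncoder()(tracks)   # (8, 256), ready for clustering
baseline = tracks.mean(dim=1)                  # the average-pooling baseline, for comparison
print(video_embeddings.shape, baseline.shape)
```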
Accurate simulation of soft mechanisms under dynamic actuation is critical for the design of soft robots. We address this need with a differentiable simulation tool that learns the material parameters of our soft robotic fish. Using the soft robotic fish as an example, we demonstrate an experimentally verified, fast optimization pipeline that learns material parameters from quasi-static data via differentiable simulation and applies them to the prediction of dynamic performance. Our method identifies physically plausible Young's moduli for the various soft silicone elastomers and stiff acetal copolymers used in the creation of our three robotic fish tail designs. We show that our method is compatible with varying internal geometry of the actuators, such as the number of hollow cavities. Our framework enables high-fidelity prediction of the dynamic behavior of composite bi-morph bending structures on real hardware, to millimeter accuracy and within 3 percent error normalized to actuator length. We provide a differentiable and robust estimate of the thrust force using a neural-network thrust predictor; this estimate allows accurate modeling of our experimental setup for measuring bollard pull. This work presents a prototypical hardware and simulation problem solved with our differentiable framework; owing to its differentiable character, the framework can also be applied to higher-dimensional parameter inference, learning control policies, and computational design.
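The material-identification step amounts to gradient-based regression of Young's modulus through a differentiable simulator against quasi-static measurements. The sketch below substitutes a closed-form cantilever deflection model for the differentiable FEM purely to show the optimization loop; the geometry, loads, noise level, and optimizer settings are all made up for illustration.

```python
import torch

def simulate_tip_deflection(youngs_modulus: torch.Tensor, load: torch.Tensor,
                            length: float = 0.05, width: float = 0.02,
                            thickness: float = 0.005) -> torch.Tensor:
    """Stand-in differentiable 'simulator': Euler-Bernoulli tip deflection of a cantilever,
    delta = F L^3 / (3 E I). The real pipeline would call a differentiable FEM instead."""
    inertia = width * thickness ** 3 / 12.0
    return load * length ** 3 / (3.0 * youngs_modulus * inertia)

# Quasi-static measurements: applied loads (N) and noisy measured deflections (m).
loads = torch.tensor([0.01, 0.02, 0.03, 0.04])
measured = simulate_tip_deflection(torch.tensor(2.0e6), loads) * (1 + 0.02 * torch.randn(4))

log_E = torch.tensor([14.0], requires_grad=True)        # optimise log(E) for positivity/scale
opt = torch.optim.Adam([log_E], lr=0.05)
for step in range(400):
    opt.zero_grad()
    pred = simulate_tip_deflection(log_E.exp(), loads)
    loss = torch.mean((pred - measured) ** 2)
    loss.backward()                                      # gradients flow through the simulator
    opt.step()

print(f"identified Young's modulus ~ {log_E.exp().item():.3e} Pa")
```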
Video-to-speech is the process of reconstructing the speech audio from a video of a spoken utterance. Previous approaches to this task rely on a two-step process in which an intermediate representation is inferred from the video and then decoded into waveform audio using a vocoder or a waveform reconstruction algorithm. In this work, we propose a new end-to-end video-to-speech model based on generative adversarial networks (GANs) that translates spoken video to waveform end-to-end, without using any intermediate representation or separate waveform synthesis algorithm. Our model consists of an encoder-decoder architecture that receives raw video as input and generates speech, which is then fed to a waveform critic and a power critic. The use of adversarial losses based on these two critics enables the direct synthesis of raw audio waveforms and ensures their realism. In addition, our use of three comparative losses helps establish a direct correspondence between the generated audio and the input video. We show that this model is able to reconstruct speech on constrained datasets such as GRID, and is the first end-to-end model to produce intelligible speech for LRW (Lip Reading in the Wild), which features hundreds of speakers recorded entirely "in the wild". We evaluate the generated samples in two different scenarios using four objective metrics that measure the quality and intelligibility of artificial speech. We demonstrate that the proposed approach outperforms all previous works on most metrics on GRID and LRW.
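A rough sketch of the two-critic adversarial objective: one critic scores raw waveforms, the other scores power spectrograms of the same signals, and the generator is trained to fool both. The tiny critic architecture, the hinge-style losses, and the STFT settings below are assumptions for illustration; the paper's actual critics and loss formulation may differ.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """A small 1-D convolutional critic; the real critics are much larger."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=15, stride=4, padding=7), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, kernel_size=15, stride=4, padding=7), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 1, kernel_size=3, padding=1))

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2))     # one realism score per sample

def power_spectrogram(wav: torch.Tensor) -> torch.Tensor:
    """Power (magnitude-squared) spectrogram used by the power critic."""
    spec = torch.stft(wav, n_fft=512, hop_length=128,
                      window=torch.hann_window(512), return_complex=True)
    return spec.abs().pow(2)

waveform_critic = Critic(in_channels=1)
power_critic = Critic(in_channels=257)          # 257 = n_fft // 2 + 1 frequency bins

real = torch.randn(4, 16000)                    # 1 s of real speech at 16 kHz (placeholder)
fake = torch.randn(4, 16000, requires_grad=True)  # stands in for the generator output

# Hinge-style adversarial losses over both views of the signal.
def critic_loss(critic, real_x, fake_x):
    return (torch.relu(1 - critic(real_x)).mean()
            + torch.relu(1 + critic(fake_x.detach())).mean())

def generator_adv_loss(critic, fake_x):
    return -critic(fake_x).mean()

d_loss = (critic_loss(waveform_critic, real.unsqueeze(1), fake.unsqueeze(1))
          + critic_loss(power_critic, power_spectrogram(real), power_spectrogram(fake)))
g_loss = (generator_adv_loss(waveform_critic, fake.unsqueeze(1))
          + generator_adv_loss(power_critic, power_spectrogram(fake)))
```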
Knowledge graphs (KGs) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKGs) are a special type of KG in which entities and relations are composed of free-form text. However, previous works on KG and CKG completion suffer from long-tail relations and newly added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the types of KGs and the methods they employ. Finally, we present applications of FKGC models to prediction tasks in different areas and share our thoughts on future research directions for FKGC.
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
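One of the feature-level components, the category-oriented triplet loss over class centers, can be sketched generically: each source-domain pixel feature is pulled towards its own category center and pushed away from the nearest other center by a margin. The normalisation, distance metric, margin, and the way centers are maintained are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def category_center_triplet_loss(features: torch.Tensor, labels: torch.Tensor,
                                 centers: torch.Tensor, margin: float = 0.5) -> torch.Tensor:
    """For each labelled source-domain pixel feature, pull it towards its own category
    center and push it away from the nearest other category center by at least `margin`.
    features: (N, D) pixel features, labels: (N,) category ids, centers: (C, D)."""
    features = F.normalize(features, dim=1)
    centers = F.normalize(centers, dim=1)
    dists = torch.cdist(features, centers)                 # (N, C) distances to every center
    pos = dists.gather(1, labels.view(-1, 1)).squeeze(1)   # distance to the own-class center
    neg = dists.scatter(1, labels.view(-1, 1), float("inf")).min(dim=1).values
    return F.relu(pos - neg + margin).mean()

# Toy example: 100 pixel features of dim 64, 19 Cityscapes-style categories.
feats = torch.randn(100, 64, requires_grad=True)
labels = torch.randint(0, 19, (100,))
centers = torch.randn(19, 64)                              # running class centers from source features
loss = category_center_triplet_loss(feats, labels, centers)
loss.backward()
```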