Given a piece of text, a video clip, and a reference audio, the movie dubbing task (also known as visual voice cloning, V2C) aims to generate speech that matches the emotion the speaker presents in the video, using the desired speaker's voice as reference. V2C is more challenging than conventional text-to-speech tasks because it additionally requires the generated speech to exactly match the varying emotions and speaking speed presented in the video. Unlike previous works, we propose a novel movie dubbing architecture that tackles these problems via hierarchical prosody modelling, which bridges visual information to the corresponding speech prosody at three levels: lip, face, and scene. Specifically, we align lip movement to the speech duration, and convey facial expression to speech energy and pitch via an attention mechanism over valence and arousal representations inspired by recent psychology findings. Moreover, we design an emotion booster to capture the atmosphere of the global video scene. All these embeddings are used together to generate a mel-spectrogram, which is then converted to a speech waveform by an existing vocoder. Extensive experimental results on the Chem and V2C benchmark datasets demonstrate the favorable performance of the proposed method. The source code and trained models will be released to the public.
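A minimal sketch of how face-derived valence-arousal features could modulate phoneme-level pitch and energy through cross-attention, roughly in the spirit of the face branch described above; the module names, dimensions, and heads are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FaceProsodyAttention(nn.Module):
    """Sketch: phoneme features attend over facial valence-arousal features to
    predict per-phoneme pitch and energy. Names and sizes are illustrative."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.pitch_head = nn.Linear(d_model, 1)
        self.energy_head = nn.Linear(d_model, 1)

    def forward(self, phoneme_feats, face_va_feats):
        # phoneme_feats: (B, T_phone, d_model); face_va_feats: (B, T_face, d_model)
        ctx, _ = self.attn(phoneme_feats, face_va_feats, face_va_feats)
        fused = phoneme_feats + ctx                  # emotion-conditioned phoneme features
        pitch = self.pitch_head(fused).squeeze(-1)   # (B, T_phone)
        energy = self.energy_head(fused).squeeze(-1) # (B, T_phone)
        return fused, pitch, energy
```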
Self-supervised pre-training has recently demonstrated success on large-scale multimodal data, and state-of-the-art contrastive learning methods often enforce feature consistency across cross-modality inputs, such as video/audio or video/text pairs. Despite being convenient to formulate and leverage in practice, such cross-modality alignment (CMA) is only a weak and noisy form of supervision, since two modalities can be semantically misaligned even when they are temporally aligned. For example, even in the commonly adopted instructional videos, a speaker can sometimes refer to something that is not visually present in the current frame; and the semantic misalignment is only more unpredictable for raw videos from the internet. We conjecture that this causes conflicts and biases among modalities, and may hence prevent CMA from scaling up to training with larger and more heterogeneous data. This paper first verifies our conjecture by observing that, even in the latest VATT pre-training using only instructional videos, there exist strong gradient conflicts between the different CMA losses within the same {video, audio, text} triplet, indicating them as a noisy source of supervision. We then propose to harmonize such gradients via two techniques: (i) cross-modality gradient realignment: modifying different CMA loss gradients for each sample triplet so that their gradient directions are more aligned; and (ii) gradient-based curriculum learning: leveraging the gradient conflict information as an indicator of sample noisiness to develop a curriculum learning strategy that prioritizes training on less noisy sample triplets. Applying these techniques to pre-training VATT on the HowTo100M dataset, we consistently improve its performance on different downstream tasks. Moreover, we are able to scale VATT pre-training to the more complicated, non-narrative Youtube8M dataset and further improve the state of the art.
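As a rough illustration of (i), one common way to realign conflicting gradients is a PCGrad-style projection: whenever two per-loss gradients point in opposing directions, the conflicting component is projected out. The sketch below assumes this flavor of realignment and a simple conflict score for (ii); the paper's exact rules may differ.

```python
import torch
import torch.nn.functional as F

def realign_gradients(grads):
    """PCGrad-style realignment over flattened per-loss gradients: whenever two
    gradients conflict (negative dot product), project the conflicting component
    out of one of them so their directions become more aligned."""
    realigned = [g.clone() for g in grads]
    for i, gi in enumerate(realigned):
        for j, gj in enumerate(grads):
            if i == j:
                continue
            dot = torch.dot(gi, gj)
            if dot < 0:                                    # conflicting directions
                gi -= dot / (gj.norm() ** 2 + 1e-12) * gj  # in-place update of realigned[i]
    return realigned

def conflict_score(grads):
    """Average negative cosine similarity over gradient pairs: larger values mark
    noisier triplets that a curriculum could down-weight or defer."""
    pairs = [(i, j) for i in range(len(grads)) for j in range(i + 1, len(grads))]
    sims = [F.cosine_similarity(grads[i], grads[j], dim=0) for i, j in pairs]
    return torch.stack([(-s).clamp(min=0) for s in sims]).mean()
```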
Stereo superpixel segmentation aims to group discrete pixels into perceptual regions by exploiting the left and right views in a more collaborative and efficient way. Existing superpixel segmentation algorithms mostly utilize color and spatial features as input, and the spatial information may impose strong constraints while exploiting the disparity information of stereo image pairs. To alleviate this issue, we propose a stereo superpixel segmentation method with a decoupling mechanism for spatial information in this work. To decouple stereo disparity information and spatial information, the spatial information is temporarily removed before fusing the features of the stereo image pair, and a decoupled stereo fusion module (DSFM) is proposed to handle stereo feature alignment as well as occlusion problems. Moreover, since spatial information is essential for superpixel segmentation, we further design a dynamic spatial embedding module (DSEM) to re-add the spatial information, and the weights of the spatial information (DF) are adjusted adaptively within DSEM to achieve finer segmentation. Comprehensive experimental results demonstrate that our method achieves state-of-the-art performance on the KITTI2015 and Cityscapes datasets, and its efficiency is also verified on salient object detection on the NJU2K dataset. The source code will be made publicly available after acceptance of the paper.
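A toy sketch of the decoupling idea described above, under the assumption that "decoupling" means fusing left/right appearance features without coordinates and then re-injecting a coordinate embedding with a learned, content-dependent weight; the internals of DSFM and DSEM are not specified by the abstract, so everything below is illustrative.

```python
import torch
import torch.nn as nn

class DecoupledStereoFusionSketch(nn.Module):
    """Fuse left/right features with spatial information removed, then re-add an
    adaptively weighted coordinate embedding (stand-ins for DSFM and DSEM).
    Stereo alignment and occlusion handling are omitted for brevity."""
    def __init__(self, c_feat=32):
        super().__init__()
        self.fuse = nn.Conv2d(2 * c_feat, c_feat, kernel_size=1)           # DSFM stand-in
        self.spatial_proj = nn.Conv2d(2, c_feat, kernel_size=1)            # embed (x, y) coords
        self.gate = nn.Sequential(nn.Conv2d(c_feat, 1, 1), nn.Sigmoid())   # DSEM weight stand-in

    def forward(self, left_feat, right_feat):
        fused = self.fuse(torch.cat([left_feat, right_feat], dim=1))       # no coordinates used
        b, _, h, w = fused.shape
        ys, xs = torch.meshgrid(torch.linspace(0, 1, h), torch.linspace(0, 1, w), indexing="ij")
        coords = torch.stack([xs, ys]).unsqueeze(0).expand(b, -1, -1, -1).to(fused)
        return fused + self.gate(fused) * self.spatial_proj(coords)        # re-add spatial info
```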
Storing information in DNA molecules has attracted great interest because of its advantages in longevity, high storage density, and low maintenance cost. A key step in the DNA storage pipeline is to efficiently cluster the retrieved DNA sequences according to their similarity. The Levenshtein distance is the most suitable metric for the similarity between two DNA sequences, but it suffers in terms of computational complexity and is hardly compatible with mature clustering algorithms. In this work, we propose a novel deep squared Euclidean embedding of DNA sequences using a Siamese neural network, squared Euclidean embedding, and chi-squared regression. The Levenshtein distance is approximated by the squared Euclidean distance between the embedding vectors, which is fast to compute and friendly to clustering algorithms. The proposed approach is analyzed theoretically and experimentally. The results show that the proposed embedding is efficient and robust.
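A minimal sketch of the training objective implied above: embed two DNA strings with a shared encoder and regress the squared Euclidean distance between the embeddings onto their Levenshtein distance. The encoder architecture is illustrative, and a plain MSE loss is used here as a stand-in for the paper's chi-squared regression.

```python
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Toy DNA-string encoder: one-hot (A, C, G, T) -> 1-D conv -> pooled vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Conv1d(4, dim, kernel_size=5, padding=2)
        self.out = nn.Linear(dim, dim)

    def forward(self, onehot):                       # onehot: (B, 4, L)
        h = torch.relu(self.conv(onehot)).mean(dim=2)
        return self.out(h)                           # (B, dim)

def siamese_loss(encoder, x1, x2, lev_dist):
    """Regress the squared Euclidean distance between the two embeddings onto the
    precomputed Levenshtein distance (MSE stand-in for chi-squared regression)."""
    d2 = ((encoder(x1) - encoder(x2)) ** 2).sum(dim=1)
    return torch.mean((d2 - lev_dist) ** 2)
```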
The key to human-object interaction (HOI) recognition is to infer the relationship between humans and objects. Recently, image-based HOI detection has made significant progress. However, there is still room to improve the performance of video HOI detection. Existing one-stage methods use well-designed end-to-end networks to detect a video segment and directly predict the interactions, which makes model learning and further optimization of the network more complicated. This paper introduces the Spatial Parsing and Dynamic Temporal Pooling (SPDTP) network, which takes the whole video as a spatio-temporal graph with human and object nodes as input. Unlike existing methods, our proposed network predicts the difference between interactive and non-interactive pairs through explicit spatial parsing and then performs interaction recognition. Moreover, we propose a learnable and differentiable Dynamic Temporal Module (DTM) to emphasize the keyframes of the video and suppress redundant frames. The experimental results show that SPDTP can pay more attention to active human-object pairs and valid keyframes. Overall, we achieve state-of-the-art performance on the CAD-120 and Something-Else datasets.
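One plausible reading of the DTM described above is a learnable soft pooling over frames: score each frame, normalize the scores over time, and take a weighted sum so that keyframes dominate. The sketch below assumes this reading; the published module may differ.

```python
import torch
import torch.nn as nn

class DynamicTemporalPooling(nn.Module):
    """Minimal sketch: score each frame, softmax over time, weighted sum."""
    def __init__(self, dim=256):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, frame_feats):                            # (B, T, dim)
        w = torch.softmax(self.scorer(frame_feats), dim=1)     # emphasize keyframes
        return (w * frame_feats).sum(dim=1)                    # (B, dim)
```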
The phase function is a key element of light propagation models for Monte Carlo (MC) simulation, and it is usually fitted with an analytic function with associated parameters. Machine learning methods have been reported to estimate the parameters of phase functions of a particular form, such as the Henyey-Greenstein phase function, but, to our knowledge, no studies have been performed to determine the form of the phase function. Here we design a convolutional neural network to estimate the phase function from images of diffusely reflected light without making any explicit assumption about the form of the phase function. Specifically, we use a Gaussian mixture model as an example to represent the phase function and learn the model parameters accurately. The Gaussian mixture model is chosen because it provides an analytic expression of the phase function, which facilitates deflection-angle sampling in MC simulation, and it does not significantly increase the number of free parameters. The proposed method is validated on MC-simulated reflectance images of typical biological tissues with different anisotropy factors. The effects of the field of view (FOV) and spatial resolution on the estimation error are analyzed to optimize the estimation method. The mean squared error of the phase function is 0.01 and the relative error of the anisotropy factor is 3.28%.
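A small sketch of why a Gaussian-mixture phase function is convenient inside an MC photon-transport loop: the mixture can be evaluated analytically and a deflection angle can be drawn by picking a component and sampling from it. The parameterization over cos(theta) below is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def gmm_phase_function(cos_theta, weights, means, sigmas):
    """Evaluate an (unnormalized) Gaussian-mixture phase function over cos(theta)."""
    cos_theta = np.atleast_1d(cos_theta)[:, None]              # (N, 1) against (K,) components
    comps = np.exp(-0.5 * ((cos_theta - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    return (weights * comps).sum(axis=1)                       # (N,)

def sample_deflection_angle(weights, means, sigmas, rng=np.random.default_rng()):
    """Draw a deflection angle: pick a mixture component, sample cos(theta) from it,
    and clip to the physical range [-1, 1]."""
    k = rng.choice(len(weights), p=weights / np.sum(weights))
    mu = np.clip(rng.normal(means[k], sigmas[k]), -1.0, 1.0)
    return np.arccos(mu)
```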
As a common image editing operation, image composition aims to cut the foreground from one image and paste it onto another image, resulting in a composite image. However, many issues can make the composite image unrealistic. These issues can be summarized as inconsistencies between the foreground and background, including appearance inconsistency (e.g., incompatible illumination), geometry inconsistency (e.g., unreasonable size), and semantic inconsistency (e.g., mismatched semantic context). Previous works divide the image composition task into multiple sub-tasks, each of which targets one or more of these issues. Specifically, object placement aims to find a reasonable scale, location, and shape for the foreground. Image blending aims to address the unnatural boundary between the foreground and background. Image harmonization aims to adjust the illumination statistics of the foreground. Shadow generation aims to produce plausible shadows for the foreground. By putting all of the above efforts together, we can obtain realistic composite images. To the best of our knowledge, there has been no previous survey on image composition. In this paper, we conduct a comprehensive survey over the sub-tasks of image composition. For each sub-task, we summarize traditional methods, deep-learning-based methods, datasets, and evaluation. We also point out the limitations of existing methods in each sub-task as well as the problems of the whole image composition task. Datasets and code for image composition are summarized at https://github.com/bcmi/awesome-image-composition.
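For reference, the naive cut-and-paste operation that the sub-tasks above then have to repair can be written as a simple alpha blend; the function name and array shapes are illustrative.

```python
import numpy as np

def composite(foreground, background, mask, top, left):
    """Naive cut-and-paste: alpha-blend the foreground onto the background at a
    given location. Real pipelines would follow with placement, blending,
    harmonization, and shadow generation as described above."""
    out = background.astype(np.float32).copy()
    h, w = foreground.shape[:2]
    region = out[top:top + h, left:left + w]
    alpha = mask.astype(np.float32)[..., None]          # (h, w, 1), values in [0, 1]
    out[top:top + h, left:left + w] = alpha * foreground + (1 - alpha) * region
    return out.astype(np.uint8)
```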
In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications, e.g., for medical purposes and in vehicular networks. Traditional cloud-based Machine Learning (ML) approaches require the data to be centralized in a cloud server or data center. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges and future research directions in FL.
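As a concrete illustration of the aggregation step described above, a federated-averaging style server update can be sketched in a few lines: clients send locally trained weights (never raw data) and the server averages them weighted by local dataset size. This is a generic FedAvg sketch, not any specific framework's API.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side aggregation: weight each client's model parameters by its
    local dataset size and sum, layer by layer."""
    total = float(sum(client_sizes))
    return [
        sum(n / total * layer for n, layer in zip(client_sizes, layer_group))
        for layer_group in zip(*client_weights)   # group the same layer across clients
    ]

# Example: two clients, a one-layer model, 30 vs. 10 local samples
w_global = fedavg([[np.ones((2, 2))], [np.zeros((2, 2))]], client_sizes=[30, 10])
```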
Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2 fps at 720p resolution.
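A simplified sketch of the single-network idea above: every location in the score map that exceeds a threshold contributes a quadrilateral by adding its predicted corner offsets to the location. Tensor shapes, the threshold, and the 4x downsampling factor are assumptions, and locality-aware suppression of overlapping boxes is omitted.

```python
import numpy as np

def decode_quads(score_map, geo_map, score_thresh=0.8, scale=4):
    """Decode per-pixel quadrilateral predictions into candidate word boxes.
    score_map: (H, W) text confidence; geo_map: (H, W, 8) corner offsets."""
    ys, xs = np.where(score_map > score_thresh)
    quads = []
    for y, x in zip(ys, xs):
        offsets = geo_map[y, x].reshape(4, 2)          # four (dx, dy) corner offsets
        corners = np.array([x, y]) * scale + offsets   # back to input-image coordinates
        quads.append((corners, score_map[y, x]))
    return quads
```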
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
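A rough sketch of the kind of block described above: an inverted residual whose expanded features pass through lightweight self-attention (for long-distance interactions) and a depth-wise convolution (for short-distance dependency) before projection and a residual connection. This is a simplified, assumed rendering; the published EMO/iRMB design differs in details such as windowed attention.

```python
import torch
import torch.nn as nn

class InvertedResidualMobileBlockSketch(nn.Module):
    """Inverted residual with self-attention and a depth-wise conv in the expanded space."""
    def __init__(self, dim=64, expand=4, n_heads=4):
        super().__init__()
        hidden = dim * expand
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.project = nn.Conv2d(hidden, dim, 1)
        self.act = nn.SiLU()

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        z = self.act(self.expand(x))
        tokens = z.flatten(2).transpose(1, 2)              # (B, H*W, hidden); full attention for simplicity
        z = z + self.attn(tokens, tokens, tokens)[0].transpose(1, 2).reshape(b, -1, h, w)
        z = self.act(self.dwconv(z))                       # local, depth-wise modeling
        return x + self.project(z)                         # residual connection
```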