This paper proposes a new regularization algorithm referred to as macro-block dropout. Overfitting has been a difficult problem in training large neural network models, and dropout has proven to be a simple yet very effective regularizer that prevents complex co-adaptations during training. In our work, we define a macro-block that contains a large number of units from the input to a Recurrent Neural Network (RNN). Rather than applying dropout to each unit, we apply random dropout to each macro-block. Even with a constant average dropout rate, this has the effect of applying a different dropout rate to each layer, which yields better regularization. In our experiments with a Recurrent Neural Network-Transducer (RNN-T), this algorithm yields relative Word Error Rate (WER) improvements of 4.30% and 6.13% over conventional dropout on LibriSpeech test-clean and test-other. With an Attention-based Encoder-Decoder (AED) model, it yields relative WER improvements of 4.36% and 5.85% over conventional dropout on the same test sets.
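The core idea of the abstract above can be sketched in a few lines: partition a layer's input units into blocks, draw one Bernoulli variable per block rather than per unit, and rescale survivors as in inverted dropout. This is a minimal illustration, not the paper's implementation; the block partitioning and `num_blocks` choice here are assumptions for demonstration.

```python
import numpy as np

def macro_block_dropout(x, num_blocks=4, p=0.5, rng=None):
    """Drop contiguous macro-blocks of units instead of individual units.

    x: 1-D activation vector whose length is divisible by num_blocks.
    p: probability of dropping each macro-block.
    Survivors are scaled by 1/(1-p) (inverted dropout) so the expected
    activation is unchanged at training time.
    """
    rng = np.random.default_rng(rng)
    blocks = x.reshape(num_blocks, -1)
    keep = rng.random(num_blocks) >= p           # one Bernoulli draw per block
    mask = np.repeat(keep, blocks.shape[1]).astype(x.dtype)
    return x * mask / (1.0 - p)
```

Because whole blocks are zeroed together, the realized dropout rate of any single forward pass varies around `p`, which is the layer-to-layer variation the abstract credits for the improved regularization.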
Transformers have become the default architecture in computer vision, but understanding what drives their predictions remains a challenging problem. Current explanation methods rely on attention values or input gradients, but these provide a limited view of the model's dependencies. Shapley values offer a theoretically sound alternative, but their computational cost makes them impractical for large, high-dimensional models. In this work, we aim to make Shapley values practical for vision transformers (ViTs). To do so, we first leverage an attention-masking approach to evaluate ViTs with partial information, and we then develop a procedure to generate Shapley value explanations via a separate, learned explainer model. Our experiments compare Shapley values against numerous baseline methods (e.g., attention rollout, GradCAM, LRP), and we find that our approach provides more accurate explanations than existing methods for ViTs.
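To see why a learned explainer is needed, it helps to recall what exact Shapley values cost. A minimal sketch (a generic cooperative-game computation, not the paper's method) averages each player's marginal contribution over all orderings, which is factorial in the number of players:

```python
import math
from itertools import permutations

def shapley_values(players, value_fn):
    """Exact Shapley values: average each player's marginal contribution
    over every ordering of the players. The cost is O(n!), which is why
    amortized/learned approximations are needed for image patches."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value_fn(frozenset(coalition))
            coalition.add(p)
            phi[p] += value_fn(frozenset(coalition)) - before
    n_fact = math.factorial(len(players))
    return {p: v / n_fact for p, v in phi.items()}
```

For an additive game the Shapley value of each player is exactly its own weight, which makes a convenient sanity check; for a ViT, each "player" would be an image patch and `value_fn` a model evaluation under attention masking, far too many coalitions to enumerate.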
Intonation plays an important role in delivering the intention of a speaker. However, current end-to-end TTS systems often fail to model proper intonations. To alleviate this problem, we propose a novel, intuitive method to synthesize speech in different intonations using predefined intonation templates. Prior to TTS model training, the speech data are grouped into intonation templates in an unsupervised manner. Two proposed modules are added to the end-to-end TTS framework: an intonation predictor and an intonation encoder. The intonation predictor recommends a suitable intonation template for the given text. The intonation encoder, attached to the text encoder output, synthesizes speech abiding by the requested intonation template. The main contributions of our paper are: (a) an easy-to-use intonation control system covering a wide range of users; (b) better performance in wrapping speech in a requested intonation, with improved objective and subjective evaluations; and (c) the incorporation of a pre-trained language model for intonation modelling. Audio samples are available at https://srtts.github.io/IntoTTS.
In this paper, we propose a three-stage training methodology to improve the speech recognition accuracy of low-resource languages. We explore and propose an effective combination of techniques such as transfer learning, encoder freezing, data augmentation using Text-To-Speech (TTS), and Semi-Supervised Learning (SSL). To improve the accuracy of a low-resource Italian ASR system, we leverage a well-trained English model, an unlabeled text corpus, and an unlabeled audio corpus using transfer learning, TTS augmentation, and SSL, respectively. In the first stage, we use transfer learning from a well-trained English model. This primarily helps in learning acoustic information from a resource-rich language, and reduces the relative Word Error Rate (WER) by around 24% over the baseline. In the second stage, we incorporate language information into the model via TTS data augmentation using the unlabeled text data, and we also explore freezing the acoustic encoder at this stage. TTS data augmentation helps us further reduce the WER by ~21% relative. Finally, in the third stage, we reduce the WER by another 4% relative by using SSL on the unlabeled audio data. Overall, our two-pass speech recognition system, with Monotonic Chunkwise Attention (MoChA) in the first pass and full attention in the second pass, achieves a WER reduction of ~42% relative to the baseline.
Simultaneous translation systems start producing output while processing the partial source sentence in the input stream. These systems need to decide when to read more input and when to write the output. Such decisions depend on the structure of the source/target languages and on the information contained in the partial input sequence; hence, the read/write decision policy remains the same across input modalities, i.e., speech and text. This motivates us to leverage the text transcripts corresponding to the speech input to improve simultaneous speech-to-text translation (SimulST). We propose improving the decision policy of a SimulST system by jointly training it with a simultaneous text-to-text translation (SimulMT) task. We also extend several techniques from the offline speech translation domain to explore the role of the SimulMT task in improving SimulST performance. Overall, we achieve 34.66% / 4.5 BLEU improvement over the SimulST baseline across different latency regimes of the English-German (EnDe) SimulST task.
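The read/write actions the abstract refers to can be illustrated with the classic fixed wait-k schedule (a standard baseline policy from the simultaneous-translation literature, not this paper's learned policy): read k source tokens first, then alternate writing and reading until both sides are exhausted.

```python
def wait_k_policy(source_len, target_len, k=3):
    """wait-k read/write schedule sketch: read k source tokens up front,
    then interleave one WRITE per additional READ. Learned policies such
    as the one in the abstract replace this fixed rule with a decision
    made from the partial input."""
    actions = []
    read = write = 0
    while write < target_len:
        if read < min(k + write, source_len):
            actions.append("READ")
            read += 1
        else:
            actions.append("WRITE")
            write += 1
    return actions
```

For equal-length source and target with k=2, the schedule front-loads two reads and then alternates, trailing extra writes once the source is fully consumed.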
We present the first unified theoretical analysis of mixed sample data augmentation (MSDA), such as Mixup and CutMix. Our theoretical results show that, regardless of the choice of mixing strategy, MSDA behaves as a pixel-level regularization of the underlying training loss and a regularization of the first-layer parameters. Similarly, our theoretical results support that MSDA training strategies can improve adversarial robustness and generalization compared with vanilla training. Using these results, we provide a high-level understanding of how different design choices of MSDA work. For example, we show that the most popular MSDA methods, Mixup and CutMix, behave differently: CutMix regularizes the input gradients by pixel distances, while Mixup regularizes the input gradients regardless of pixel distances. Our theoretical results also show that the optimal MSDA strategy depends on the task, dataset, and model parameters. From these observations, we propose generalized MSDAs: a Hybrid version of Mixup and CutMix (HMix) and Gaussian Mixup (GMix), simple extensions of Mixup and CutMix. Our proposed methods enjoy the benefits of both Mixup and CutMix, while our implementation is highly efficient, with almost negligible additional computational cost compared with Mixup and CutMix. Our empirical study shows that HMix and GMix outperform previous state-of-the-art MSDA methods on CIFAR-100 and ImageNet classification tasks. The source code is available at https://github.com/naver-ai/hmix-gmix
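For readers unfamiliar with the baseline methods the analysis covers, here is a minimal sketch of vanilla Mixup (the standard formulation, not the proposed HMix/GMix variants, whose details are in the linked repository): examples and their one-hot labels are convex-combined with a Beta-distributed coefficient.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Vanilla Mixup: convex-combine two examples and their labels.

    lam ~ Beta(alpha, alpha); y1 and y2 are assumed to be one-hot
    (or otherwise normalized) label vectors.
    """
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam
```

CutMix instead pastes a rectangular patch of `x2` into `x1`, which is why, per the analysis above, its implicit input-gradient regularization is sensitive to pixel distances while Mixup's is not.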
We consider the problem of structured tensor denoising in the presence of unknown permutations. Such data problems commonly arise in recommendation systems, neuroimaging, community detection, and multiway comparison applications. Here, we develop a general family of smooth tensor models up to arbitrary index permutations; the model incorporates the popular tensor block models and Lipschitz hypergraphon models as special cases. We show that a constrained least-squares estimator in the block-wise polynomial family achieves the minimax error bound. A phase transition phenomenon is revealed with respect to the smoothness threshold needed for optimal recovery. In particular, we find that a polynomial of degree up to $(m-2)(m+1)/2$ is sufficient for accurate recovery of order-$m$ tensors, whereas higher degrees yield no further benefit. This phenomenon reveals an intrinsic distinction between smooth tensor estimation problems with and without unknown permutations. Furthermore, we provide an efficient polynomial-time Borda count algorithm that provably achieves the optimal rate under a monotonicity assumption. The efficacy of our procedure is demonstrated through simulations and an analysis of Chicago crime data.
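The Borda count idea can be illustrated in its simplest form: rank the slices of the observed tensor along one mode by their mean value, using those average scores as a proxy for the unknown monotone permutation. This is a toy sketch under the monotonicity assumption, not the paper's full estimator.

```python
import numpy as np

def borda_count_order(t, axis=0):
    """Borda count sketch: score each slice of a (noisy) tensor along one
    mode by its mean over all other modes, and return the slice ordering.
    Under monotonicity, sorting by these scores estimates the latent
    permutation for that mode."""
    other_axes = tuple(i for i in range(t.ndim) if i != axis)
    scores = t.mean(axis=other_axes)
    return np.argsort(scores)
```

Applying the returned ordering to each mode un-shuffles the tensor so that a smooth/monotone signal can then be estimated by the block-wise polynomial least squares described above.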
3D-aware image synthesis focuses on preserving spatial consistency while generating high-resolution images with fine details. Recently, Neural Radiance Field (NeRF) has been introduced for synthesizing novel views with low computational cost and superior performance. While several works investigate generative NeRFs and show remarkable achievements, they cannot handle conditional and continuous feature manipulation in the generation procedure. In this work, we introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated photorealistic 3D-consistent images by projecting conditional features to the generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated on three image datasets: AFHQ, CelebA, and Cars. As a result, our model shows strong 3D consistency with fine details and smooth interpolation in conditional feature manipulation. For instance, $\text{C}^{3}$G-NeRF exhibits a Fr\'echet Inception Distance (FID) of 7.64 in 3D-aware face image synthesis at a $\text{128}^{2}$ resolution. Additionally, we provide FIDs of generated 3D-aware images for each class of the datasets, as $\text{C}^{3}$G-NeRF makes it possible to synthesize class-conditional images.
In both terrestrial and marine ecology, physical tagging is a frequently used method to study population dynamics and behavior. However, such tagging techniques are increasingly being replaced by individual re-identification using image analysis. This paper introduces a contrastive learning-based model for identifying individuals. The model uses the first parts of the Inception v3 network, supported by a projection head, and we use contrastive learning to find similar or dissimilar image pairs from a collection of uniform photographs. We apply this technique for corkwing wrasse, Symphodus melops, an ecologically and commercially important fish species. Photos are taken during repeated catches of the same individuals from a wild population, where the intervals between individual sightings might range from a few days to several years. Our model achieves a one-shot accuracy of 0.35, a 5-shot accuracy of 0.56, and a 100-shot accuracy of 0.88, on our dataset.
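The pairwise similar/dissimilar training signal described above can be made concrete with a margin-based contrastive loss on a pair of embeddings (the classic Hadsell-style formulation; the paper's exact loss on the Inception v3 + projection-head embeddings may differ):

```python
import numpy as np

def pair_contrastive_loss(z1, z2, similar, margin=1.0):
    """Margin-based contrastive loss on a pair of embeddings.

    similar=True pulls the pair together (squared distance penalty);
    similar=False pushes the pair apart, but only while the pair is
    closer than the margin."""
    d = np.linalg.norm(z1 - z2)
    if similar:
        return 0.5 * d ** 2
    return 0.5 * max(margin - d, 0.0) ** 2
```

Trained this way, photographs of the same individual fish map to nearby embeddings, so re-identification reduces to nearest-neighbor lookup in embedding space, which is what the k-shot accuracies reported above measure.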
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
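The greedy policy described above is easy to state in code. In this sketch, `cmi_oracle` stands in for the conditional-mutual-information oracle that is generally unavailable and that the paper's amortized network learns to approximate; the helper name and its interface are illustrative assumptions.

```python
def greedy_feature_selection(candidates, budget, cmi_oracle):
    """Greedy dynamic feature selection sketch: at each step, query the
    feature with the highest conditional mutual information with the
    label, given the features selected so far.

    cmi_oracle(feature, selected) -> float is a stand-in for the oracle
    CMI; in practice it would be approximated by a learned model."""
    selected = []
    remaining = list(candidates)
    for _ in range(budget):
        best = max(remaining, key=lambda f: cmi_oracle(f, tuple(selected)))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Note that conditioning matters: a feature that looks weak marginally can become the best choice once another feature is known, which is why the policy is sequential rather than a one-shot ranking.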