On-device end-to-end (E2E) models have shown improvements over conventional models on English Voice Search tasks in both quality and latency. E2E models have also shown promising results for multilingual automatic speech recognition (ASR). In this paper, we extend our previous capacity solution to streaming applications and present a streaming multilingual E2E ASR system that runs fully on device, with quality and latency comparable to individual monolingual models. To achieve this, we propose an Encoder Endpointer model and an End-of-Utterance (EOU) Joint Layer for a better quality and latency trade-off. Our system is built in a language-agnostic manner, allowing it to natively support intersentential code switching in real time. To address the feasibility concerns of large models, we conducted on-device profiling and replaced the time-consuming LSTM decoder with the recently developed Embedding decoder. With these changes, we managed to run such a system on a mobile device in less than real time.
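The abstract does not detail the Encoder Endpointer internals; a hypothetical sketch of the general idea, assuming a frame-level classifier head on the shared encoder output (all names and the threshold logic below are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class EncoderEndpointer(nn.Module):
    """Hypothetical sketch: a lightweight head on the shared ASR encoder
    that emits a per-frame end-of-utterance (EOU) probability, rather than
    running a separate endpointer model alongside the recognizer."""

    def __init__(self, encoder_dim: int = 512):
        super().__init__()
        self.eou_head = nn.Linear(encoder_dim, 1)

    def forward(self, encoder_frames: torch.Tensor) -> torch.Tensor:
        # encoder_frames: (batch, time, encoder_dim) -> (batch, time)
        return torch.sigmoid(self.eou_head(encoder_frames)).squeeze(-1)

def should_close_mic(eou_probs: torch.Tensor,
                     threshold: float = 0.9, patience: int = 3) -> bool:
    """Close the microphone once the EOU probability of a streaming
    utterance stays above threshold for `patience` trailing frames."""
    hot = eou_probs > threshold  # (time,) bool, trailing frames last
    return hot.numel() >= patience and bool(hot[-patience:].all())
```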
Language model fusion helps smart assistants recognize words that are rare in acoustic data but abundant in text-only corpora (such as typed search logs). However, such corpora have properties that hinder downstream performance, including being (1) too large, (2) beset with domain-mismatched content, and (3) heavy-headed rather than heavy-tailed (i.e., full of duplicated search queries such as "weather"). We show that three simple strategies for selecting language modeling data can dramatically improve rare-word recognition without harming overall performance. First, to address the heavy-headedness, we downsample the data according to a soft log function, which de-weights high-frequency (head) sentences. Second, to encourage rare-word exposure, we explicitly filter for words that are rare in the acoustic data. Third, we tackle domain mismatch via perplexity-based contrastive selection, filtering for examples that match the target domain. We down-select a large corpus of web search queries by a factor of 53x and achieve better LM perplexity than without down-selection. When shallow-fused with a state-of-the-art production speech engine, our LM achieves a WER reduction of up to 24% relative on rare-word sentences (with no overall degradation) compared to a baseline LM trained on the raw corpus. These gains are further validated through favorable side-by-side evaluations on live Voice Search traffic.
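A minimal sketch of the three selection strategies; the abstract does not give the exact soft-log schedule, rarity threshold, or scoring models, so the functional forms below are assumptions (the contrastive step follows the classic Moore-Lewis formulation):

```python
import math
import random
from collections import Counter
from typing import Callable

def soft_log_downsample(sentences: list[str], seed: int = 0) -> list[str]:
    """Keep head (high-frequency) sentences with a probability that decays
    roughly logarithmically in their count; unique tail sentences survive
    untouched. The paper's exact schedule may differ from this assumed form."""
    rng = random.Random(seed)
    counts = Counter(sentences)
    return [s for s in sentences
            if rng.random() < min(1.0, (1.0 + math.log(counts[s])) / counts[s])]

def filter_for_rare_words(sentences: list[str],
                          acoustic_counts: dict[str, int],
                          rare_below: int = 10) -> list[str]:
    """Keep only sentences containing at least one word that is rare
    in the acoustic training data."""
    return [s for s in sentences
            if any(acoustic_counts.get(w, 0) < rare_below for w in s.split())]

def contrastive_select(sentences: list[str],
                       target_nll: Callable[[str], float],
                       background_nll: Callable[[str], float],
                       keep_fraction: float = 0.5) -> list[str]:
    """Perplexity-based contrastive selection: rank by how much more likely
    the target-domain LM finds a sentence than a background LM, then keep
    the best-matching fraction."""
    scored = sorted(sentences, key=lambda s: target_nll(s) - background_nll(s))
    return scored[:int(len(scored) * keep_fraction)]
```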
For action recognition, models are typically pre-trained on object recognition with images, such as ImageNet, and later fine-tuned on the target action recognition task with videos. This approach has achieved good empirical performance, especially with recent transformer-based video architectures. While many recent works aim to design more advanced transformer architectures for action recognition, less effort has gone into how to train video transformers. In this work, we explore several training paradigms and present two findings. First, video transformers benefit from joint training on diverse video datasets and label spaces (e.g., Kinetics is appearance-focused while Something-Something is motion-focused). Second, by further co-training with images (as single-frame videos), video transformers learn even better video representations. We term this approach Co-training Videos and Images for Action Recognition (CoVeR). In particular, when pre-trained on ImageNet-21K with a TimeSformer-based architecture, CoVeR improves Kinetics-400 Top-1 accuracy by 2.4%, Kinetics-600 by 2.3%, and Something-Something-v2 by 2.3%. When pre-trained on the larger-scale image datasets used by the previous state of the art, CoVeR achieves the best results on Kinetics-400 (87.2%), Kinetics-600 (87.9%), Kinetics-700 (79.8%), Something-Something-v2 (70.9%), and Moments-in-Time (46.1%), with a simple spatiotemporal video transformer.
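A sketch of the co-training mechanics under stated assumptions: images are lifted to one-frame videos so they can share the spatiotemporal backbone, and each dataset's label space gets its own head (the batch sampling and loss-mixing scheme is not specified in the abstract):

```python
import torch
import torch.nn as nn

def image_as_single_frame_video(images: torch.Tensor) -> torch.Tensor:
    """Lift an image batch (B, C, H, W) to a one-frame video (B, 1, C, H, W)
    so image and video datasets can feed the same spatiotemporal backbone."""
    return images.unsqueeze(1)

class CoTrainingModel(nn.Module):
    """Shared backbone with one classification head per dataset, since each
    dataset (Kinetics, Something-Something, ImageNet, ...) has its own
    label space."""

    def __init__(self, backbone: nn.Module, feat_dim: int,
                 num_classes: dict[str, int]):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleDict({name: nn.Linear(feat_dim, n)
                                    for name, n in num_classes.items()})

    def forward(self, clips: torch.Tensor, dataset: str) -> torch.Tensor:
        # clips: (B, frames, C, H, W); frames == 1 for image batches
        return self.heads[dataset](self.backbone(clips))

# A co-training step then draws batches from several datasets and sums the
# per-dataset cross-entropy losses before the optimizer step.
```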
We summarize the results of a host of efforts using giant automatic speech recognition (ASR) models pre-trained on a large, diverse unlabeled dataset containing approximately a million hours of audio. We find that the combination of pre-training, self-training, and scaling up model size greatly increases data efficiency, even for very large tasks with tens of thousands of hours of labeled data. In particular, on an ASR task with 34k hours of labeled data, by fine-tuning an 8-billion-parameter pre-trained Conformer model we can match state-of-the-art (SoTA) performance with only 3% of the training data, and significantly improve upon SoTA with the full training set. We also report the universal benefits gained from using large pre-trained and self-trained models on a broad set of downstream tasks that cover a wide range of speech domains and span multiple orders of magnitude of dataset size, including obtaining SoTA performance on many public benchmarks. In addition, we utilize the learned representations of pre-trained networks to achieve SoTA results on non-ASR tasks.
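A sketch of the self-training ingredient, assuming a pseudo-labeling round with confidence filtering (the filtering rule and the teacher interface here are illustrative; the pre-training and model-scaling parts of the recipe are not shown):

```python
from typing import Callable, Iterable

def pseudo_label_round(teacher_transcribe: Callable[[bytes], tuple[str, float]],
                       unlabeled_audio: Iterable[bytes],
                       min_confidence: float = 0.9) -> list[tuple[bytes, str]]:
    """One self-training round: a fine-tuned teacher transcribes unlabeled
    audio, and only confident hypotheses become pseudo-labels for the
    student. The confidence threshold is an assumed detail."""
    pseudo = []
    for audio in unlabeled_audio:
        hypothesis, confidence = teacher_transcribe(audio)
        if confidence >= min_confidence:
            pseudo.append((audio, hypothesis))
    return pseudo

# The student is then trained on labeled data mixed with these pseudo-labeled
# pairs (typically with augmentation) and can serve as the next teacher.
```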
We present GSPMD, an automatic, compiler-based parallelization system for common machine learning computations. It allows users to write programs in the same way as for a single device, and then give hints through a few annotations on how to distribute tensors, based on which GSPMD will parallelize the computation. Its representation of partitioning is simple yet general, allowing it to express different or mixed paradigms of parallelism across a wide variety of models. GSPMD infers the partitioning for every operator from limited user annotations, making it convenient to scale up existing single-device programs. It solves several technical challenges for production usage, allowing GSPMD to achieve 50% to 62% compute utilization on up to 2048 Cloud TPUv3 cores for models with up to one trillion parameters.
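GSPMD is the SPMD partitioner that backs the sharding annotations exposed in JAX, so the annotate-a-few-tensors workflow can be sketched there (this assumes 8 accelerator devices, e.g. a TPU v3-8 slice):

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange 8 devices into a 2x4 logical mesh: "data" x "model" parallelism.
devices = np.array(jax.devices()[:8]).reshape(2, 4)
mesh = Mesh(devices, axis_names=("data", "model"))

@jax.jit
def layer(x, w):
    # Written exactly as single-device code; no communication primitives.
    return jnp.tanh(x @ w)

x = jnp.ones((32, 1024))
w = jnp.ones((1024, 4096))

# Annotate only the inputs: the batch is split over "data", the weight
# matrix over "model". The partitioner propagates shardings through every
# operator and inserts the needed collectives.
x = jax.device_put(x, NamedSharding(mesh, P("data", None)))
w = jax.device_put(w, NamedSharding(mesh, P(None, "model")))

y = layer(x, w)  # compiled as one SPMD program across all 8 devices
```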
We leverage the unlabeled audio of the Libri-Light dataset together with recent developments in semi-supervised learning to obtain state-of-the-art results in automatic speech recognition. More precisely, we carry out noisy student training using giant Conformer models pre-trained with wav2vec 2.0. By doing so, we are able to achieve word error rates (WERs) of 1.4%/2.6% on the LibriSpeech test/test-other sets, against the current state of the art of 1.7%/3.3%.
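The recipe leans on wav2vec 2.0 pre-training; a sketch of that method's contrastive objective at a single masked time step (shapes and the temperature are illustrative, and the full method's masking strategy and diversity loss are omitted):

```python
import torch
import torch.nn.functional as F

def wav2vec2_contrastive_loss(context: torch.Tensor,
                              quantized_target: torch.Tensor,
                              distractors: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """Identify the true quantized latent for a masked time step among
    distractors sampled from other masked steps, via cosine similarity.
    Shapes: context (B, D), quantized_target (B, D), distractors (B, K, D)."""
    candidates = torch.cat([quantized_target.unsqueeze(1), distractors], dim=1)
    sims = F.cosine_similarity(context.unsqueeze(1), candidates, dim=-1)
    # The true target sits at index 0 of each candidate set.
    labels = torch.zeros(context.size(0), dtype=torch.long,
                         device=context.device)
    return F.cross_entropy(sims / temperature, labels)
```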
Recently, Transformer and convolutional neural network (CNN) based models have shown promising results in automatic speech recognition (ASR), outperforming recurrent neural networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolutional neural networks and Transformers to model both the local and global dependencies of an audio sequence in a parameter-efficient way. To this end, we propose the convolution-augmented Transformer for speech recognition, named Conformer. Conformer significantly outperforms previous Transformer- and CNN-based models, achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves a WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/test-other. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters.
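The block structure the paper describes, two macaron half-step feed-forward modules sandwiching self-attention and a convolution module, can be sketched in PyTorch; standard multi-head attention stands in for the paper's relative-position variant, and dropout is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvModule(nn.Module):
    """Conformer convolution module: pointwise conv + GLU, depthwise conv,
    BatchNorm, swish, pointwise conv."""

    def __init__(self, d_model: int, kernel_size: int = 31):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.pointwise1 = nn.Conv1d(d_model, 2 * d_model, 1)
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.batch_norm = nn.BatchNorm1d(d_model)
        self.pointwise2 = nn.Conv1d(d_model, d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # (B, T, D)
        y = self.norm(x).transpose(1, 2)                 # (B, D, T)
        y = F.glu(self.pointwise1(y), dim=1)             # back to D channels
        y = F.silu(self.batch_norm(self.depthwise(y)))
        return self.pointwise2(y).transpose(1, 2)        # (B, T, D)

class ConformerBlock(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4,
                 ffn_mult: int = 4, kernel_size: int = 31):
        super().__init__()
        def ffn() -> nn.Sequential:
            return nn.Sequential(nn.LayerNorm(d_model),
                                 nn.Linear(d_model, ffn_mult * d_model),
                                 nn.SiLU(),
                                 nn.Linear(ffn_mult * d_model, d_model))
        self.ffn1, self.ffn2 = ffn(), ffn()
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv = ConvModule(d_model, kernel_size)
        self.final_norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # (B, T, D)
        x = x + 0.5 * self.ffn1(x)                        # macaron half-step
        a = self.attn_norm(x)
        x = x + self.attn(a, a, a, need_weights=False)[0]
        x = x + self.conv(x)
        x = x + 0.5 * self.ffn2(x)                        # macaron half-step
        return self.final_norm(x)
```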
Model efficiency has become increasingly important in computer vision. In this paper, we systematically study neural network architecture design choices for object detection and propose several key optimizations to improve efficiency. First, we propose a weighted bi-directional feature pyramid network (BiFPN), which allows easy and fast multi-scale feature fusion; second, we propose a compound scaling method that uniformly scales the resolution, depth, and width for all backbone, feature network, and box/class prediction networks at the same time. Based on these optimizations and better backbones, we have developed a new family of object detectors, called EfficientDet, which consistently achieve much better efficiency than prior art across a wide spectrum of resource constraints. In particular, with a single model and single-scale inference, our EfficientDet-D7 achieves state-of-the-art 55.1 AP on COCO test-dev with 77M parameters and 410B FLOPs, being 4x-9x smaller and using 13x-42x fewer FLOPs than previous detectors. Code is available at https://github.com/google/automl/tree/master/efficientdet.
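The compound scaling can be written down directly from the paper's heuristic equations; note that the published D7 configuration deviates slightly from these formulas:

```python
def efficientdet_scaling(phi: int) -> dict[str, int]:
    """EfficientDet compound scaling by coefficient phi, following the
    paper's equations: input resolution, BiFPN width/depth, and box/class
    head depth all grow together."""
    return {
        "input_resolution": 512 + phi * 128,
        "bifpn_width": int(64 * (1.35 ** phi)),
        "bifpn_depth": 3 + phi,
        "box_class_depth": 3 + phi // 3,
    }

# e.g. efficientdet_scaling(0) gives the D0 configuration: 512px input,
# a 64-channel BiFPN with 3 layers, and 3-layer box/class heads.
```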
We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm, and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small, which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder, Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state-of-the-art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 20% compared to MobileNetV2. MobileNetV3-Small is 6.6% more accurate compared to a MobileNetV2 model with comparable latency. MobileNetV3-Large detection is over 25% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 34% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation.
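Among the paper's architecture advances is the hard-swish activation, a piecewise-linear approximation of swish that is cheap to compute and quantize on mobile CPUs; a minimal sketch:

```python
import torch
import torch.nn.functional as F

def hard_swish(x: torch.Tensor) -> torch.Tensor:
    """h-swish(x) = x * ReLU6(x + 3) / 6, approximating x * sigmoid(x)
    with only piecewise-linear ops (PyTorch also ships this as
    F.hardswish)."""
    return x * F.relu6(x + 3.0) / 6.0
```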
Current state-of-the-art convolutional architectures for object detection are manually designed. Here we aim to learn a better architecture of feature pyramid network for object detection. We adopt Neural Architecture Search and discover a new feature pyramid architecture in a novel scalable search space covering all cross-scale connections. The discovered architecture, named NAS-FPN, consists of a combination of top-down and bottom-up connections to fuse features across scales. NAS-FPN, combined with various backbone models in the RetinaNet framework, achieves a better accuracy and latency tradeoff compared to state-of-the-art object detection models. NAS-FPN improves mobile detection accuracy by 2 AP compared to the state-of-the-art SSDLite with MobileNetV2 model in [32] and achieves 48.3 AP, which surpasses Mask R-CNN [10] detection accuracy with less computation time.
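Each cross-scale connection the search discovers amounts to resampling one feature map to another's resolution and merging them; a sketch of that fusion primitive, using the nearest-neighbor upsampling and max pooling the paper employs for resampling (sum is one of the two merge ops in the search space; the attention-based merge is omitted):

```python
import torch
import torch.nn.functional as F

def resample(feat: torch.Tensor, out_hw: tuple[int, int]) -> torch.Tensor:
    """Match a feature map (B, C, H, W) to a target spatial size:
    nearest-neighbor upsampling to grow, max pooling to shrink.
    Assumes height and width scale together across pyramid levels."""
    h, w = feat.shape[-2:]
    if (h, w) == out_hw:
        return feat
    if h < out_hw[0]:
        return F.interpolate(feat, size=out_hw, mode="nearest")
    return F.adaptive_max_pool2d(feat, out_hw)

def merge_sum(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Sum-merge two cross-scale features at b's resolution."""
    return resample(a, tuple(b.shape[-2:])) + b
```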