Self-supervised learning (SSL) in speech involves training a speech representation network on a large-scale unannotated speech corpus, and then applying the learned representations to downstream tasks. Since the downstream tasks of SSL learning in speech mostly focus on the content information in speech, the most desirable speech representations should be able to disentangle unwanted variations, such as speaker variations, from the content. However, disentangling speakers is very challenging, because removing the speaker information could easily also result in a loss of content, and the damage of the latter usually far outweighs the benefit of the former. In this paper, we propose a new SSL method that can achieve speaker disentanglement without severe loss of content. Our approach is adapted from the HuBERT framework, and incorporates disentangling mechanisms to regularize both the teacher labels and the learned representations. We evaluate the benefit of speaker disentanglement on a set of content-related downstream tasks, and observe a consistent and notable performance advantage of our speaker-disentangled representations.
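The abstract does not spell out the disentangling mechanisms, so the following is only a generic sketch of two common representation-level regularizers (per-utterance instance normalization and an adversarial speaker probe with gradient reversal), not the paper's actual design; the class and parameter names are our own placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, g):
        return -g

class SpeakerDisentangledEncoder(nn.Module):
    """Hypothetical sketch: a frame-level encoder whose outputs are
    instance-normalized per utterance, plus an adversarial speaker probe."""
    def __init__(self, encoder: nn.Module, dim: int, num_speakers: int):
        super().__init__()
        self.encoder = encoder                      # any HuBERT-style encoder
        self.spk_head = nn.Linear(dim, num_speakers)

    def forward(self, wav, spk_ids, lam=0.1):
        h = self.encoder(wav)                       # (batch, frames, dim)
        # Remove per-utterance statistics that tend to carry speaker identity.
        h = (h - h.mean(1, keepdim=True)) / (h.std(1, keepdim=True) + 1e-5)
        # Penalize how well a probe recovers the speaker from pooled features;
        # the reversed gradient pushes the encoder toward speaker invariance.
        logits = self.spk_head(GradReverse.apply(h.mean(1)))
        adv = nn.functional.cross_entropy(logits, spk_ids)
        return h, lam * adv
```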
An unsupervised text-to-speech synthesis (TTS) system learns to generate the speech waveform corresponding to any written sentence in a language by observing: 1) a collection of untranscribed speech waveforms in that language; 2) a collection of texts written in that language, without access to any transcribed speech. Developing such a system could significantly improve the availability of speech technology for languages without a large amount of parallel speech and text data. This paper proposes an unsupervised TTS system consisting of an alignment module that outputs pseudo-text and a synthesis module that uses pseudo-text for training and real text for inference. Our unsupervised system can achieve performance comparable to the supervised system in seven languages with about 10-20 hours of speech each. A careful study on the effect of text units and vocoders is also conducted to better understand which factors may affect unsupervised TTS performance. Samples generated by our models can be found at https://cactuswiththoughts.github.io/unsuptts-demo, and our code can be found at https://github.com/lwang114/unsuptts.
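To make the two-module split concrete, here is a minimal data-flow sketch of our own; the trivial "aligner" and lookup "synthesizer" are stand-ins and bear no relation to the released code. It only illustrates that training consumes pseudo-text produced by alignment, while inference consumes units derived from real text.

```python
from collections import defaultdict

class Aligner:
    """Stand-in for the unsupervised alignment module."""
    def transcribe(self, wav):
        return tuple(x > 0 for x in wav)        # pseudo-text = crude unit sequence

class Synthesizer:
    """Stand-in for the synthesis module, trained on (pseudo-text, wav) pairs."""
    def __init__(self):
        self.table = defaultdict(list)
    def train(self, pairs):
        for units, wav in pairs:
            self.table[units].append(wav)
    def synthesize(self, units):
        return self.table.get(units, [[0.0]])[0]

wavs = [[0.3, -0.2], [-0.1, 0.4]]               # untranscribed speech
aligner, tts = Aligner(), Synthesizer()
tts.train((aligner.transcribe(w), w) for w in wavs)   # training: pseudo-text only
print(tts.synthesize((True, False)))            # inference: units from real text
```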
Post-training quantization (PTQ), which only requires a tiny dataset for calibration without end-to-end retraining, is a light and practical model compression technique. Recently, several PTQ schemes for vision transformers (ViTs) have been presented; unfortunately, they typically suffer from non-trivial accuracy degradation, especially in low-bit cases. In this paper, we propose RepQ-ViT, a novel PTQ framework for ViTs based on quantization scale reparameterization, to address the above issues. RepQ-ViT decouples the quantization and inference processes, where the former employs complex quantizers and the latter employs scale-reparameterized simplified quantizers. This ensures both accurate quantization and efficient inference, which distinguishes it from existing approaches that sacrifice quantization performance to meet the target hardware. More specifically, we focus on two components with extreme distributions: post-LayerNorm activations with severe inter-channel variation and post-Softmax activations with power-law features, and initially apply channel-wise quantization and log$\sqrt{2}$ quantization, respectively. Then, we reparameterize the scales to hardware-friendly layer-wise quantization and log2 quantization for inference, at only a slight cost in accuracy or computation. Extensive experiments are conducted on multiple vision tasks with different model variants, proving that RepQ-ViT, without hyperparameters and expensive reconstruction procedures, can outperform existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ of ViTs to a usable level.
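The channel-to-layer scale reparameterization admits a compact numeric illustration. The sketch below is ours, under our own assumptions: symmetric 8-bit scales and a shared scale taken as the mean of the per-channel scales, which need not match the paper's choice. It shows why folding the per-channel scale ratio into the LayerNorm affine parameters, with compensation in the next linear layer, leaves the floating-point function unchanged while allowing a single layer-wise scale at inference.

```python
import torch

torch.manual_seed(0)
B, C, D = 32, 8, 16
pre = torch.randn(B, C)                          # pre-LayerNorm activations
gamma, beta = torch.rand(C) + 0.5, torch.randn(C)
W, b = torch.randn(D, C), torch.randn(D)         # the following linear layer

def layernorm(x, g, bb):
    mu = x.mean(-1, keepdim=True)
    var = x.var(-1, unbiased=False, keepdim=True)
    return (x - mu) / torch.sqrt(var + 1e-5) * g + bb

x = layernorm(pre, gamma, beta)                  # severe inter-channel variation
s = x.abs().amax(0) / 127.0                      # accurate per-channel scales
r = s / s.mean()                                 # ratio to a shared layer-wise scale

# Fold 1/r into the LN affine parameters and r into the next layer's weights.
y_ref = layernorm(pre, gamma, beta) @ W.t() + b
y_rep = layernorm(pre, gamma / r, beta / r) @ (W * r).t() + b
print(torch.allclose(y_ref, y_rep, atol=1e-4))   # True: the function is preserved
```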
With the demand for standardized large-scale livestock farming and the development of artificial intelligence technology, much research on animal face recognition has been carried out on pigs, cattle, sheep, and other livestock. Face recognition consists of three sub-tasks: face detection, face normalization, and face identification. Most animal face recognition studies focus on face detection and face identification. Animals are often uncooperative when being photographed, so the collected animal face images are often oriented in arbitrary directions. The use of such non-standard images may significantly reduce the performance of a face recognition system. However, there has been no study on normalizing animal face images with arbitrary orientations. In this study, we developed a light-weight angle detection and region-based convolutional network (LAD-RCNN) containing a new rotation angle coding method that can detect the rotation angle and the location of the animal face in one stage. LAD-RCNN runs at 72.74 FPS (including all steps) on a single GeForce RTX 2080 Ti GPU. LAD-RCNN has been evaluated on multiple datasets, including a goat dataset and goat infrared images. Evaluation results show that the AP of face detection was more than 95% and the deviation between the detected rotation angle and the ground-truth rotation angle was less than 0.036 (i.e., 6.48°) on all test datasets. This shows that LAD-RCNN performs excellently on livestock face and orientation detection, and is therefore well suited for livestock face detection and normalization. Code is available at https://github.com/SheepBreedingLab-HZAU/LAD-RCNN/
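The abstract does not detail the paper's rotation angle coding. Purely as an illustration of the underlying problem, one standard way to regress an angle without a wrap-around discontinuity (where 350° and -10° would otherwise be far apart as regression targets) is to predict (sin θ, cos θ) and decode with atan2; this is a generic technique, not necessarily LAD-RCNN's coding.

```python
import math

def encode_angle(theta_deg: float) -> tuple[float, float]:
    """Continuous 2-D target for an arbitrary rotation angle."""
    t = math.radians(theta_deg)
    return math.sin(t), math.cos(t)

def decode_angle(s: float, c: float) -> float:
    """Recover the angle in [0, 360) from the predicted pair."""
    return math.degrees(math.atan2(s, c)) % 360.0

s, c = encode_angle(350.0)
print(round(decode_angle(s, c), 1))   # 350.0; 350 deg and -10 deg share one target
```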
Data-free quantization can potentially address data privacy and security concerns in model compression and has therefore been widely investigated. Recently, PSAQ-ViT designed a relative value metric, patch similarity, to generate data from pre-trained vision transformers (ViTs), achieving the first attempt at data-free quantization for ViTs. In this paper, we propose PSAQ-ViT V2, a more accurate and general data-free quantization framework for ViTs built on top of PSAQ-ViT. More specifically, following the patch similarity metric in PSAQ-ViT, we introduce an adaptive teacher-student strategy, which facilitates the constant cyclic evolution of the generated samples and the quantized model (student) in a competitive and interactive fashion under the supervision of the full-precision model (teacher), thus significantly improving the accuracy of the quantized model. Moreover, without any auxiliary category guidance, we employ task- and model-independent prior information, making the general-purpose scheme compatible with a broad range of vision tasks and models. Extensive experiments are conducted on various models for image classification, object detection, and semantic segmentation tasks, and PSAQ-ViT V2, with a naive quantization strategy and without access to real-world data, consistently achieves competitive results, showing potential as a powerful baseline for data-free quantization of ViTs. For example, with Swin-S as the (backbone) model, 8-bit quantization reaches 82.13 top-1 accuracy on ImageNet, 50.9 box AP and 44.1 mask AP on COCO, and 47.2 mIoU on ADE20K. We hope that the accurate and general PSAQ-ViT V2 can serve as a potential and practical solution in real-world applications involving sensitive data. The code will be released and merged at: https://github.com/zkkli/psaq-vit.
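The adaptive teacher-student strategy can be pictured as a two-step min-max loop: samples evolve to expose teacher-student disagreement, and the student then updates to close it. The toy sketch below is our own illustration only; the MSE discrepancy, shapes, and optimizers are placeholders, and an ordinary network stands in for the quantized student.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
student = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
for p in teacher.parameters():
    p.requires_grad_(False)                      # full-precision teacher is frozen

samples = torch.randn(64, 16, requires_grad=True)    # "generated" calibration data
opt_x = torch.optim.Adam([samples], lr=1e-2)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    # (i) evolve the samples to EXPOSE teacher-student disagreement ...
    loss_x = -nn.functional.mse_loss(student(samples), teacher(samples))
    opt_x.zero_grad(); loss_x.backward(); opt_x.step()
    # (ii) ... then update the student (stand-in for the quantized model) to close it.
    loss_s = nn.functional.mse_loss(student(samples), teacher(samples))
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```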
Vision transformers have recently achieved great success on various computer vision tasks. However, their high model complexity makes deployment on resource-constrained devices challenging. Quantization is an effective approach to reducing model complexity, and data-free quantization, which can address data privacy and security concerns during model deployment, has received wide interest. Unfortunately, all existing methods, such as BN regularization, were designed for convolutional neural networks and cannot be applied to vision transformers, whose model architectures differ significantly. In this paper, we propose PSAQ-ViT, a patch-similarity-aware data-free quantization framework for vision transformers, which generates "realistic" samples based on the unique properties of vision transformers to calibrate the quantization parameters. Specifically, we analyze the properties of the self-attention module and reveal a general difference (patch similarity) in its processing of Gaussian noise versus real images. This insight guides us to design a relative value metric to optimize the Gaussian noise to approximate real images, which are then used to calibrate the quantization parameters. Extensive experiments and ablation studies are conducted on various benchmarks to validate the effectiveness of PSAQ-ViT, which can even outperform real-data-driven methods.
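As a rough illustration of a patch-similarity objective, the sketch below scores how diverse the pairwise cosine similarities of patch features are, using a differentiable kernel-density entropy estimate so the input can be optimized by gradient ascent. This is our simplification for intuition, not the paper's exact metric.

```python
import torch

def patch_similarity_entropy(feats: torch.Tensor, bw: float = 0.05) -> torch.Tensor:
    """feats: (num_patches, dim) features taken from a self-attention block.
    Returns a resubstitution entropy estimate of the pairwise cosine
    similarities; real images tend to yield more diverse similarities
    (foreground vs. background) than Gaussian noise."""
    f = torch.nn.functional.normalize(feats, dim=-1)
    sim = f @ f.t()                                        # pairwise cosine similarity
    iu = torch.triu_indices(sim.size(0), sim.size(0), offset=1)
    v = sim[iu[0], iu[1]]                                  # unique pairs only
    d = (v.unsqueeze(0) - v.unsqueeze(1)) / bw
    dens = torch.exp(-0.5 * d * d).mean(dim=1) / (bw * (2 * torch.pi) ** 0.5)
    return -(dens + 1e-12).log().mean()

feats = torch.randn(64, 32, requires_grad=True)            # stand-in patch features
(-patch_similarity_entropy(feats)).backward()               # ascend entropy on noise
```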
A storyboard is a roadmap for video creation which consists of shot-by-shot images to visualize key plots in a text synopsis. Creating video storyboards, however, remains challenging: it not only requires associating high-level texts with images, but also demands long-term reasoning to make transitions smooth across shots. In this paper, we propose a new task called Text synopsis to Video Storyboard (TeViS), which aims to retrieve an ordered sequence of images to visualize a text synopsis. We construct a MovieNet-TeViS benchmark based on the public MovieNet dataset. It contains 10K text synopses, each paired with keyframes that are manually selected from the corresponding movies by considering both relevance and cinematic coherence. We also present an encoder-decoder baseline for the task. The model uses a pretrained vision-and-language model to improve high-level text-image matching. To improve coherence across long-term shots, we further propose to pre-train the decoder on large-scale movie frames without text. Experimental results demonstrate that our proposed model significantly outperforms other models in creating text-relevant and coherent storyboards. Nevertheless, there is still a large gap compared to human performance, suggesting room for promising future work.
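For intuition about the sequence-retrieval setting, here is a deliberately simple greedy baseline of our own, not the paper's encoder-decoder: each step trades off relevance to a synopsis embedding against visual coherence with the previously chosen frame.

```python
import torch

def greedy_storyboard(text_emb, frame_embs, k=5, alpha=0.5):
    """Pick an ordered sequence of k frames: relevance to the synopsis
    plus similarity to the previous pick (a crude coherence term)."""
    text_emb = torch.nn.functional.normalize(text_emb, dim=-1)
    frames = torch.nn.functional.normalize(frame_embs, dim=-1)
    chosen, prev = [], None
    avail = torch.ones(frames.size(0), dtype=torch.bool)
    for _ in range(k):
        score = frames @ text_emb                       # text-image relevance
        if prev is not None:
            score = alpha * score + (1 - alpha) * (frames @ frames[prev])
        score[~avail] = -1e9                            # mask frames already used
        prev = int(score.argmax())
        avail[prev] = False
        chosen.append(prev)
    return chosen

print(greedy_storyboard(torch.randn(128), torch.randn(100, 128)))
```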
Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, so the analysis of the robustness of c-MARL models is profoundly important. However, robustness certification for c-MARL has not yet been explored in the community. In this paper, we propose a novel certification method, which is the first work to leverage a scalable approach for c-MARL to determine actions with guaranteed certified bounds. c-MARL certification poses two key challenges compared with single-agent systems: (i) the accumulated uncertainty as the number of agents increases; (ii) the potentially limited impact that changing a single agent's action has on the global team reward. These challenges prevent us from directly using existing algorithms. Hence, we employ the false discovery rate (FDR) controlling procedure, taking the importance of each agent into account, to certify per-state robustness, and propose a tree-search-based algorithm to find a lower bound of the global reward under the minimal certified perturbation. As our method is general, it can also be applied in single-agent environments. We empirically show that our certification bounds are much tighter than state-of-the-art RL certification solutions. We also run experiments on two popular c-MARL algorithms, QMIX and VDN, in two different environments, with two and four agents. The experimental results show that our method produces meaningful guaranteed robustness for all models and environments. Our tool CertifyCMARL is available at https://github.com/TrustAI/CertifyCMA
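The FDR-controlling step can be illustrated with the standard Benjamini-Hochberg procedure. The sketch below is generic: the per-agent p-values are placeholders, and the agent-importance weighting the paper adds on top is not modeled here.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Classic BH procedure: reject the k smallest p-values, where k is the
    largest index with p_(k) <= alpha * k / m, controlling the FDR at alpha."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True        # e.g., agents whose certificates hold
    return rejected

print(benjamini_hochberg([0.001, 0.04, 0.03, 0.20]))   # [ True False False False]
```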
We present Hybrid Infused Reranking for Passages Retrieval (HYRR), a framework for training rerankers based on a hybrid of BM25 and neural retrieval models. Retrievers based on hybrid models have been shown to outperform both BM25 and neural models alone. Our approach exploits this improved performance when training a reranker, leading to a robust reranking model. The reranker, a cross-attention neural model, is shown to be robust to different first-stage retrieval systems, achieving better performance than rerankers simply trained upon the first-stage retrievers in the multi-stage systems. We present evaluations on a supervised passage retrieval task using MS MARCO and zero-shot retrieval tasks using BEIR. The empirical results show strong performance on both evaluations.
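A minimal sketch of a hybrid first stage of the kind described above: per-query min-max normalization of BM25 and dense retriever scores, then a weighted mix, with the top candidates feeding reranker training. The normalization scheme and the even mixing weight are our assumptions, not necessarily HYRR's.

```python
import numpy as np

def hybrid_topk(bm25_scores, dense_scores, k=10, w=0.5):
    """Fuse sparse and dense scores for one query; return top-k passage indices."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
    fused = w * norm(bm25_scores) + (1 - w) * norm(dense_scores)
    return np.argsort(-fused)[:k]     # candidates used as reranker training data

print(hybrid_topk([12.1, 3.4, 9.9, 0.2], [0.71, 0.65, 0.40, 0.90], k=2))
```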
Drug-Drug Interaction (DDI) prediction is an essential problem in the molecular field. Traditional methods of observing DDIs in medical experiments require substantial resources and labor. In this paper, we present a computational model dubbed MedKGQA, based on Graph Neural Networks, that automatically predicts DDIs after reading multiple medical documents, framed as multi-hop machine reading comprehension. We introduce a knowledge fusion system to obtain the complete properties of drugs and proteins, and exploit a graph reasoning system to infer the drugs and proteins contained in the documents. Our model significantly improves performance compared to previous state-of-the-art models on the QANGAROO MedHop dataset, achieving a 4.5% improvement in DDI prediction accuracy.
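As a rough picture of graph reasoning over drug and protein entities, the sketch below runs one message-passing step over an entity graph. The dimensions, adjacency, and GRU update are illustrative choices of ours; MedKGQA's actual reasoning module is more elaborate.

```python
import torch
import torch.nn as nn

class EntityGraphLayer(nn.Module):
    """One mean-aggregation message-passing step with a GRU node update."""
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_entities, dim) node states; adj: (num_entities, num_entities) 0/1
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        m = adj @ self.msg(h) / deg            # mean message from neighbors
        return self.upd(m, h)                  # GRU-style node state update

h = torch.randn(5, 32)                          # drug/protein node states
adj = (torch.rand(5, 5) > 0.5).float()          # illustrative interaction graph
print(EntityGraphLayer(32)(h, adj).shape)       # torch.Size([5, 32])
```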