Self-supervised learning (SSL) of rich speech representations has been a popular approach to building automatic speech recognition (ASR) systems in low-resource settings. However, a common assumption made in the literature is that a large amount of unlabeled data from the same domain or language is available for SSL pre-training, which, we acknowledge, is not feasible in real-world settings. In this paper, as part of the Interspeech Gram Vaani ASR challenge, we study the effect of the domain, language, dataset size, and other aspects of the upstream SSL pre-training data on the final performance of the downstream ASR task. We also build on the continued pre-training paradigm to study the effect of the prior knowledge possessed by SSL-trained models. Extensive experiments and studies reveal that the performance of ASR systems is susceptible to the data used for SSL pre-training, and that performance improves with an increase in the similarity and volume of pre-training data. We believe our work will help the speech community build better ASR systems in low-resource settings and steer research toward improving the generalization of SSL-based pre-training for speech systems.
While self-supervised speech representation learning (SSL) models serve a variety of downstream tasks, it has been observed that these models overfit to the domain from which the unlabeled data originates. To alleviate this issue, we propose PADA (Pruning Assisted Domain Adaptation) and zero out redundant weights from models pre-trained on large amounts of out-of-domain (OOD) data. Intuitively, this helps make room for the target-domain ASR fine-tuning. The redundant weights can be identified through various pruning strategies, which are discussed in detail as part of this work. Specifically, we investigate the effect of the recently discovered Task-Agnostic and Task-Aware pruning on PADA, and propose a new pruning paradigm based on the latter, which we call Cross-Domain Task-Aware Pruning (CD-TAW). CD-TAW obtains the initial pruning mask from a well fine-tuned OOD model, which makes it starkly different from the rest of the pruning strategies discussed in this paper. Our proposed CD-TAW method achieves up to 20.6% relative improvement over the baseline when fine-tuned on a 2-hour subset without language model (LM) decoding. Furthermore, we conduct a detailed analysis to highlight the key design choices of the proposed method.
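The core mechanism above is transferring a pruning mask from a fine-tuned OOD model onto the pre-trained SSL model before target-domain fine-tuning. The abstract does not give implementation details, so the sketch below is only an illustration of that idea: it assumes matching parameter names between the two models, uses plain magnitude pruning, and picks an arbitrary sparsity level.

```python
# Illustrative sketch of the CD-TAW idea (not the authors' exact recipe):
# derive a magnitude-based mask from a fine-tuned out-of-domain model and
# use it to zero out the corresponding weights of the pre-trained model,
# freeing capacity that is then re-learned on target-domain data.
import torch

def magnitude_mask(tensor: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    k = int(tensor.numel() * sparsity)
    if k == 0:
        return torch.ones_like(tensor, dtype=torch.bool)
    threshold = tensor.abs().flatten().kthvalue(k).values
    return tensor.abs() > threshold

def cd_taw_prune(pretrained: torch.nn.Module,
                 finetuned_ood: torch.nn.Module,
                 sparsity: float = 0.3) -> torch.nn.Module:
    """Apply masks derived from the fine-tuned OOD model to the pre-trained model."""
    ood_params = dict(finetuned_ood.named_parameters())
    with torch.no_grad():
        for name, param in pretrained.named_parameters():
            if name in ood_params and param.dim() > 1:  # prune weight matrices only
                mask = magnitude_mask(ood_params[name], sparsity)
                param.mul_(mask.to(param.dtype))
    return pretrained
```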
Automatic segmentation of skin lesions is a challenging task due to irregular lesion boundaries, poor contrast between the lesion and the background, and the presence of artifacts. In this work, a new convolutional neural network-based approach is proposed for skin lesion segmentation. A novel multi-scale feature extraction module is introduced to extract more discriminative features and handle the challenges associated with complex skin lesions; this module is embedded in a UNet, replacing the convolutional layers of the standard architecture. In addition, two different attention mechanisms refine the features extracted by the encoder and the post-upsampled features. The method is evaluated on two publicly available datasets, ISBI2017 and ISIC2018. It reports accuracy, recall, and JSI of 97.5%, 94.29%, and 91.16% on the ISBI2017 dataset, and 95.92%, 95.37%, and 91.52% on the ISIC2018 dataset. It outperforms existing methods and the top-ranked models of the respective challenges.
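The abstract does not describe the internals of the multi-scale feature extraction module, so the following is only a minimal sketch of what such a block could look like: parallel 3x3 convolutions with different dilation rates whose outputs are concatenated and fused. Kernel sizes, dilation rates, and the fusion step are assumptions made for illustration.

```python
# A hypothetical multi-scale block of the kind that could replace the
# standard convolutions in a UNet stage, e.g. MultiScaleBlock(64, 128)
# instead of two plain 3x3 Conv2d layers.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Fuse the concatenated multi-scale responses back to out_ch channels.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```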
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we have a poor understanding of its global structure and evolution, and of the mechanisms behind its main activity processes: magnetic storms and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions, will need to be developed to meet this Sparse Data challenge.
The availability of frequent and cost-free satellite images is in growing demand in the research world. Satellite constellations such as Landsat 8 and Sentinel-2 provide a massive amount of valuable data daily. However, the discrepancy between these satellites' sensor characteristics makes it impractical to apply a segmentation model trained on one dataset to the other, which is why domain adaptation techniques have recently become an active research area in remote sensing. In this paper, a domain adaptation experiment based on style transfer is conducted using the HRSemI2I model to narrow the sensor discrepancy between Landsat 8 and Sentinel-2. This paper's main contribution is analyzing the expediency of that approach by comparing segmentation results obtained with domain-adapted images against those obtained without adaptation. The HRSemI2I model, adjusted to work with 6-band imagery, shows significant intersection-over-union performance improvement for both mean and per-class metrics. A second contribution is providing different schemes of generalization between two label schemes, NALCMS 2015 and CORINE: the first scheme is standardization through higher-level land cover classes, and the second is through harmonization validation in the field.
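The comparison above is reported in per-class and mean intersection-over-union (IoU). As a brief illustration of what those metrics measure, here is a minimal sketch of how they are typically computed from integer-labeled prediction and reference maps; class handling and array shapes are illustrative assumptions, not this paper's evaluation code.

```python
# Per-class and mean IoU for segmentation maps encoded as integer class IDs.
import numpy as np

def iou_scores(pred: np.ndarray, target: np.ndarray, num_classes: int):
    """Return (per_class_iou, mean_iou); classes absent from both maps are skipped."""
    per_class = []
    for c in range(num_classes):
        pred_c, target_c = (pred == c), (target == c)
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue
        inter = np.logical_and(pred_c, target_c).sum()
        per_class.append(inter / union)
    return per_class, float(np.mean(per_class))
```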
Self-training (ST) has prospered again in language understanding by augmenting the fine-tuning of pre-trained language models when labeled data is insufficient. However, it remains challenging to incorporate ST into attribute-controllable language generation. Augmented by only self-generated pseudo text, generation models over-emphasize exploitation of the previously learned space, suffering from a constrained generalization boundary. We revisit ST and propose a novel method, DuNST, to alleviate this problem. DuNST jointly models text generation and classification with a shared Variational AutoEncoder and corrupts the generated pseudo text with two kinds of flexible noise to disturb the space. In this way, our model can construct and utilize both pseudo text from given labels and pseudo labels from available unlabeled text, which are gradually refined during the ST process. We theoretically demonstrate that DuNST can be regarded as enhancing exploration towards the potential real text space, providing a guarantee of improved performance. Experiments on three controllable generation tasks show that DuNST significantly boosts control accuracy while maintaining generation fluency and diversity comparable to several strong baselines.
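The abstract states that the self-generated pseudo text is corrupted by two kinds of flexible noise before being fed back into self-training, but does not describe the noise types. Purely as an illustration of that step, the sketch below applies a generic token-level corruption (random masking and random replacement); the noise rates, mask token, and vocabulary are assumptions, not DuNST's actual noise design.

```python
# Generic pseudo-text corruption for self-training augmentation (illustrative).
import random

def corrupt_tokens(tokens, vocab, mask_token="<mask>",
                   mask_rate=0.1, replace_rate=0.1, seed=None):
    """Return a noised copy of `tokens`."""
    rng = random.Random(seed)
    noised = []
    for tok in tokens:
        r = rng.random()
        if r < mask_rate:
            noised.append(mask_token)          # hide the token
        elif r < mask_rate + replace_rate:
            noised.append(rng.choice(vocab))   # swap in a random token
        else:
            noised.append(tok)                 # keep as-is
    return noised

# Example: corrupt_tokens("the movie was great".split(), vocab=["good", "bad", "film"])
```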
Of late, insurance fraud detection has assumed immense significance owing to the huge financial and reputational losses fraud entails and the phenomenal success of fraud detection techniques. Insurance is broadly divided into two categories: (i) life and (ii) non-life. Non-life insurance in turn includes health insurance and auto insurance, among others. In either category, fraud detection techniques should be designed in such a way that they capture as many fraudulent transactions as possible. Owing to the rarity of fraudulent transactions, in this paper, we propose a chaotic variational autoencoder (C-VAE) to perform one-class classification (OCC) on genuine transactions. Here, we employed the logistic chaotic map to generate random noise in the latent space. The effectiveness of C-VAE is demonstrated on health insurance fraud and auto insurance datasets. We considered the vanilla Variational AutoEncoder (VAE) as the baseline. It is observed that C-VAE outperformed VAE on both datasets, achieving classification rates of 77.9% and 87.25% on the health and automobile insurance datasets respectively. Further, a t-test conducted at the 1% level of significance with 18 degrees of freedom indicates that C-VAE's improvement over VAE is statistically significant.
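The distinctive ingredient above is using the logistic chaotic map, rather than a Gaussian sample, as the source of latent-space noise. A minimal sketch of that map is shown below; the growth rate r, the initial value, and how the sequence is rescaled and combined with the latent code are assumptions for illustration, not the paper's exact configuration.

```python
# Logistic chaotic map: x_{k+1} = r * x_k * (1 - x_k), producing values in (0, 1).
import numpy as np

def logistic_map_noise(n: int, r: float = 3.99, x0: float = 0.7) -> np.ndarray:
    """Generate n chaotic values via repeated application of the logistic map."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1.0 - x)
        seq[k] = x
    return seq

# In a C-VAE-style reparameterization, the chaotic sequence could stand in for
# the Gaussian sample after centering and scaling, e.g.:
#   eps = (logistic_map_noise(latent_dim) - 0.5) * 2.0
#   z = mu + sigma * eps
```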
Although machine learning based algorithms have been extensively used for detecting phishing websites, there has been relatively little work on how adversaries may attack such "phishing detectors" (PDs for short). In this paper, we propose a set of Gray-Box attacks on PDs that an adversary may use, which vary depending on the adversary's knowledge of the PD. We show that these attacks severely degrade the effectiveness of several existing PDs. We then propose the concept of operation chains that iteratively map an original set of features to a new set of features, and develop the "Protective Operation Chain" (POC for short) algorithm. POC leverages the combination of random feature selection and feature mappings in order to increase the attacker's uncertainty about the target PD. Using 3 existing publicly available datasets plus a fourth that we have created and will release upon publication of this paper, we show that POC is more robust to these attacks than past competing work, while preserving predictive performance when no adversarial attacks are present. Moreover, POC is robust to attacks on 13 different classifiers, not just one. These results are shown to be statistically significant at the p < 0.001 level.
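To make the "operation chain" idea concrete, here is a minimal sketch under stated assumptions: a random subset of the original features is selected and passed through a chain of random feature mappings, and the downstream classifier is trained on the transformed representation so an attacker cannot easily infer which raw features matter. The chain length, the use of random linear maps with a tanh nonlinearity, and the function names are illustrative assumptions, not the POC algorithm's exact design.

```python
# Hypothetical operation-chain feature transform (illustrative sketch).
import numpy as np

def build_operation_chain(n_features: int, n_selected: int, chain_len: int = 3, rng=None):
    rng = rng or np.random.default_rng()
    selected = rng.choice(n_features, size=n_selected, replace=False)
    # Each step is a random linear mapping; chaining them composes the maps.
    maps = [rng.normal(size=(n_selected, n_selected)) for _ in range(chain_len)]

    def transform(X: np.ndarray) -> np.ndarray:
        Z = X[:, selected]
        for M in maps:
            Z = np.tanh(Z @ M)  # nonlinearity makes the chain hard to invert
        return Z

    return transform

# Usage: fit any downstream classifier on transform(X_train) instead of X_train,
# and apply the same transform at prediction time.
```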
The findable, accessible, interoperable, and reusable (FAIR) data principles have provided a framework for examining, evaluating, and improving how we share data with the aim of facilitating scientific discovery. Efforts have been made to generalize these principles to research software and other digital products. Artificial intelligence (AI) models -- algorithms that have been trained on data rather than explicitly programmed -- are an important target for this because of the ever-increasing pace with which AI is transforming scientific and engineering domains. In this paper, we propose a practical definition of FAIR principles for AI models and create a FAIR AI project template that promotes adherence to these principles. We demonstrate how to implement these principles using a concrete example from experimental high energy physics: a graph neural network for identifying Higgs bosons decaying to bottom quarks. We study the robustness of these FAIR AI models and their portability across hardware architectures and software frameworks, and report new insights on the interpretability of AI predictions by studying the interplay between FAIR datasets and AI models. Enabled by publishing FAIR AI models, these studies pave the way toward reliable and automated AI-driven scientific discovery.