Emotional speech synthesis aims to endow human voices with a variety of emotional effects. Current research mostly focuses on imitating an averaged style belonging to a specific emotion type. In this paper, we seek to generate speech with a mixture of emotions at run-time. We propose a novel formulation that measures the relative difference between speech samples of different emotions. We then incorporate this formulation into a sequence-to-sequence emotional text-to-speech framework. During training, the framework not only explicitly characterises emotion styles, but also explores the ordinal nature of emotions by quantifying the differences with respect to other emotions. At run-time, we control the model to produce the desired mixture of emotions by manually defining an emotion attribute vector. Objective and subjective evaluations validate the effectiveness of the proposed framework. To the best of our knowledge, this is the first study on modelling, synthesising, and evaluating mixed emotions in speech.
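As a rough illustration of the run-time control described above, an emotion attribute vector could be assembled as follows. This is a minimal sketch only; the emotion inventory and the model interface are assumptions for illustration, not the paper's actual API.

```python
import numpy as np

# Hypothetical emotion ordering; the actual attribute set used by the
# framework is not specified here.
EMOTIONS = ["neutral", "happy", "sad", "angry", "surprise"]

def make_attribute_vector(weights: dict) -> np.ndarray:
    """Build a normalised emotion attribute vector from per-emotion weights."""
    vec = np.array([weights.get(e, 0.0) for e in EMOTIONS], dtype=np.float32)
    total = vec.sum()
    return vec / total if total > 0 else vec

# e.g. a blend that is mostly happy with a touch of surprise
attr = make_attribute_vector({"happy": 0.7, "surprise": 0.3})
# attr would then condition the sequence-to-sequence TTS decoder, e.g.
# mel = tts_model.synthesize(text, emotion_attributes=attr)  # hypothetical call
```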
More than two years after its outbreak, the COVID-19 pandemic continues to plague medical systems around the world, straining scarce resources and claiming human lives. From the very beginning, various AI-based tools for COVID-19 detection and monitoring have been employed in an attempt to stem the tide of infections through timely diagnosis. In particular, computer audition has been suggested as a non-invasive, cost-efficient, and eco-friendly alternative for detecting COVID-19 infections through vocal sounds. However, like all AI methods, computer audition depends heavily on the quantity and quality of available data, and large-scale COVID-19 sound datasets are difficult to acquire, amongst other reasons, due to the sensitive nature of such data. To that end, we introduce the COVYT dataset, a novel COVID-19 dataset collected from public sources containing more than 8 hours of speech from 65 speakers. Compared to other existing COVID-19 sound datasets, the unique feature of the COVYT dataset is that it includes both COVID-19-positive and COVID-19-negative samples from all 65 speakers. We analyse the acoustic manifestations of COVID-19 using interpretable audio descriptors and investigate several classification scenarios, with particular attention to fair data-splitting strategies for speech-based COVID-19 detection.
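One common way to realise such fair, speaker-aware splits is to keep all recordings of a speaker in a single partition. The sketch below assumes a simple pandas metadata table; the column names are illustrative and not the released dataset's schema.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Illustrative metadata table (column names are assumptions, not the COVYT schema).
meta = pd.DataFrame({
    "file":    ["s01_pos.wav", "s01_neg.wav", "s02_pos.wav", "s02_neg.wav"],
    "speaker": ["s01", "s01", "s02", "s02"],
    "label":   [1, 0, 1, 0],
})

# Keep all recordings of a speaker in the same partition, so a classifier
# cannot exploit speaker identity instead of COVID-19 status.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=0)
train_idx, test_idx = next(splitter.split(meta, meta["label"], groups=meta["speaker"]))
train, test = meta.iloc[train_idx], meta.iloc[test_idx]
```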
Charisma is considered as one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence's (AI) perspective to provide it with such skill. Beyond that, a plethora of use cases opens up for the computational measurement of human charisma, such as tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels of these dimensions for humanoid robots or virtual agents seems accomplishable. Moreover, automatic measurement appears quite feasible given the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can both appear charismatic and analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
Telling stories is an integral part of human communication which can evoke emotions and influence the affective states of the audience. Automatically modelling emotional trajectories in stories has thus attracted considerable scholarly interest. However, as most existing works have been limited to unsupervised dictionary-based approaches, there is no labelled benchmark for this task. We address this gap by introducing continuous valence and arousal annotations for an existing dataset of children's stories annotated with discrete emotion categories. We collect additional annotations for this data and map the originally categorical labels to the valence and arousal space. Leveraging recent advances in Natural Language Processing, we propose a set of novel Transformer-based methods for predicting valence and arousal signals over the course of written stories. We explore several strategies for fine-tuning a pretrained ELECTRA model and study the benefits of considering a sentence's context when inferring its emotionality. Moreover, we experiment with additional LSTM and Transformer layers. The best configuration achieves a Concordance Correlation Coefficient (CCC) of .7338 for valence and .6302 for arousal on the test set, demonstrating the suitability of our proposed approach. Our code and additional annotations are made available at https://github.com/lc0197/emotion_modelling_stories.
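For reference, the Concordance Correlation Coefficient used as the evaluation metric above can be computed from its standard definition; a minimal numpy sketch follows.

```python
import numpy as np

def ccc(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Concordance Correlation Coefficient between two 1-D signals."""
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

# e.g. ccc(valence_gold, valence_predicted) -> value in [-1, 1]
```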
Recent work has reported that AI classifiers trained on audio recordings can accurately predict severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection status. Here, we undertake a large-scale study of audio-based deep learning classifiers as part of the UK government's pandemic response. We collect and analyse a dataset of audio recordings from 67,842 individuals with linked metadata, including reverse transcription polymerase chain reaction (PCR) test outcomes, of whom 23,514 tested positive for SARS-CoV-2. Subjects were recruited via the UK government's National Health Service Test-and-Trace programme and the REal-time Assessment of Community Transmission (REACT) randomised surveillance survey. In an unadjusted analysis of our dataset, AI classifiers predict SARS-CoV-2 infection status with high accuracy (Receiver Operating Characteristic Area Under the Curve (ROC-AUC) 0.846 [0.838, 0.854]), consistent with the findings of previous studies. However, after matching on measured confounders, such as age, gender, and self-reported symptoms, our classifiers' performance is much weaker (ROC-AUC 0.619 [0.594, 0.644]). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by simple predictive scores based on user-reported symptoms.
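The confounder-matched evaluation referred to above can be schematised as pairing each positive case with a negative case that shares the measured confounders and scoring only the matched subset. This is a sketch under assumed column names and a simplified exact-matching procedure, not the study's actual protocol.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def matched_roc_auc(df: pd.DataFrame) -> float:
    """Pair each positive with a negative sharing age band, gender and
    self-reported symptom status, then score only the matched subset.
    Columns (score, label, age_band, gender, symptomatic) are illustrative."""
    matched_rows = []
    pool = df[df.label == 0].copy()
    for _, pos in df[df.label == 1].iterrows():
        candidates = pool[(pool.age_band == pos.age_band) &
                          (pool.gender == pos.gender) &
                          (pool.symptomatic == pos.symptomatic)]
        if len(candidates):
            neg = candidates.iloc[0]
            matched_rows.extend([pos, neg])
            pool = pool.drop(neg.name)  # sample negatives without replacement
    matched = pd.DataFrame(matched_rows)
    return roc_auc_score(matched.label, matched.score)
```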
Since early in the coronavirus disease 2019 (COVID-19) pandemic, there has been interest in using artificial intelligence methods to predict COVID-19 infection status based on vocal audio signals, for example cough recordings. However, existing studies have limitations in terms of data collection and the assessment of the performance of the proposed predictive models. This paper rigorously assesses state-of-the-art machine learning techniques used to predict COVID-19 infection status based on vocal audio signals, using a dataset collected by the UK Health Security Agency. This dataset includes acoustic recordings and extensive study participant meta-data. We provide guidelines on testing the performance of methods to classify COVID-19 infection status based on acoustic features, and we discuss how these can be extended more generally to the development and assessment of predictive methods based on public health datasets.
The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up to help beat coronavirus' digital survey alongside demographic, self-reported symptom and respiratory condition data, and linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,794 of 72,999 participants and 24,155 of 25,776 positive cases. Respiratory symptoms were reported by 45.62% of participants. This dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma and 27.20% having linked influenza PCR test results.
Segmentation of regions of interest (ROIs) for identifying abnormalities is a leading problem in medical imaging. Using Machine Learning (ML) for this problem generally requires manually annotated ground-truth segmentations, demanding extensive time and resources from radiologists. This work presents a novel weakly supervised approach that utilizes binary image-level labels, which are much simpler to acquire, to effectively segment anomalies in medical Magnetic Resonance (MR) images without ground-truth annotations. We train a binary classifier using these labels and use it to derive seeds indicating regions likely and unlikely to contain tumors. These seeds are used to train a generative adversarial network (GAN) that converts cancerous images to healthy variants, which are then used in conjunction with the seeds to train an ML model that generates effective segmentations. This method produces segmentations that achieve Dice coefficients of 0.7903, 0.7868, and 0.7712 on the MICCAI Brain Tumor Segmentation (BraTS) 2020 dataset for the training, validation, and test cohorts, respectively. We also propose a weakly supervised means of filtering the segmentations, removing a small subset of poorer segmentations to acquire a large subset of high-quality segmentations. The proposed filtering further improves the Dice coefficients to 0.8374, 0.8232, and 0.8136 for training, validation, and test, respectively.
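For reference, the Dice coefficient reported above measures the overlap between a predicted binary mask and the ground-truth mask; a minimal numpy sketch is shown below.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```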
The Barlow Twins self-supervised learning objective requires neither negative samples nor asymmetric learning updates, yet achieves results on par with the current state of the art in computer vision. We therefore propose Audio Barlow Twins, a novel self-supervised audio representation learning approach that adapts Barlow Twins to the audio domain. We pre-train on the large-scale audio dataset AudioSet and evaluate the quality of the learnt representations on 18 tasks from the HEAR 2021 Challenge, achieving results that outperform, or are otherwise on par with, the current state of the art for instance-discrimination self-supervised learning approaches to audio representation learning. Code is available at https://github.com/jonahanton/ssl_audio.
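The Barlow Twins objective referred to above drives the cross-correlation matrix between the embeddings of two augmented views towards the identity. The PyTorch sketch below follows the standard Barlow Twins loss formulation; the lambda value and normalisation are assumptions and not necessarily the exact Audio Barlow Twins configuration.

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor, lambd: float = 5e-3) -> torch.Tensor:
    """Barlow Twins loss for two batches of embeddings of shape (batch, dim)."""
    n, d = z1.shape
    # standardise each embedding dimension across the batch
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    # empirical cross-correlation matrix between the two views
    c = (z1.T @ z2) / n
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```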
Humour is an important element of human affect and cognition. Its automatic understanding can facilitate more natural human-device interaction and the humanisation of artificial intelligence. Current methods of humour detection are based solely on staged data, making them inadequate for "in-the-wild" applications. We address this deficiency by introducing the novel Passau Spontaneous Football Coach Humour (Passau-SFCH) dataset, comprising about 11 hours of recordings. The Passau-SFCH dataset is annotated for the presence of humour and its dimensions (sentiment and direction), as proposed in Martin's Humor Style Questionnaire. We conduct a series of experiments employing pretrained Transformers, convolutional neural networks, and expert-designed features. The performance of each modality (text, audio, video) for spontaneous humour recognition is analysed, and their complementarity is investigated. Our findings suggest that, for the automatic analysis of humour and its sentiment, facial expressions are the most promising, while humour direction can be modelled via text-based features. The results reveal differences among subjects, highlighting the individuality of humour usage and style. Furthermore, we observe that decision-level fusion yields the best recognition results. Finally, we make our code publicly available at https://www.github.com/eihw/passau-sfch. The Passau-SFCH dataset may be obtained upon request.
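The decision-level fusion that performs best above can be sketched as a simple late fusion of per-modality predictions. This is a minimal illustration under the assumption of a weighted average of probabilities; the actual weighting scheme may differ.

```python
import numpy as np

def late_fusion(prob_text: np.ndarray, prob_audio: np.ndarray,
                prob_video: np.ndarray, weights=(1/3, 1/3, 1/3)) -> np.ndarray:
    """Combine per-modality humour probabilities by weighted averaging."""
    w_t, w_a, w_v = weights
    fused = w_t * prob_text + w_a * prob_audio + w_v * prob_video
    return (fused >= 0.5).astype(int)  # final humour / no-humour decision
```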