Identifying and understanding the underlying sentiment or emotions in text is a key component of multiple natural language processing applications. While simple polarity sentiment analysis is a well-studied subject, fewer advances have been made in identifying more complex, finer-grained emotions from textual data. In this paper, we introduce a Transformer-based model with a fusion of adapter layers, which leverages the simpler sentiment analysis task to improve the emotion detection task on large-scale datasets such as CMU-MOSEI, using only the textual modality. The results show that our proposed method is competitive with other approaches. Even using only the textual modality, we obtain state-of-the-art results for emotion recognition on CMU-MOSEI.
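As a rough illustration of what a fusion of adapter layers on top of a frozen Transformer can look like, here is a minimal PyTorch sketch; the bottleneck size, the attention-style fusion, and all class names are assumptions rather than the paper's exact design:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

class AdapterFusion(nn.Module):
    """Attend over the outputs of several task adapters (e.g., one trained
    on polarity sentiment, one on emotion), using the layer input as the
    query. A sketch of the fusion idea, not the paper's exact layer."""
    def __init__(self, dim=768, n_adapters=2):
        super().__init__()
        self.adapters = nn.ModuleList(Adapter(dim) for _ in range(n_adapters))
        self.query = nn.Linear(dim, dim)

    def forward(self, h):                                         # h: (B, S, D)
        outs = torch.stack([a(h) for a in self.adapters], dim=2)  # (B, S, A, D)
        scores = torch.einsum('bsd,bsad->bsa', self.query(h), outs)
        return torch.einsum('bsa,bsad->bsd', scores.softmax(-1), outs)
```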
Multimodal learning pipelines have benefited from the success of pretrained language models. However, this comes at the cost of increased model parameters. In this work, we propose Adapted Multimodal BERT (AMB), a BERT-based architecture for multimodal tasks that uses a combination of adapter modules and intermediate fusion layers. The adapter adjusts the pretrained language model for the task at hand, while the fusion layers perform task-specific, layer-wise fusion of audio-visual information with textual BERT representations. During the adaptation process, the pre-trained language model parameters remain frozen, allowing for fast, parameter-efficient training. In our ablations we see that this approach leads to efficient models that can outperform their fine-tuned counterparts and are robust to input noise. Our experiments on sentiment analysis with CMU-MOSEI show that AMB outperforms the current state-of-the-art across metrics, with 3.4% relative reduction in the resulting error and 2.1% relative improvement in 7-class classification accuracy.
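A minimal sketch of the layer-wise fusion idea described above, assuming gated additive fusion of projected audio-visual features into the hidden states of a frozen BERT layer (the gate design and dimensions are assumptions; AMB's actual fusion layer may differ):

```python
import torch
import torch.nn as nn

class AVFusionLayer(nn.Module):
    """Gated additive fusion of projected audio-visual features into the
    textual hidden states of one (frozen) BERT layer -- a sketch only."""
    def __init__(self, text_dim=768, av_dim=128):
        super().__init__()
        self.proj = nn.Linear(av_dim, text_dim)
        self.gate = nn.Linear(2 * text_dim, text_dim)

    def forward(self, h_text, h_av):        # (B, S, text_dim), (B, S, av_dim)
        av = self.proj(h_av)
        g = torch.sigmoid(self.gate(torch.cat([h_text, av], dim=-1)))
        return h_text + g * av              # gated residual fusion

# The pretrained language model stays frozen; only adapters and fusion
# layers receive gradients:
# for p in bert.parameters():
#     p.requires_grad = False
```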
Transfer learning with deep pretrained language models, such as Bidirectional Encoder Representations from Transformers (BERT) and the Universal Sentence Encoder, has been widely used in natural language processing. Despite their great success, language models overfit when applied to small datasets and easily forget when fine-tuned with a classifier. To address this problem of forgetting when transferring a deep pretrained language model from one domain to another, existing efforts explore fine-tuning methods that reduce forgetting. We propose DeepEmotex, an effective sequential transfer learning method to detect emotion in text. To avoid the forgetting problem, the fine-tuning step is instrumented with a large amount of emotion-labeled data collected from Twitter. We conduct an experimental study using both curated Twitter datasets and benchmark datasets. DeepEmotex models achieve over 91% accuracy for multi-class emotion classification on the test dataset. We evaluate the performance of the fine-tuned DeepEmotex models in classifying emotion in the EmoInt and Stimulus benchmark datasets. The models correctly classify emotion in 73% of the instances in the benchmark datasets. The proposed DeepEmotex-BERT model outperforms a Bi-LSTM baseline on the benchmark datasets by 23%. We also study the effect of the size of the fine-tuning dataset on the accuracy of our models. Our evaluation results show that fine-tuning with a large amount of emotion-labeled data improves both the robustness and the effectiveness of the resulting target-task models.
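The sequential transfer recipe, fine-tuning first on the large emotion-labeled Twitter corpus and then on the small target dataset, might look roughly like this; `twitter_loader`, `target_loader`, the 6-class label set, and the hyperparameters are hypothetical:

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForSequenceClassification

def fine_tune(model, loader, epochs=2, lr=2e-5, device="cpu"):
    """One fine-tuning stage over a DataLoader yielding dicts with
    input_ids / attention_mask / labels tensors."""
    opt = AdamW(model.parameters(), lr=lr)
    model.train().to(device)
    for _ in range(epochs):
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss
            loss.backward()
            opt.step()
            opt.zero_grad()
    return model

# Stage 1: adapt BERT on a large emotion-labeled Twitter corpus.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=6)           # 6 emotion classes (assumed)
# model = fine_tune(model, twitter_loader)       # hypothetical DataLoader
# Stage 2: continue on the small target dataset with the same label set.
# model = fine_tune(model, target_loader)
```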
Contrastive learning techniques have been widely used in computer vision as a means of augmenting datasets. In this paper, we extend the use of these contrastive learning embeddings to the sentiment analysis task and demonstrate that fine-tuning on these embeddings provides an improvement over fine-tuning on BERT-based embeddings, achieving higher benchmark results when evaluated on the DynaSent dataset. We also explore how well our fine-tuned models perform on cross-domain benchmark datasets. Additionally, we explore upsampling techniques to achieve a more balanced class distribution and further improve our benchmark results.
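The abstract does not name the contrastive objective used; as one hedged illustration, here is the widely used NT-Xent loss, assuming each batch carries aligned positive pairs:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss; rows z1[i] and z2[i] are a positive pair
    (e.g., two augmented views / encodings of the same sentence)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, d)
    sim = z @ z.t() / temperature                           # cosine similarities
    sim.fill_diagonal_(float('-inf'))                       # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)  # index of each positive
    return F.cross_entropy(sim, targets)
```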
In recent years, there has been increased interest in building predictive models that harness natural language processing and machine learning techniques to detect emotions from various text sources, including social media posts, micro-blogs or news articles. Yet, deployment of such models in real-world sentiment and emotion applications faces challenges, in particular poor out-of-domain generalizability. This is likely due to domain-specific differences (e.g., topics, communicative goals, and annotation schemes) that make transfer between different models of emotion recognition difficult. In this work we propose approaches for text-based emotion detection that leverage transformer models (BERT and RoBERTa) in combination with Bidirectional Long Short-Term Memory (BiLSTM) networks trained on a comprehensive set of psycholinguistic features. First, we evaluate the performance of our models within-domain on two benchmark datasets: GoEmotion and ISEAR. Second, we conduct transfer learning experiments on six datasets from the Unified Emotion Dataset to evaluate their out-of-domain robustness. We find that the proposed hybrid models improve the ability to generalize to out-of-distribution data compared to a standard transformer-based approach. Moreover, we observe that these models perform competitively on in-domain data.
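A minimal sketch of one way such a hybrid could be wired, with a BiLSTM summarizing per-token psycholinguistic features and its final states concatenated to the transformer's pooled output; the dimensions and the fusion-by-concatenation choice are assumptions:

```python
import torch
import torch.nn as nn

class HybridEmotionClassifier(nn.Module):
    """Concatenate a transformer's pooled text representation with a
    BiLSTM summary of per-token psycholinguistic features (a sketch)."""
    def __init__(self, text_dim=768, feat_dim=40, hidden=128, n_classes=7):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(text_dim + 2 * hidden, n_classes)

    def forward(self, pooled_text, psych_feats):
        # pooled_text: (B, text_dim) e.g. BERT/RoBERTa pooled output
        # psych_feats: (B, S, feat_dim) per-token psycholinguistic features
        _, (h_n, _) = self.bilstm(psych_feats)
        summary = torch.cat([h_n[0], h_n[1]], dim=-1)   # fwd + bwd final states
        return self.head(torch.cat([pooled_text, summary], dim=-1))
```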
Automatic identification of the emotions expressed in Twitter data has a wide range of applications. We create a well-balanced dataset by adding a neutral class to a benchmark dataset consisting of four emotions: fear, sadness, joy, and anger. On this extended dataset, we investigate the use of Support Vector Machines (SVM) and Bidirectional Encoder Representations from Transformers (BERT) for emotion recognition. We propose a novel ensemble model that combines the two BERT and SVM models. Experiments show that the proposed model achieves a state-of-the-art accuracy of 0.91 for emotion recognition in tweets.
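One plausible reading of the ensemble is soft voting over class probabilities; this sketch averages BERT softmax outputs with an SVM's predicted probabilities (the feature choice and equal weighting are assumptions, not the paper's confirmed scheme; `X_train_tfidf` and `y_train` are hypothetical):

```python
import numpy as np
from sklearn.svm import SVC

# SVM trained on e.g. TF-IDF features; probability=True enables predict_proba.
svm = SVC(probability=True).fit(X_train_tfidf, y_train)

def ensemble_predict(bert_probs, X_tfidf):
    """Soft voting: bert_probs is the (N, C) softmax output of the BERT
    classifier, with columns ordered like svm.classes_."""
    svm_probs = svm.predict_proba(X_tfidf)                  # (N, C)
    return np.argmax((bert_probs + svm_probs) / 2.0, axis=1)
```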
This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks in a particular CL setting called domain incremental learning (DIL). Each task comes from a different domain or product. The DIL setting is particularly suited to ASC because, at test time, the system does not need to know the task or domain to which the test data belongs. To our knowledge, this setting has not been studied before for ASC. This paper proposes a novel model called CLASSIC. Its key novelty is a contrastive continual learning method that enables both knowledge transfer across tasks and knowledge distillation from old tasks to the new task, which eliminates the need for task IDs at test time. Experimental results show the high effectiveness of CLASSIC.
Humans express feelings and emotions through different channels. Take language as an example: it entails different emotions under different visual-acoustic contexts. To precisely understand human intentions, and to reduce the misunderstandings caused by ambiguity and sarcasm, we should consider multimodal signals, including textual, visual, and acoustic signals. The crucial challenge is to fuse the different feature modalities for sentiment analysis. To effectively fuse the information carried by the different modalities and better predict sentiment, we design a novel multi-attention-based fusion network, inspired by the observation that the interactions between any two pairs of modalities are different and do not contribute equally to the final sentiment prediction. By assigning reasonable attention to the acoustic-visual, acoustic-textual, and visual-textual features and exploiting a residual structure, we attend to the important features. We conduct extensive experiments on four public multimodal datasets, including one in Chinese and three in English. The results show that our approach outperforms existing methods and can explain the contributions of the bimodal interactions among multiple modalities.
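A hedged sketch of pairwise bimodal attention with residual connections, in the spirit of the description above; the exact wiring, pooling, and dimensions are assumptions:

```python
import torch
import torch.nn as nn

class PairwiseFusion(nn.Module):
    """Multi-head-attention fusion over the three modality pairs with
    residual connections; a sketch, not the paper's exact architecture."""
    def __init__(self, dim=128, heads=4, n_classes=3):
        super().__init__()
        self.av = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.at = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.vt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(3 * dim, n_classes)

    def forward(self, a, v, t):                  # each: (B, S, dim)
        h_av, _ = self.av(a, v, v)               # acoustic attends to visual
        h_at, _ = self.at(a, t, t)               # acoustic attends to textual
        h_vt, _ = self.vt(v, t, t)               # visual attends to textual
        fused = torch.cat([a + h_av, a + h_at, v + h_vt], dim=-1)  # residuals
        return self.head(fused.mean(dim=1))      # pool over time, classify
```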
Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate the adapters' effectiveness, we transfer the recently proposed BERT Transformer model to 26 diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters per task. By contrast, fine-tuning trains 100% of the parameters per task.
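The core adapter module is a bottleneck with a residual connection; the sketch below also counts its parameters to make the parameter-efficiency argument concrete (the bottleneck size of 48 is an arbitrary illustrative choice):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Adapter module: project down, nonlinearity, project up, add the
    residual. Only these few parameters are trained per task."""
    def __init__(self, dim=768, bottleneck=48):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

# Freeze the pretrained network so the adapters carry all task learning:
# for p in bert.parameters():
#     p.requires_grad = False
adapter = BottleneckAdapter()
n = sum(p.numel() for p in adapter.parameters())
print(f"trainable adapter params per layer: {n:,}")  # ~75k vs ~7M in a BERT layer
```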
Social networking sites, blogs, and online articles are instant sources of news for internet users globally. However, in the absence of strict regulations mandating the genuineness of every text on social media, it is probable that some of these texts are fake news or rumours. Their deceptive nature and ability to propagate instantly can have an adverse effect on society. This necessitates more effective detection of fake news and rumours on the web. In this work, we annotate four fake news detection and rumour detection datasets with their emotion class labels using transfer learning. We show the correlation between the legitimacy of a text and its intrinsic emotion for fake news and rumour detection, and prove that even within the same emotion class, fake and real news are often represented differently, which can be used for improved feature extraction. Based on this, we propose a multi-task framework for fake news and rumour detection, predicting both the emotion and legitimacy of the text. We train a variety of deep learning models in single-task and multi-task settings for a more comprehensive comparison. We further analyze the performance of our multi-task approach for fake news detection in cross-domain settings to verify its efficacy for better generalization across datasets, and to verify that emotions act as a domain-independent feature. Experimental results verify that our multi-task models consistently outperform their single-task counterparts in terms of accuracy, precision, recall, and F1 score, both for in-domain and cross-domain settings. We also qualitatively analyze the difference in performance in single-task and multi-task learning models.
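The multi-task setup reduces to a shared text encoder with two classification heads trained jointly; a minimal sketch (the 6 emotion classes, the pooled-vector interface, and the loss weight alpha are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskHeads(nn.Module):
    """Two heads over one shared encoder: emotion and legitimacy."""
    def __init__(self, dim=768, n_emotions=6):
        super().__init__()
        self.emotion_head = nn.Linear(dim, n_emotions)
        self.legit_head = nn.Linear(dim, 2)            # fake vs. real

    def forward(self, pooled):                         # pooled: (B, dim)
        return self.emotion_head(pooled), self.legit_head(pooled)

def multitask_loss(emo_logits, legit_logits, emo_y, legit_y, alpha=0.5):
    """Weighted sum of the two task losses; alpha is a tunable assumption."""
    return (alpha * F.cross_entropy(emo_logits, emo_y)
            + (1 - alpha) * F.cross_entropy(legit_logits, legit_y))
```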
In this paper, we address the problem of multimodal emotion recognition from multiple physiological signals. We demonstrate that a Transformer-based approach is suitable for this task. In addition, we present how such models may be pretrained in a multimodal scenario to improve emotion recognition performance. We evaluate the benefits of using multimodal inputs and pre-training with our approach on a state-of-the-art dataset.
Transformer-based pretrained models with millions of parameters require large amounts of storage. Recent approaches tackle this shortcoming by training adapters, but these approaches still require a relatively large number of parameters. In this study, AdapterBias, a surprisingly simple yet effective adapter architecture, is proposed. AdapterBias adds a token-dependent shift to the hidden output of the transformer layers to adapt to downstream tasks, using only a vector and a linear layer. Extensive experiments are conducted to demonstrate the effectiveness of AdapterBias. The experiments show that our proposed method can dramatically reduce the number of trainable parameters compared to previous works, with a minimal decrease in task performance compared to fully fine-tuned pretrained models. We further find that AdapterBias automatically learns to assign larger representation shifts to the tokens that are more relevant to the task.
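Read literally, the description above ("a token-dependent shift... using only a vector and a linear layer") admits a very small implementation; this sketch is one such reading, not the paper's verified code:

```python
import torch
import torch.nn as nn

class AdapterBiasShift(nn.Module):
    """A shared shift vector v, scaled per token by a learned linear
    weighting alpha -- one reading of 'a vector and a linear layer'."""
    def __init__(self, dim=768):
        super().__init__()
        self.v = nn.Parameter(torch.zeros(dim))      # shared shift direction
        self.alpha = nn.Linear(dim, 1)               # token-dependent weight

    def forward(self, hidden):                       # hidden: (B, S, dim)
        return hidden + self.alpha(hidden) * self.v  # per-token shift
```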
Task embeddings are low-dimensional representations that are trained to capture task properties. In this paper, we propose MetaEval, a collection of 101 NLP tasks. We fit a single transformer to all MetaEval tasks jointly while conditioning it on learned task embeddings. The resulting task embeddings enable a novel analysis of the task space. We then show that task aspects can be mapped to task embeddings for new tasks without using any annotated examples. The predicted embeddings can modulate the encoder for zero-shot inference and outperform a zero-shot baseline on the GLUE tasks. The provided multitask setup can serve as a benchmark for future transfer learning research.
This paper presents research on spoiler detection. For this use case, we describe methods for fine-tuning and organizing text-based model tasks with the latest deep learning achievements, together with techniques for interpreting the models' results. So far, spoiler research has rarely been described in the literature. We tested transfer learning approaches and different state-of-the-art transformer architectures on datasets with annotated spoilers (achieving ROC AUC above 81% on the TV Tropes dataset). We also collected data and assembled a new dataset with fine-grained annotations. To that end, we employed interpretability techniques and measures to assess the models' reliability and explain their results.
Classification on long-tailed distributed data is a challenging problem that suffers from severe class imbalance and hence poor performance on tail classes, which have only a few samples. Owing to this paucity of samples, learning the tail classes is especially challenging for fine-tuning when transferring a pretrained model to a downstream task. In this work, we present a simple modification of standard fine-tuning to cope with these challenges. Specifically, we propose a two-stage fine-tuning: we first fine-tune the final layer of the pretrained model with a class-balanced reweighted loss, and then we perform standard fine-tuning. Our modification has several benefits: (1) it leverages pretrained representations by fine-tuning only a small portion of the model parameters while keeping the rest untouched; (2) it allows the model to learn an initial representation of the specific task; and, importantly, (3) it protects the learning of the tail classes from being at a disadvantage during model updates. We conduct extensive experiments on synthetic datasets for both two-class and multi-class text classification tasks, as well as a real-world application to ADME (i.e., absorption, distribution, metabolism, and excretion) semantic labeling. The experimental results show that the proposed two-stage fine-tuning outperforms both fine-tuning with the conventional loss and fine-tuning with the reweighted loss on the above datasets.
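The two-stage recipe could be sketched as follows, with inverse-frequency class weights standing in for the class-balanced reweighting (the exact weighting, the `.classifier` attribute, and the `train` helper are hypothetical):

```python
import torch
import torch.nn as nn

def stage_one(model, loader, class_counts):
    """Stage 1: train only the final layer with a class-balanced
    reweighted loss (inverse-frequency weights are one concrete choice)."""
    for p in model.parameters():
        p.requires_grad = False
    for p in model.classifier.parameters():          # assumed head attribute
        p.requires_grad = True
    w = 1.0 / torch.tensor(class_counts, dtype=torch.float)
    criterion = nn.CrossEntropyLoss(weight=w / w.sum() * len(class_counts))
    opt = torch.optim.AdamW(model.classifier.parameters(), lr=1e-3)
    train(model, loader, criterion, opt)             # hypothetical train loop

def stage_two(model, loader):
    """Stage 2: unfreeze everything and run standard fine-tuning."""
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
    train(model, loader, nn.CrossEntropyLoss(), opt)
```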
This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks. Although some CL techniques have been proposed for document sentiment classification, we are not aware of any CL work on ASC. A CL system that incrementally learns a sequence of ASC tasks should address the following two issues: (1) transfer the knowledge learned from previous tasks to the new task to help it learn a better model, and (2) maintain the performance of the models for previous tasks so that they are not forgotten. This paper proposes a novel capsule network based model called B-CL to address these issues. B-CL markedly improves ASC performance on both the new task and the old tasks via forward and backward knowledge transfer. The effectiveness of B-CL is demonstrated through extensive experiments.
Emotion recognition in conversations (ERC) is an important and active research problem. Recent work has shown the benefits of using multiple modalities (e.g., text, audio, and video) for the ERC task. In a conversation, participants tend to maintain a particular emotional state unless some external stimulus evokes a change; there is a continuous ebb and flow of emotions in a conversation. Inspired by this observation, we propose a multimodal ERC model augmented with an emotion-shift component. The proposed emotion-shift component is modular and can be added to any existing multimodal ERC model (with a few modifications) to improve emotion recognition. We experiment with different variants of the model, and the results show that including the emotion-shift signal helps the model outperform existing multimodal models for ERC, achieving state-of-the-art performance on the MOSEI and IEMOCAP datasets.
Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art language model pre-training model, BERT (Bidirectional Encoder Representations from Transformers) has achieved amazing results in many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on the text classification task and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.
Research on and applications of multimodal emotion recognition have recently become increasingly popular. However, multimodal emotion recognition faces the challenge of a lack of data. To solve this problem, we propose to use transfer learning, which leverages state-of-the-art pretrained models, including wav2vec 2.0 and BERT, for this task. Multi-level fusion approaches are explored, including a coattention-based early fusion and a late fusion with models trained on both embeddings. In addition, a multi-granularity framework is proposed, which extracts not only frame-level speech embeddings but also segment-level embeddings, including phone-, syllable-, and word-level speech embeddings, to further boost performance. By combining the coattention-based early fusion model and the late fusion model with the multi-granularity feature extraction framework, we obtain results that outperform the best baseline methods on the IEMOCAP dataset in terms of unweighted accuracy (UA).
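Late fusion in particular has a very small core; this sketch averages class distributions from the audio and text models (the fixed weight is an assumption, and the coattention-based early fusion is not shown):

```python
import torch

def late_fusion(audio_logits, text_logits, w=0.5):
    """Weighted average of class distributions from a wav2vec 2.0-based
    audio model and a BERT-based text model; w is an assumed weight."""
    probs = w * audio_logits.softmax(-1) + (1 - w) * text_logits.softmax(-1)
    return probs.argmax(dim=-1)
```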
The health mention classification (HMC) task is the process of identifying and classifying mentions of health-related concepts in text. This can be useful for identifying and tracking the spread of diseases through social media posts. However, this is a non-trivial task. Here we build on recent studies suggesting that using emotional information may improve upon this task. Our study results in a framework for health mention classification that incorporates affective features. We present two methods, an intermediate task fine-tuning approach (implicit) and a multi-feature fusion approach (explicit) to incorporate emotions into our target task of HMC. We evaluated our approach on 5 HMC-related datasets from different social media platforms including three from Twitter, one from Reddit and another from a combination of social media sources. Extensive experiments demonstrate that our approach results in statistically significant performance gains on HMC tasks. By using the multi-feature fusion approach, we achieve at least a 3% improvement in F1 score over BERT baselines across all datasets. We also show that considering only negative emotions does not significantly affect performance on the HMC task. Additionally, our results indicate that HMC models infused with emotional knowledge are an effective alternative, especially when other HMC datasets are unavailable for domain-specific fine-tuning. The source code for our models is freely available at https://github.com/tahirlanre/Emotion_PHM.
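The explicit (multi-feature fusion) variant plausibly reduces to concatenating affective features with the text representation before the classifier head; a minimal sketch under that assumption (dimensions and the concatenation choice are not confirmed by the paper):

```python
import torch
import torch.nn as nn

class AffectiveFusionHMC(nn.Module):
    """Concatenate the BERT pooled representation with an affective
    feature vector (e.g., emotion scores from an auxiliary model)
    before classifying a health mention -- a sketch only."""
    def __init__(self, text_dim=768, affect_dim=8, n_classes=2):
        super().__init__()
        self.head = nn.Linear(text_dim + affect_dim, n_classes)

    def forward(self, cls_repr, affect_feats):
        # cls_repr: (B, text_dim), affect_feats: (B, affect_dim)
        return self.head(torch.cat([cls_repr, affect_feats], dim=-1))
```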