In recent years, there has been increased interest in building predictive models that harness natural language processing and machine learning techniques to detect emotions from various text sources, including social media posts, micro-blogs, and news articles. Yet, deployment of such models in real-world sentiment and emotion applications faces challenges, in particular poor out-of-domain generalizability. This is likely due to domain-specific differences (e.g., topics, communicative goals, and annotation schemes) that make transfer between different models of emotion recognition difficult. In this work we propose approaches for text-based emotion detection that leverage transformer models (BERT and RoBERTa) in combination with Bidirectional Long Short-Term Memory (BiLSTM) networks trained on a comprehensive set of psycholinguistic features. First, we evaluate the performance of our models within-domain on two benchmark datasets: GoEmotions and ISEAR. Second, we conduct transfer learning experiments on six datasets from the Unified Emotion Dataset to evaluate their out-of-domain robustness. We find that the proposed hybrid models improve the ability to generalize to out-of-distribution data compared to a standard transformer-based approach. Moreover, we observe that these models perform competitively on in-domain data.
With the recent proliferation of open textual data on social media platforms, emotion detection (ED) from text has received increasing attention over the past few years. It has many applications, particularly for businesses and online service providers, where emotion detection techniques can support informed commercial decisions by analyzing how customers and users feel about products and services. In this study, we introduce ArmanEmo, a human-labeled emotion dataset of more than 7,000 Persian sentences labeled for seven categories. The dataset was collected from different resources, including Twitter, Instagram, and customer comments from Digikala (an Iranian e-commerce company). Labels are based on Ekman's six basic emotions (anger, fear, happiness, hatred, sadness, wonder) plus another category (other) to account for any emotion not covered by Ekman's model. Along with the dataset, we provide several baseline models for emotion classification, focusing on state-of-the-art transformer-based language models. Our best model achieves a macro-averaged score of 75.39% on our test set. Moreover, we conduct transfer learning experiments to compare the generalizability of our proposed dataset against other Persian emotion datasets. The results of these experiments suggest that our dataset generalizes better than the existing Persian emotion datasets. ArmanEmo is publicly available at https://github.com/arman-rayan-sharif/arman-text-emotion.
In recent years, there has been a surge of interest in research on automatic mental health detection (MHD) from social media data leveraging advances in natural language processing and machine learning techniques. While significant progress has been achieved in this interdisciplinary research area, the vast majority of work has treated MHD as a binary classification task. The multiclass classification setup is, however, essential if we are to uncover the subtle differences among the statistical patterns of language use associated with particular mental health conditions. Here, we report on experiments aimed at predicting six conditions (anxiety, attention deficit hyperactivity disorder, bipolar disorder, post-traumatic stress disorder, depression, and psychological stress) from Reddit social media posts. We explore and compare the performance of hybrid and ensemble models leveraging transformer-based architectures (BERT and RoBERTa) and BiLSTM neural networks trained on within-text distributions of a diverse set of linguistic features. This set encompasses measures of syntactic complexity, lexical sophistication and diversity, readability, and register-specific ngram frequencies, as well as sentiment and emotion lexicons. In addition, we conduct feature ablation experiments to investigate which types of features are most indicative of particular mental health conditions.
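The within-text linguistic features described above can be illustrated with a minimal sketch. The two measures below (type-token ratio as a lexical-diversity measure, mean sentence length in words as a crude readability proxy) are simplified stand-ins for the much richer feature set the study uses:

```python
import re

def linguistic_features(text):
    """Toy extractor for two psycholinguistic-style features:
    type-token ratio (lexical diversity) and mean sentence length
    in words (a crude readability proxy)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"[a-z']+", text.lower())
    ttr = len(set(tokens)) / len(tokens) if tokens else 0.0
    mean_sent_len = len(tokens) / len(sentences) if sentences else 0.0
    return {"type_token_ratio": ttr, "mean_sentence_length": mean_sent_len}

feats = linguistic_features("I feel anxious. I feel anxious all the time.")
```

Feature vectors of this kind, computed per post, are what the BiLSTM branch of such hybrid models consumes alongside the transformer's contextual representation.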
Transfer learning has been widely used in natural language processing through deep pre-trained language models, such as Bidirectional Encoder Representations from Transformers (BERT) and the Universal Sentence Encoder. Despite their great success, language models tend to overfit when applied to small datasets and are prone to forgetting when fine-tuned with a classifier. To remedy this problem of forgetting when transferring deep pre-trained language models from one domain to another, existing efforts explore fine-tuning methods that reduce forgetting. We propose DeepEmotex, an effective sequential transfer learning method for detecting emotion in text. To avoid the forgetting problem, the fine-tuning step is instrumented with a large amount of emotion-labeled data collected from Twitter. We conduct an experimental study using both curated Twitter datasets and benchmark datasets. DeepEmotex models achieve over 91% accuracy for multi-class emotion classification on the test dataset. We evaluate the performance of the fine-tuned DeepEmotex models in classifying emotion in the EmoInt and Stimulus benchmark datasets. The models correctly classify emotion in 73% of the instances in the benchmark datasets. The proposed DeepEmotex-BERT model outperforms a Bi-LSTM baseline on the benchmark datasets by 23%. We also study the effect of the size of the fine-tuning dataset on model accuracy. Our evaluation results show that fine-tuning with a large set of emotion-labeled data improves both the robustness and the effectiveness of the final target-task models.
Sarcasm can be defined as saying or writing the opposite of what one truly wants to express, usually to insult, irritate, or amuse someone. Because of the obscure nature of sarcasm in textual data, its detection is difficult and of great interest to the sentiment analysis research community. Though research on sarcasm detection spans more than a decade, some significant advancements have been made recently, including the adoption of unsupervised pre-trained transformers in multimodal settings and the integration of context to identify sarcasm. In this study, we aim to provide a brief overview of recent advancements and trends in computational sarcasm research for the English language. We describe the relevant datasets, methodologies, trends, issues, challenges, and open tasks related to sarcasm detection. Our study presents tables summarizing sarcasm datasets, sarcastic features and their extraction methods, and performance analyses of various approaches, which can help researchers in related fields understand current state-of-the-art practices in sarcasm detection.
People's behaviors and reactions are driven by their emotions, and online social media is becoming a powerful tool for expressing emotions in written form. Paying attention to the context and the entire sentence helps us detect emotion in text. However, this perspective can keep us from noticing specific emotional words or phrases, especially when the words express an emotion implicitly rather than explicitly. On the other hand, focusing only on the words and ignoring the context leads to a distorted understanding of a sentence's meaning and feeling. In this paper, we propose a framework that analyzes text at both the sentence and word levels. We name it CEFER (Context and Emotion embedded Framework for Emotion Recognition). Our four-pronged approach extracts data by considering the entire sentence and each individual word simultaneously, along with both implicit and explicit emotions. The knowledge obtained from these data not only mitigates the impact of the flaws of the preceding approaches but also strengthens the feature vector. We evaluate several feature spaces using the BERT family of models and design CEFER based on them. CEFER combines the emotion vector of each word, including explicit and implicit emotions, with the context-based feature vector of each word. CEFER performs better than the BERT family. Experimental results demonstrate that identifying implicit emotions is more challenging than detecting explicit ones. CEFER improves the accuracy of implicit emotion recognition. According to the results, CEFER outperforms the BERT family by 5% in identifying explicit emotions and by 3% for implicit ones.
State-of-the-art text simplification (TS) systems adopt end-to-end neural network models to directly generate the simplified version of the input text, and usually function as a blackbox. Moreover, TS is usually treated as an all-purpose generic task under the assumption of homogeneity, where the same simplification is suitable for all. In recent years, however, there has been increasing recognition of the need to adapt the simplification techniques to the specific needs of different target groups. In this work, we aim to advance current research on explainable and controllable TS in two ways: First, building on recently proposed work to increase the transparency of TS systems, we use a large set of (psycho-)linguistic features in combination with pre-trained language models to improve explainable complexity prediction. Second, based on the results of this preliminary task, we extend a state-of-the-art Seq2Seq TS model, ACCESS, to enable explicit control of ten attributes. The results of experiments show (1) that our approach improves the performance of state-of-the-art models for predicting explainable complexity and (2) that explicitly conditioning the Seq2Seq model on ten attributes leads to a significant improvement in performance in both within-domain and out-of-domain settings.
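Explicit attribute control in Seq2Seq simplification is typically realized by prepending discretized control tokens to the source sentence, so the model learns to condition its output on requested attribute ratios. A minimal sketch of this mechanism follows; the token format and attribute names here are illustrative assumptions, not the exact vocabulary used in the paper:

```python
def add_control_tokens(source, controls):
    """Prepend discretized control tokens (ACCESS-style conditioning;
    the exact token format is illustrative) so a Seq2Seq model can
    condition its output on requested attribute ratios."""
    prefix = " ".join(f"<{name}_{ratio:.2f}>" for name, ratio in controls)
    return f"{prefix} {source}"

# Hypothetical attributes: target character-length ratio and word-rank ratio.
inp = add_control_tokens(
    "The committee deliberated at considerable length.",
    [("NbChars", 0.8), ("WordRank", 0.75)],
)
```

At training time, the ratios are computed from each source-target pair; at inference time, the user sets them to control the simplification.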
Social networking sites, blogs, and online articles are instant sources of news for internet users globally. However, in the absence of strict regulations mandating the genuineness of every text on social media, it is probable that some of these texts are fake news or rumours. Their deceptive nature and ability to propagate instantly can have an adverse effect on society. This necessitates the need for more effective detection of fake news and rumours on the web. In this work, we annotate four fake news detection and rumour detection datasets with their emotion class labels using transfer learning. We show the correlation between the legitimacy of a text with its intrinsic emotion for fake news and rumour detection, and prove that even within the same emotion class, fake and real news are often represented differently, which can be used for improved feature extraction. Based on this, we propose a multi-task framework for fake news and rumour detection, predicting both the emotion and legitimacy of the text. We train a variety of deep learning models in single-task and multi-task settings for a more comprehensive comparison. We further analyze the performance of our multi-task approach for fake news detection in cross-domain settings to verify its efficacy for better generalization across datasets, and to verify that emotions act as a domain-independent feature. Experimental results verify that our multi-task models consistently outperform their single-task counterparts in terms of accuracy, precision, recall, and F1 score, both for in-domain and cross-domain settings. We also qualitatively analyze the difference in performance in single-task and multi-task learning models.
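A multi-task setup of this kind can be sketched as a shared text representation feeding two classification heads (emotion and legitimacy) whose cross-entropy losses are combined into one training objective. Everything below (weight matrices, the mixing weight `alpha`, dimensions) is a toy illustration, not the paper's architecture:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(probs, gold):
    return -math.log(probs[gold])

def multitask_loss(shared_repr, W_emotion, W_legit, y_emotion, y_legit, alpha=0.5):
    """Joint loss over two heads reading the same shared representation:
    an emotion head and a legitimacy (fake/real) head."""
    def head_logits(W):
        return [sum(w * x for w, x in zip(row, shared_repr)) for row in W]
    loss_emotion = cross_entropy(softmax(head_logits(W_emotion)), y_emotion)
    loss_legit = cross_entropy(softmax(head_logits(W_legit)), y_legit)
    return alpha * loss_emotion + (1 - alpha) * loss_legit

repr_vec = [0.2, -0.1, 0.4]                   # toy shared representation
W_e = [[1.0, 0.0, 0.5], [0.0, 1.0, -0.5]]     # 2 emotion classes (toy)
W_l = [[0.3, 0.3, 0.3], [-0.3, -0.3, -0.3]]   # real vs. fake (toy)
loss = multitask_loss(repr_vec, W_e, W_l, y_emotion=0, y_legit=1)
```

Minimizing this joint loss pushes the shared representation to encode both the emotion signal and the legitimacy signal, which is the mechanism behind the reported multi-task gains.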
The recognition of hate speech and offensive language (HOF) is commonly formulated as a classification task that decides whether a text contains HOF. We investigate whether HOF detection can profit from considering the relationships between HOF and similar concepts: (a) HOF is related to sentiment analysis, because hate speech is typically a negative statement and expresses a negative opinion; (b) it is related to emotion analysis, as expressed hate points to the author experiencing (or pretending to experience) anger while the addressees experience (or are intended to experience) fear; (c) finally, one constituting element of HOF is the mention of a targeted person or group. On this basis, we hypothesize that HOF detection improves when it is jointly modeled with these concepts in a multi-task learning setup. We base our experiments on existing datasets for these concepts (sentiment, emotion, target of HOF) and evaluate our models as participants (as team IMS-SINAI) in the HASOC FIRE 2021 English Subtask 1A. Based on model-selection experiments in which we considered multiple available resources and submissions to the shared task, we found that the combination of the CrowdFlower emotion corpus, the SemEval 2016 sentiment corpus, and the OffensEval 2019 target-detection data leads to F1 = .79 in a BERT-based multi-task learning model, compared to .7895 for plain BERT. On the HASOC 2019 test data, this result is more substantial, with an increase of 2pp in F1 and a considerable increase in recall. Across both datasets (2019, 2021), the recall on the HOF class increases in particular (by 6pp for the 2019 data and 3pp for the 2021 data), indicating that multi-task learning with emotion, sentiment, and target identification is an appropriate approach for early-warning systems that might be deployed on social media platforms.
Sentiment analysis in software engineering (SE) has shown promise for analyzing and supporting diverse development activities. We report the results of an empirical study that we conducted to determine the feasibility of developing an ensemble engine by combining the polarity labels of stand-alone SE-specific sentiment detectors. Our study has two phases. In the first phase, we pick five SE-specific sentiment detection tools from two recently published papers by Lin et al. [31, 32], who first reported negative results with stand-alone sentiment detectors and then proposed an improved SE-specific sentiment detector, POME [31]. We report results on 17,581 units (sentences/documents) coming from six currently available sentiment benchmarks for SE. We find that the existing tools can be complementary to each other in 85-95% of the cases, i.e., one is wrong but another is right. However, a majority-voting-based ensemble of these tools fails to improve the accuracy of sentiment detection. We develop Sentisead, a supervised tool that combines the polarity labels and bags of words as features. Sentisead improves the performance (F1-score) of the individual tools by 4% (over Senti4SD [5]) to 100% (over POME [31]). In the second phase, we compare and improve the Sentisead infrastructure using pre-trained transformer models (PTMs). We find that a Sentisead infrastructure with RoBERTa as the ensemble of the five stand-alone rule-based and shallow-learning SE-specific tools from Lin et al. [31, 32] offers the best F1-score of 0.805 across the six datasets, while a stand-alone RoBERTa shows an F1-score of 0.801.
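The two building blocks of the first phase, majority voting over per-tool polarity labels and measuring complementarity (exactly one of two tools being right), can be sketched as follows; the tie-breaking rule and the toy labels are assumptions:

```python
from collections import Counter

def majority_vote(labels):
    """Majority vote over per-tool polarity labels; ties fall back to
    'neutral' (the tie-breaking rule is an assumption, not the paper's)."""
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "neutral"
    return counts[0][0]

def complementarity(preds_a, preds_b, gold):
    """Fraction of units where exactly one of two tools is right: the
    'one is wrong but another is right' cases counted in the study."""
    hits = sum((a == g) != (b == g) for a, b, g in zip(preds_a, preds_b, gold))
    return hits / len(gold)

vote = majority_vote(["positive", "positive", "negative", "neutral", "positive"])
comp = complementarity(
    ["pos", "neg", "pos", "neu"],
    ["pos", "pos", "neg", "neg"],
    ["pos", "pos", "pos", "neu"],
)
```

High complementarity with no gain from plain voting is exactly the situation where a supervised combiner such as Sentisead, which learns when to trust which tool, can still help.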
News and social networks can reflect the opinions of analysts and the general public about the market and specific stocks, from the perspective of the products and/or services offered by companies. Sentiment analysis of these texts can therefore provide useful information to help investors trade in the market. In this paper, we propose to determine the sentiment associated with companies and stocks by predicting a real-valued score in the range between -1 and +1. Specifically, we fine-tune a RoBERTa model to process headlines and microblogs and combine it with additional transformer layers that process sentence analysis with sentiment lexicons to improve the sentiment analysis. We evaluated our approach on the financial data released for SemEval-2017 Task 5, and it outperforms both the best systems of SemEval-2017 Task 5 and strong baselines. Indeed, the combination of contextual sentence analysis with financial and general sentiment lexicons provides useful information to our model and allows it to produce more reliable sentiment scores.
Personality detection is an old topic in psychology, and Automatic Personality Prediction (or Perception) (APP) is the automated (computational) prediction of personality from different types of human-generated/exchanged content (such as text, speech, images, and video). The principal objective of this study is to provide a shallow (overall) review of natural language processing approaches to APP since 2010. With the advent of deep learning, followed by transfer learning and pre-trained models in NLP, the APP research area has become a hot topic, so in this review the methods are categorized into three groups: pre-training-independent, pre-trained-model-based, and multimodal approaches. Moreover, to enable a comprehensive comparison, the reported results are presented together with the datasets on which they were obtained.
Hope is characterized as openness of spirit toward the future, a desire, expectation, and wish for something to happen or to be true that remarkably affects humans' states of mind, emotions, behaviors, and decisions. Hope is usually associated with concepts of desired expectations and possibility/probability concerning the future. Despite its importance, hope has rarely been studied as a social media analysis task. This paper presents a hope speech dataset that classifies each tweet first into "Hope" and "Not Hope", then into three fine-grained hope categories: "Generalized Hope", "Realistic Hope", and "Unrealistic Hope" (along with "Not Hope"). English tweets from the first half of 2022 were collected to build this dataset. Furthermore, we describe our annotation process and guidelines in detail and discuss the challenges of classifying hope and the limitations of the existing hope speech detection corpora. In addition, we report several baselines based on different learning approaches, such as traditional machine learning, deep learning, and transformers, to benchmark our dataset. We evaluate our baselines using weighted-averaged and macro-averaged F1-scores. Observations show that a strict process for annotator selection and detailed annotation guidelines enhanced the dataset's quality. This strict annotation process resulted in promising performance for simple machine learning classifiers with only bi-grams; however, binary and multiclass hope speech detection results reveal that contextual embedding models achieve higher performance on this dataset.
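Weighted-averaged and macro-averaged F1 differ only in whether per-class scores are averaged uniformly or weighted by class support, which matters for imbalanced categories like the fine-grained hope classes. A from-scratch sketch with toy labels:

```python
from collections import Counter

def f1_scores(gold, pred):
    """Per-class F1, plus macro (unweighted mean) and weighted
    (support-weighted mean) averages, computed from scratch."""
    labels = sorted(set(gold))
    support = Counter(gold)
    per_class = {}
    for c in labels:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_class[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    macro = sum(per_class.values()) / len(labels)
    weighted = sum(per_class[c] * support[c] for c in labels) / len(gold)
    return per_class, macro, weighted

gold = ["hope", "hope", "hope", "not_hope"]
pred = ["hope", "hope", "not_hope", "not_hope"]
per_class, macro, weighted = f1_scores(gold, pred)
```

With imbalanced data the weighted average is pulled toward the majority class's F1, which is why reporting both is informative.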
The sentiment analysis task has various applications in practice. In sentiment analysis, words and phrases that represent positive and negative emotions are important. Identifying the words that convey emotion in a text can improve the performance of classification models for sentiment analysis. In this paper, we propose a methodology that combines an emotion lexicon with the classification model to enhance model accuracy. Our experimental results show that combining the emotion lexicon with the classification model improves model performance.
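One simple way to combine an emotion lexicon with a classifier is to compute lexicon-based ratios per text and concatenate them with the model's other input features. The lexicon entries below are toy examples, not the paper's resource:

```python
def lexicon_features(tokens, lexicon):
    """Counts of positive- and negative-emotion words from a lexicon,
    normalized by text length; these scores can be concatenated with a
    classifier's input features."""
    pos = sum(lexicon.get(t) == "positive" for t in tokens)
    neg = sum(lexicon.get(t) == "negative" for t in tokens)
    n = len(tokens) or 1
    return {"pos_ratio": pos / n, "neg_ratio": neg / n}

# Toy lexicon; real resources map thousands of words to emotion polarities.
LEXICON = {"great": "positive", "love": "positive", "awful": "negative"}
feats = lexicon_features("i love this awful weather".split(), LEXICON)
```

The classifier then receives both its usual text representation and these explicit emotion signals, which is the combination the paper credits for the accuracy gain.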
Anticipating audience reaction to a certain text is integral to several facets of society, including politics, research, and commercial industries. Sentiment analysis (SA) is a useful natural language processing (NLP) technique that utilizes lexical/statistical and deep learning methods to determine whether texts of different sizes exhibit positive, negative, or neutral emotion. However, there is currently a lack of tools that can analyze groups of independent texts and extract the prevailing sentiment from the whole set. Therefore, the current paper proposes a novel algorithm referred to as the Multi-Layered Tweet Analyzer (MLTA), which graphically models social media text using multi-layered networks (MLNs) in order to better encode relationships across independent sets of tweets. Compared to other representation methods, graph structures are capable of capturing meaningful relationships in complex ecosystems. State-of-the-art graph neural networks (GNNs) are used to extract information from the tweet-MLN and make predictions based on the extracted graph features. Results show that the MLTA not only predicts from a larger set of possible emotions, delivering a more accurate sentiment than the standard positive, negative, or neutral labels, but also allows for accurate group-level predictions on Twitter data.
The widespread of offensive content online, such as hate speech and cyber-bullying, is a global phenomenon. This has sparked interest in the artificial intelligence (AI) and natural language processing (NLP) communities, motivating the development of various systems trained to detect potentially harmful content automatically. These systems require annotated datasets to train the machine learning (ML) models. However, with a few notable exceptions, most datasets on this topic have dealt with English and a few other high-resource languages. As a result, the research in offensive language identification has been limited to these languages. This paper addresses this gap by tackling offensive language identification in Sinhala, a low-resource Indo-Aryan language spoken by over 17 million people in Sri Lanka. We introduce the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level, improving the explainability of the ML models. SOLD is the first large publicly available offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
Understanding customer feedback is becoming a necessity for companies to identify problems and improve their products and services. Text classification and sentiment analysis can play a major role in analyzing this data by using a variety of machine and deep learning approaches. In this work, different transformer-based models are utilized to explore how efficient these models are when working with a German customer feedback dataset. In addition, these pre-trained models are further analyzed to determine if adapting them to a specific domain using unlabeled data can yield better results than off-the-shelf pre-trained models. To evaluate the models, two downstream tasks from the GermEval 2017 are considered. The experimental results show that transformer-based models can reach significant improvements compared to a fastText baseline and outperform the published scores and previous models. For the subtask Relevance Classification, the best models achieve a micro-averaged $F1$-Score of 96.1 % on the first test set and 95.9 % on the second one, and a score of 85.1 % and 85.3 % for the subtask Polarity Classification.
Online reviews have a significant influence on customers' purchasing decisions for any product or service. However, fake reviews can mislead both consumers and companies. Several models have been developed to detect fake reviews using machine learning approaches, but many of them have limitations that result in low accuracy in distinguishing fake from genuine reviews. These models focus only on linguistic features and fail to capture the semantic meaning of the reviews. To address this, the paper proposes a new ensemble model that employs transformer architectures to discover hidden patterns in sequences of fake reviews and detect them precisely. The proposed approach combines three transformer models to improve the robustness of modeling and profiling fake and genuine behavior for fake review detection. Experimental results on a semi-real benchmark dataset show the superiority of the proposed model.
Inferring politically charged information from text data is a popular research topic in natural language processing (NLP) at both the text and author levels. In recent years, studies of this kind have been carried out with the aid of representations from transformers such as BERT. Despite considerable success, we may ask whether results can be improved even further by combining transformer-based models with other knowledge representations. To shed light on this question, this work describes a series of experiments comparing alternative model configurations for political inference from text in English and Portuguese. The results suggest that certain text representations, in particular the joint use of BERT pre-trained language models with a syntactic-dependency model, may outperform the alternatives across multiple experimental settings, making a potentially strong case for further research on heterogeneous text representations in these and possibly other NLP tasks.
Social media networks have become an essential aspect of people's lives, serving as a platform for their ideas, opinions, and emotions. Consequently, automated sentiment analysis (SA) is critical for recognizing people's feelings in ways that other information sources cannot. The analysis of these feelings supports various applications, including brand evaluation, YouTube film reviews, and healthcare applications. As social media continues to develop, people post vast amounts of information in different forms, including text, photos, audio, and video. Thus, traditional SA algorithms have become limited, as they do not account for the expressiveness of the other modalities. By including such characteristics from diverse material sources, these multimodal data streams provide new opportunities for improving on the expected results of text-based SA. Our study focuses on the cutting-edge field of multimodal SA, which examines visual and textual data posted on social media networks, since many people are more likely to use such information to express themselves on these platforms. To serve as a resource for scholars in this rapidly growing field, we present a comprehensive overview of textual and visual SA, including data preprocessing, feature extraction techniques, sentiment benchmark datasets, and the efficacy of multiple classification methodologies suited to each field. We also provide a brief introduction to the most commonly used data-fusion strategies and a summary of existing research on visual-textual SA. Finally, we highlight the most significant challenges and survey several important sentiment applications.