This paper presents our submission to the SMM4H 2022 shared task on self-reported intimate partner violence on Twitter (in English). The goal of the task is to accurately determine whether the content of a given tweet demonstrates someone reporting their own experience with intimate partner violence. The submitted system is an ensemble of five RoBERTa models, each weighted by its F1 score on the validation dataset. The system outperformed the baseline by 13% and was the best-performing system overall for this shared task.
translated by Google Translate
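The F1-weighted ensembling described above can be illustrated with a minimal sketch: each model contributes its per-class probabilities, scaled by its validation F1 score. All names, probabilities, and F1 values below are illustrative, not the authors' actual code or numbers.

```python
# Toy sketch: combine per-model class probabilities, weighting each
# model by its F1 score on a validation set (illustrative values only).
def f1_weighted_ensemble(probs_per_model, f1_scores):
    """probs_per_model: one [p_class0, p_class1, ...] vector per model.
    f1_scores: validation F1 of each model, used as its ensemble weight."""
    total = sum(f1_scores)
    weights = [f / total for f in f1_scores]
    n_classes = len(probs_per_model[0])
    combined = [
        sum(w * p[c] for w, p in zip(weights, probs_per_model))
        for c in range(n_classes)
    ]
    return combined.index(max(combined)), combined

# Five hypothetical models scoring one tweet for [not-IPV, IPV].
label, combined = f1_weighted_ensemble(
    [[0.2, 0.8], [0.6, 0.4], [0.3, 0.7], [0.1, 0.9], [0.4, 0.6]],
    [0.70, 0.55, 0.68, 0.74, 0.60],
)
```

Weighting by validation F1 lets stronger models dominate the vote while still letting weaker models break near-ties.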
Automated analysis of social network data is one of the classic challenges of natural language processing. During the COVID-19 pandemic, mining people's stances from public messages became crucial for understanding attitudes towards health mandates. In this paper, the authors propose predictive models based on the transformer architecture to classify premises in Twitter texts. This work was completed as part of the Social Media Mining for Health (SMM4H) 2022 workshop. We explore modern transformer-based classifiers in order to build a pipeline that effectively captures tweet semantics. Our experiments on a Twitter dataset show that RoBERTa outperforms the other transformer models on the premise prediction task. The model achieves competitive performance, with a ROC AUC of 0.807 and an F1 score of 0.7648.
The BioCreative VII Track 3 challenge focuses on identifying medication names in Twitter user timelines. For our submission to this challenge, we expanded the available training data using multiple data augmentation techniques. The augmented data were then used to fine-tune an ensemble of language models pre-trained on general-domain Twitter content. The proposed approach outperforms the prior state-of-the-art algorithm, Kusuri, and ranked high in the competition on our chosen objective function, the overlapping F1 score.
In the current Internet era, social media platforms are easily reachable by everyone, and people often have to deal with threats, identity attacks, hate, and bullying on account of caste, creed, gender, religion, or even their acceptance or rejection of a notion. Existing work in hate speech detection focuses mostly on classifying individual comments as a sequence-labeling task, and often fails to consider the context of the conversation. The context of a conversation often plays a substantial role in determining the author's intent and the sentiment behind a post. This paper describes the system proposed by team IIIT-D for the HASOC shared task, the first shared task focused on detecting hate speech in Hindi-English code-mixed conversations on Twitter. We approach this problem using neural networks, leveraging cross-lingual transformer embeddings and further fine-tuning them for low-resource hate speech classification on transliterated Hindi text. Our best-performing system, a hard-voting ensemble of XLM-RoBERTa and multilingual BERT, achieved a macro F1 score of 0.7253, placing us first on the overall leaderboard.
Social media posts contain potentially valuable information about medical conditions and health-related behaviors. BioCreative VII Task 3 focuses on mining this information by recognizing mentions of medications and dietary supplements in tweets. We approach this task by fine-tuning multiple BERT-style language models to perform token-level classification, and by combining them into ensembles to generate the final predictions. Our best system consists of five Megatron-BERT-345M models and achieves a strict F1 score of 0.764 on unseen test data.
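A token-level ensemble of the kind described above can be sketched as a per-token majority vote over each model's BIO tags. This is a toy illustration: the abstract does not specify the authors' exact combination rule, and the tags and tie-breaking below are assumptions.

```python
from collections import Counter

def merge_token_predictions(tag_sequences):
    """Majority-vote BIO tags per token across models. Ties are broken
    by the order the models are listed (an illustrative choice, not
    necessarily the paper's rule)."""
    merged = []
    for token_tags in zip(*tag_sequences):
        merged.append(Counter(token_tags).most_common(1)[0][0])
    return merged

# Five hypothetical models tagging the tweet "took advil today".
preds = [
    ["O",      "B-DRUG", "O"],
    ["O",      "B-DRUG", "O"],
    ["O",      "O",      "O"],
    ["B-DRUG", "B-DRUG", "O"],
    ["O",      "B-DRUG", "O"],
]
print(merge_token_predictions(preds))
```

A per-token vote is the simplest way to aggregate sequence taggers; span-level voting is a common alternative when partial spans must stay consistent.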
With the freedom of communication provided by online social media, hate speech is increasingly produced, leading to online conflicts that affect social life at both the individual and national levels. As a result, hate speech classification is increasingly needed to filter hateful content before it is posted to social networks. This paper focuses on classifying hate speech in social media using multiple deep models, built by integrating recent transformer-based language models such as BERT with neural networks. To improve classification performance, we evaluated several ensemble techniques, including soft voting, maximum value, hard voting, and stacking. We used three publicly available Twitter datasets (Davidson, HatEval2019, OLID) created for identifying offensive language. We merged all of these datasets into a single dataset (the DHO dataset), which is more balanced across labels, to perform multi-label classification. Our experiments were conducted on the Davidson dataset and the DHO corpus. The latter gave the best overall results, especially for the macro F1 score, even though it requires more resources (execution time and memory). The experiments showed good results, especially for the ensemble models, with stacking achieving an F1 score of 97% on the Davidson dataset and the aggregating ensemble achieving 77% on the DHO dataset.
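The hard- and soft-voting schemes mentioned above can be illustrated with a minimal sketch. The labels and probabilities are toy values, not the paper's data, and the three classes stand in for hate / offensive / neither.

```python
from collections import Counter

def hard_vote(label_lists):
    """Majority vote over per-model predicted labels.
    label_lists: one list of per-example labels per model."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*label_lists)]

def soft_vote(prob_lists):
    """Average per-model class probabilities, then take the argmax.
    prob_lists: one list of per-example probability vectors per model."""
    preds = []
    for per_model in zip(*prob_lists):
        n = len(per_model)
        n_classes = len(per_model[0])
        avg = [sum(p[c] for p in per_model) / n for c in range(n_classes)]
        preds.append(avg.index(max(avg)))
    return preds

# Three hypothetical models, two examples, three classes.
labels = [[0, 2], [0, 1], [1, 2]]
probs = [
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]],  # model 1: example 1, example 2
    [[0.5, 0.4, 0.1], [0.1, 0.6, 0.3]],  # model 2
]
print(hard_vote(labels), soft_vote(probs))
```

Soft voting preserves each model's confidence, while hard voting only counts discrete labels; stacking would instead feed these per-model outputs into a trained meta-classifier.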
This paper presents our solutions for the MediaEval 2022 task on DisasterMM. The task is composed of two subtasks, namely (i) Relevance Classification of Twitter Posts (RCTP), and (ii) Location Extraction from Twitter Texts (LETT). The RCTP subtask aims at differentiating flood-related from non-relevant social posts, while LETT is a Named Entity Recognition (NER) task that aims at extracting location information from the text. For RCTP, we proposed four different solutions based on BERT, RoBERTa, DistilBERT, and ALBERT, obtaining F1-scores of 0.7934, 0.7970, 0.7613, and 0.7924, respectively. For LETT, we used three models, namely BERT, RoBERTa, and DistilBERT, obtaining F1-scores of 0.6256, 0.6744, and 0.6723, respectively.
To address the challenging problem of detecting offensive comments/posts, which are often highly informal, unstructured, misspelled, and code-mixed, we introduce two inventive methods in this research paper. Offensive comments/posts on social media platforms can affect an individual, a group, or underage users. To classify comments in two popular Dravidian languages, Tamil and Malayalam, as part of the HASOC-DravidianCodeMix FIRE 2021 shared task, we employed two transformer-based prototypes, which successfully placed in the top 8 for all the tasks. The code for our approach is available for viewing and use.
The importance of social media has surged over the past decades, as it helps people, even in the remotest corners of the world, stay connected. With the advent of technology, digital media has become more relevant and widely used than ever before, and alongside this there has been a resurgence in the circulation of fake news and tweets that demands immediate attention. In this paper, we describe a novel fake news detection system that automatically identifies whether a news item is 'real' or 'fake', as an extension of our work in the CONSTRAINT COVID-19 Fake News Detection in English challenge. We used an ensemble of pre-trained models followed by a statistical feature fusion network, which incorporates various attributes present in news items or tweets, such as the source, username handles, URL domains, and authors, as statistical features. Our proposed framework also provides reliable predictive uncertainty along with proper class-wise output confidence levels for the classification task. We evaluated our results on the COVID-19 fake news dataset and the FakeNewsNet dataset to show the effectiveness of the proposed algorithm in detecting fake news both in short-form content and in news articles. We obtained a best F1 score of 0.9892 on the COVID-19 dataset and an F1 score of 0.9073 on the FakeNewsNet dataset.
This paper describes the models developed by the AILAB-UDINE team for the SMM4H '22 shared task. We explored the limits of transformer-based models on text classification, entity extraction, and entity normalization, tackling Tasks 1, 2, 5, 6, and 10. The main takeaways are the benefit of combining different architectures through ensemble learning, and the great potential of generative models for term normalization.
The widespread use of social media and digital technologies facilitates the dissemination of news and information about events and activities. While positive information is shared, social media is also spreading misinformation and disinformation. Efforts have been made to identify such misleading information, both manually by human experts and with automatic tools. Since a large amount of information containing factual claims is appearing online, manual efforts do not scale well. Therefore, automatically identifying check-worthy claims can be very useful to human experts. In this study, we describe our participation in Subtask-1A: Check-worthiness of tweets (English, Dutch, and Spanish) of the CheckThat! lab at CLEF 2022. We performed standard preprocessing steps and applied different models to determine whether a given text is worthy of fact-checking. We used an oversampling technique to balance the dataset and applied SVM and Random Forest (RF) classifiers with TF-IDF representations. We also used the BERT multilingual (BERT-m) and XLM-RoBERTa-base pre-trained models in our experiments. We used BERT-m for the official submissions, and our systems ranked 3rd, 5th, and 12th for Spanish, Dutch, and English, respectively. In further experiments, our evaluation shows that the transformer models (BERT-m and XLM-RoBERTa-base) outperform SVM and RF for Dutch and English, while a different scenario is observed for Spanish.
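The dataset-balancing step above can be illustrated with simple random duplication of minority-class examples. The abstract does not name the exact oversampling technique, so this is only a stand-in sketch with made-up data.

```python
import random

def random_oversample(texts, labels, seed=0):
    """Duplicate minority-class examples at random until every class
    matches the majority class count. A simple stand-in for the paper's
    unspecified oversampling step; data below is illustrative."""
    rng = random.Random(seed)
    by_class = {}
    for t, y in zip(texts, labels):
        by_class.setdefault(y, []).append(t)
    target = max(len(items) for items in by_class.values())
    out_texts, out_labels = [], []
    for y, items in by_class.items():
        extras = [rng.choice(items) for _ in range(target - len(items))]
        for t in items + extras:
            out_texts.append(t)
            out_labels.append(y)
    return out_texts, out_labels

texts = ["claim a", "claim b", "claim c", "not worthy"]
labels = [1, 1, 1, 0]  # heavily skewed toward check-worthy
bal_texts, bal_labels = random_oversample(texts, labels)
```

Oversampling only rebalances the training signal; evaluation should still use the original, unbalanced test distribution.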
Migraine is a high-prevalence and disabling neurological disorder. However, information on migraine management in real-world settings may be limited in traditional health information sources. In this paper, we (i) verify that there is substantial migraine-related chatter available on social media (Twitter and Reddit), self-reported by migraine sufferers; (ii) develop a platform-independent text classification system for automatically detecting self-reported migraine-related posts, and (iii) conduct analyses of the self-reported posts to assess the utility of social media for studying this problem. We manually annotated 5750 Twitter posts and 302 Reddit posts. Our system achieved an F1 score of 0.90 on Twitter and 0.93 on Reddit. Analysis of information posted by our 'migraine cohort' revealed the presence of a plethora of relevant information about migraine therapies and patient sentiments associated with them. Our study forms the foundation for conducting an in-depth analysis of migraine-related information using social media data.
COVID-19 has spread across the globe, and several vaccines have been developed to counter its surge. To identify the correct sentiments associated with the vaccines in social media posts, we fine-tuned various state-of-the-art pre-trained transformer models on tweets related to COVID-19 vaccines. Specifically, we used the recently introduced state-of-the-art pre-trained transformer models RoBERTa, XLNet, and BERT, as well as the domain-specific transformer models CT-BERT and BERTweet, which are pre-trained on COVID-19 tweets. We further explored the option of text augmentation by oversampling with a Language Model-based Oversampling Technique (LMOTE) to improve the accuracy of these models, specifically for small-sample datasets with an imbalanced class distribution among the positive, negative, and neutral sentiment classes. Our results summarize our findings on the suitability of text oversampling for imbalanced, small-sample datasets used to fine-tune state-of-the-art pre-trained transformer models, and on the utility of domain-specific transformer models for the classification task.
This paper mainly describes the dma submission to the TempoWiC task, which achieves a macro-F1 score of 77.05% and attains the first place in this task. We first explore the impact of different pre-trained language models. Then we adopt data cleaning, data augmentation, and adversarial training strategies to enhance the model generalization and robustness. For further improvement, we integrate POS information and word semantic representation using a Mixture-of-Experts (MoE) approach. The experimental results show that MoE can overcome the feature overuse issue and combine the context, POS, and word semantic features well. Additionally, we use a model ensemble method for the final prediction, which has been proven effective by many research works.
The rapid development of social networks and the ease of Internet accessibility have intensified the proliferation of fake news and rumors on social media sites. During the COVID-19 pandemic, such misleading information aggravated the situation by putting people's physical and mental well-being at risk. To limit the spread of these inaccuracies, identifying fake news on online platforms can be the first step. In this research, the authors implemented five transformer-based models, namely BERT, BERT without LSTM, ALBERT, RoBERTa, and a hybrid of BERT & ALBERT, to detect COVID-19 fraudulent news from the Internet. The COVID-19 fake news dataset was used for training and testing the models. Among all these models, the RoBERTa model performed best, obtaining an F1 score of 0.98 on both the real and fake classes.
Automatic recognition of the emotions expressed in Twitter data has wide-ranging applications. We created a balanced dataset by adding a neutral class to a benchmark dataset consisting of four emotions: fear, sadness, joy, and anger. On this extended dataset, we investigated the use of Support Vector Machines (SVM) and Bidirectional Encoder Representations from Transformers (BERT) for emotion recognition. We propose a novel ensemble model that combines two BERT models with an SVM model. Experiments show that the proposed model achieves a state-of-the-art accuracy of 0.91 for emotion recognition in tweets.
As demand for large corpora increases with the size of current state-of-the-art language models, using web data as the main part of the pre-training corpus for these models has become a ubiquitous practice. This, in turn, has introduced an important challenge for NLP practitioners, as they are now confronted with the task of developing highly optimized models and pipelines for pre-processing large quantities of textual data, which implies effectively classifying and filtering multilingual, heterogeneous, and noisy data at web scale. One of the main components of this pre-processing step for the pre-training corpora of large language models is the removal of adult and harmful content. In this paper we explore different methods for detecting adult and harmful content in multilingual heterogeneous web data. We first show how traditional methods in harmful content detection, which seemingly perform quite well on small and specialized datasets, quickly break down when confronted with heterogeneous noisy web data. We then resort to a perplexity-based approach, but with a twist: instead of using a so-called "clean" corpus to train a small language model and then using perplexity to select the documents with low perplexity, i.e., the documents that most resemble this "clean" corpus, we train solely on adult and harmful textual data and then select the documents having a perplexity value above a given threshold. This approach effectively clusters our documents into two distinct groups, which greatly facilitates the choice of the perplexity threshold and also allows us to obtain higher precision than the traditional classification methods for detecting adult and harmful content.
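The inverted perplexity filter described above can be sketched with a toy add-one-smoothed unigram language model trained only on harmful text: documents scoring *below* the threshold resemble the harmful corpus and are dropped. A real system would use a proper n-gram or neural LM; the corpus, tokens, and threshold here are illustrative.

```python
import math
from collections import Counter

class UnigramLM:
    """Tiny add-one-smoothed unigram LM; a toy stand-in for the small
    language model trained on adult/harmful text in the approach above."""
    def __init__(self, tokens):
        self.counts = Counter(tokens)
        self.total = len(tokens)
        self.vocab = len(self.counts) + 1  # +1 slot for unseen tokens

    def prob(self, token):
        return (self.counts[token] + 1) / (self.total + self.vocab)

    def perplexity(self, tokens):
        log_p = sum(math.log(self.prob(t)) for t in tokens)
        return math.exp(-log_p / len(tokens))

# Illustrative "harmful" training corpus and candidate web documents.
harmful_lm = UnigramLM("bad words bad content bad stuff".split())
docs = ["bad bad content", "the cat sat on the mat"]
# Keep only documents that do NOT resemble the harmful corpus,
# i.e., those with perplexity above the (illustrative) threshold.
kept = [d for d in docs if harmful_lm.perplexity(d.split()) > 5.0]
```

Training on the harmful side inverts the usual "keep low perplexity" recipe, which is what pushes the two document populations apart and makes the threshold easy to place.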
The widespread dissemination of offensive content such as hate speech poses a growing societal problem. AI tools are necessary to support the moderation process on online platforms. Evaluating these identification tools requires continuous experimentation with datasets in different languages. The HASOC track (Hate Speech and Offensive Content Identification) is dedicated to developing benchmark data for this purpose. This paper presents the HASOC subtrack for English, Hindi, and Marathi. The datasets were assembled from Twitter. This subtrack has two subtasks. Task A is a binary classification problem (hate and not offensive) offered for all three languages. Task B is a fine-grained classification problem with three classes (hate speech, offensive, and profane) offered for English and Hindi. Overall, 652 runs were submitted. The performance of the best classification algorithms for Task A is an F1 measure of 0.91, 0.78, and 0.83 for Marathi, Hindi, and English, respectively. This overview presents the tasks and the data development as well as detailed results. The systems submitted to the competition applied a variety of techniques. The best-performing algorithms were mainly variants of transformer architectures.
Automated offensive language detection is essential in combating the spread of hate speech, particularly in social media. This paper describes our work on offensive language identification in the low-resource Indic language Marathi. The problem is formulated as a text classification task to identify a tweet as offensive or non-offensive. We evaluate different mono-lingual and multi-lingual BERT models on this classification task, focusing on BERT models pre-trained on social media datasets. We compare the performance of MuRIL, MahaTweetBERT, MahaTweetBERT-Hateful, and MahaBERT on the HASOC 2022 test set. We also explore external data augmentation from other existing Marathi hate speech corpora, HASOC 2021 and L3Cube-MahaHate. MahaTweetBERT, a BERT model pre-trained on Marathi tweets, when fine-tuned on the combined dataset (HASOC 2021 + HASOC 2022 + MahaHate), outperforms all other models with an F1 score of 98.43 on the HASOC 2022 test set. With this, we also provide a new state-of-the-art result on the HASOC 2022 / MOLD v2 test set.
Hope is characterized as openness of spirit toward the future, a desire, expectation, and wish for something to happen or to be true that remarkably affects a person's state of mind, emotions, behaviors, and decisions. Hope is usually associated with concepts of desired expectations and possibility/probability concerning the future. Despite its importance, hope has rarely been studied as a social media analysis task. This paper presents a hope speech dataset that classifies each tweet first into "Hope" and "Not Hope", then into three fine-grained hope categories: "Generalized Hope", "Realistic Hope", and "Unrealistic Hope" (along with "Not Hope"). English tweets from the first half of 2022 were collected to build this dataset. Furthermore, we describe our annotation process and guidelines in detail and discuss the challenges of classifying hope and the limitations of the existing hope speech detection corpora. In addition, we report several baselines based on different learning approaches, such as traditional machine learning, deep learning, and transformers, to benchmark our dataset. We evaluated our baselines using weighted-averaged and macro-averaged F1 scores. Observations show that a strict process for annotator selection and detailed annotation guidelines enhanced the dataset's quality. This strict annotation process resulted in promising performance for simple machine learning classifiers with only bi-grams; however, the binary and multiclass hope speech detection results reveal that contextual embedding models achieve higher performance on this dataset.