Online hate speech detection has become important with the growth of digital devices, but resources in languages other than English are extremely limited. We introduce K-MHaS, a new multi-label dataset for hate speech detection that effectively handles Korean language patterns. The dataset consists of 109k utterances from news comments and provides multi-label classification with 1 to 4 labels, handling subjectivity and intersectionality. We evaluate strong baselines on K-MHaS. KR-BERT with a sub-character tokenizer performs best, recognizing decomposed characters in each hate speech class.
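A minimal sketch of the multi-label formulation described above, using the Hugging Face transformers API: each comment receives an independent sigmoid score per class, so 1 to 4 labels can fire at once. The label set and the KR-BERT checkpoint name are assumptions for illustration, not the authors' released artifacts.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical label set; K-MHaS defines its own hate speech classes.
LABELS = ["politics", "origin", "physical", "age", "gender", "religion", "profanity", "not_hate"]

# The KR-BERT checkpoint name is an assumption; substitute the checkpoint you use.
MODEL = "snunlp/KR-BERT-char16424"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # BCE loss, one sigmoid per label
)

def predict_labels(text: str, threshold: float = 0.5) -> list[str]:
    """Return every label whose sigmoid score clears the threshold (1 to N labels)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)
    return [label for label, p in zip(LABELS, probs) if p >= threshold]
```

In practice the classification head would be fine-tuned on K-MHaS before prediction; the point here is the `problem_type="multi_label_classification"` switch, which replaces softmax cross-entropy with per-label binary cross-entropy.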
Recent directions for offensive language detection are hierarchical modeling, identifying the type and the target of offensive language, and interpretability with offensive span annotation and prediction. These improvements are focused on English and do not transfer well to other languages because of cultural and linguistic differences. In this paper, we present the Korean Offensive Language Dataset (KOLD) comprising 40,429 comments, which are annotated hierarchically with the type and the target of offensive language, accompanied by annotations of the corresponding text spans. We collect the comments from the NAVER news and YouTube platforms and provide the titles of the articles and videos as the context information for the annotation process. We use these annotated comments as training data for Korean BERT and RoBERTa models and find that they are effective at offensiveness detection, target classification, and target span detection while having room for improvement for target group classification and offensive span detection. We discover that the target group distribution differs drastically from the existing English datasets, and observe that providing the context information improves the model performance in offensiveness detection (+0.3), target classification (+1.5), and target group classification (+13.1). We publicly release the dataset and baseline models.
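Offensive span detection of the kind annotated in KOLD is commonly cast as token classification with BIO tags. The sketch below shows that framing under an assumed tag inventory and an assumed public Korean checkpoint; it is one plausible reading, not the authors' training code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# BIO tags for offensive spans; this tag inventory is an illustrative assumption.
TAGS = ["O", "B-OFF", "I-OFF"]

# Any Korean encoder with a fast tokenizer works; "klue/bert-base" is a common
# public checkpoint, not necessarily the one used in the paper.
MODEL = "klue/bert-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=len(TAGS))

def offensive_spans(text: str) -> list[str]:
    """Decode contiguous B-OFF/I-OFF tag runs back into surface substrings."""
    enc = tokenizer(text, return_tensors="pt", return_offsets_mapping=True, truncation=True)
    offsets = enc.pop("offset_mapping").squeeze(0).tolist()
    with torch.no_grad():
        pred = model(**enc).logits.argmax(-1).squeeze(0).tolist()
    spans, start, end = [], None, None
    for (s, e), tag_id in zip(offsets, pred):
        if s == e:  # special tokens such as [CLS]/[SEP] have empty offsets
            continue
        tag = TAGS[tag_id]
        if tag == "B-OFF" or (tag == "I-OFF" and start is None):
            if start is not None:
                spans.append(text[start:end])
            start, end = s, e
        elif tag == "I-OFF":
            end = e
        else:  # an "O" tag closes any open span
            if start is not None:
                spans.append(text[start:end])
                start = None
    if start is not None:
        spans.append(text[start:end])
    return spans
```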
The growth of abusive content on social media platforms has an increasingly negative impact on online users. Fear, dislike, discomfort, or mistrust of lesbian, gay, transgender, or bisexual people is defined as homophobia/transphobia. Homophobic/transphobic speech is a type of offensive language that can be summarized as hate speech directed at LGBT+ people, and it has attracted growing interest in recent years. Online homophobia/transphobia is a severe societal problem that can make online platforms toxic and unwelcoming to LGBT+ people while also working against equality, diversity, and inclusion. We provide a new hierarchical taxonomy for online homophobia and transphobia, as well as an expert-labelled dataset that will allow homophobic/transphobic content to be identified automatically. We trained our annotators and supplied them with comprehensive annotation rules because this is a sensitive issue, and we previously found that untrained crowdsourced annotators struggle to diagnose homophobia due to cultural and other biases. The dataset contains 15,141 annotated multilingual comments. This paper describes the process of building the dataset, a qualitative analysis of the data, and the inter-annotator agreement. In addition, we create baseline models for the dataset. To the best of our knowledge, our dataset is the first such dataset created. Warning: this paper contains explicit statements of homophobia, transphobia, and stereotypes which may be distressing to some readers.
Hope is characterized as openness of spirit toward the future, a desire, expectation, and wish for something to happen or to be true that remarkably affects a person's state of mind, emotions, behaviors, and decisions. Hope is usually associated with concepts of desired expectations and possibility/probability concerning the future. Despite its importance, hope has rarely been studied as a social media analysis task. This paper presents a hope speech dataset that classifies each tweet first into "Hope" and "Not Hope", then into three fine-grained hope categories: "Generalized Hope", "Realistic Hope", and "Unrealistic Hope" (along with "Not Hope"). English tweets in the first half of 2022 were collected to build this dataset. Furthermore, we describe our annotation process and guidelines in detail and discuss the challenges of classifying hope and the limitations of the existing hope speech detection corpora. In addition, we report several baselines based on different learning approaches, such as traditional machine learning, deep learning, and transformers, to benchmark our dataset. We evaluated our baselines using weighted-averaged and macro-averaged F1-scores. Observations show that a strict process for annotator selection and detailed annotation guidelines enhanced the dataset's quality. This strict annotation process resulted in promising performance for simple machine learning classifiers with only bi-grams; however, binary and multiclass hope speech detection results reveal that contextual embedding models have higher performance in this dataset.
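The two averaging schemes used for evaluation above treat class imbalance differently; a small worked example with scikit-learn and made-up labels:

```python
from sklearn.metrics import f1_score

# Toy gold/predicted labels over the four classes described above (made-up data).
gold = ["Not Hope", "Not Hope", "Generalized Hope", "Realistic Hope", "Unrealistic Hope", "Not Hope"]
pred = ["Not Hope", "Generalized Hope", "Generalized Hope", "Realistic Hope", "Not Hope", "Not Hope"]

# Weighted-averaged F1: per-class F1 weighted by class frequency, so the
# majority "Not Hope" class dominates the score.
print(f1_score(gold, pred, average="weighted"))

# Macro-averaged F1: unweighted mean over classes, so the rare hope
# categories count as much as the majority class.
print(f1_score(gold, pred, average="macro"))
```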
Due to the severity of offensive and hateful comments on social media in Brazil, and the lack of research in Portuguese, this paper provides the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection. The HateBR corpus was collected from the comment section of Brazilian politicians' accounts on Instagram and manually annotated by specialists, reaching a high inter-annotator agreement. The corpus consists of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level classification (highly, moderately, and slightly offensive), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). We also implemented baseline experiments for offensive language and hate speech detection and compared them with a literature baseline. Results show that the baseline experiments on our corpus outperform the current state-of-the-art for the Portuguese language.
Hate speech detection models are typically evaluated on held-out test sets. However, this risks painting an incomplete and potentially misleading picture of model performance because of increasingly well-documented systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, recent research has introduced functional tests for hate speech detection models. However, these tests currently only exist for English-language content, which means they cannot support the development of more effective models in the other languages spoken by billions of people worldwide. To help address this issue, we introduce Multilingual HateCheck (MHC), a suite of functional tests for multilingual hate speech detection models. MHC covers 34 functionalities across ten languages, more than any other hate speech dataset. To illustrate MHC's utility, we train and test a high-performing multilingual hate speech detection model and reveal critical model weaknesses for monolingual and cross-lingual applications.
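Functional tests of this kind pair template-generated inputs with the label any competent model should assign. A minimal harness is sketched below; the functionality names and test strings are invented stand-ins, not MHC's actual cases.

```python
# Minimal functional-test harness in the spirit of MHC; the functionality
# names and test strings below are invented examples, not MHC's actual cases.
TEST_CASES = [
    # (functionality, input text, expected label)
    ("slur_hate", "Typical [SLUR] behaviour.", "hateful"),
    ("negated_hate", "I would never say [GROUP] are disgusting.", "non-hateful"),
    ("counter_speech", "Saying '[GROUP] are vermin' is unacceptable.", "non-hateful"),
]

def run_functional_tests(classify) -> dict[str, float]:
    """classify: any callable mapping text -> label. Returns per-functionality accuracy."""
    results: dict[str, list[bool]] = {}
    for functionality, text, expected in TEST_CASES:
        results.setdefault(functionality, []).append(classify(text) == expected)
    return {f: sum(oks) / len(oks) for f, oks in results.items()}

# Usage: plug in a real model, e.g. a Hugging Face pipeline wrapped to return a label.
# scores = run_functional_tests(lambda t: my_pipeline(t)[0]["label"])
```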
Numerous methods have been developed in recent years to monitor the spread of negativity by removing vulgar, offensive, and hostile comments from social media platforms. However, relatively few studies focus on embracing positivity and reinforcing supportive and reassuring content in online forums. We therefore propose the creation of KanHope, an English-Kannada hope speech dataset, and compare several experiments to benchmark the dataset. The dataset consists of 6,176 user-generated comments in code-mixed Kannada, scraped from YouTube and manually annotated as bearing hope speech or not. In addition, we introduce DC-BERT4HOPE, a dual-channel model that uses the English translation of KanHope for additional training to improve hope speech detection. This approach achieves a weighted F1-score of 0.756, bettering other models. KanHope thus aims to instigate research in Kannada while encouraging researchers to take a pragmatic approach to detecting encouraging, positive, and supportive online content.
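The abstract names DC-BERT4HOPE without giving code; below is one plausible reading of a dual-channel design, concatenating pooled encodings of the code-mixed comment and its English translation. The encoder checkpoints are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class DualChannelClassifier(nn.Module):
    """Sketch of a dual-channel model: one encoder per channel, pooled
    outputs concatenated and fed to a linear head. Checkpoints are assumptions."""

    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.code_mixed_encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
        self.english_encoder = AutoModel.from_pretrained("bert-base-uncased")
        hidden = (self.code_mixed_encoder.config.hidden_size
                  + self.english_encoder.config.hidden_size)
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, code_mixed_inputs: dict, english_inputs: dict) -> torch.Tensor:
        # Use the [CLS] position of each last hidden state as the channel summary.
        cm = self.code_mixed_encoder(**code_mixed_inputs).last_hidden_state[:, 0]
        en = self.english_encoder(**english_inputs).last_hidden_state[:, 0]
        return self.head(torch.cat([cm, en], dim=-1))

# Usage: tokenize the Kannada comment and its English translation separately,
# then pass both batches: logits = model(code_mixed_batch, english_batch)
```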
This paper presents work on detecting misogyny in the comments of a large Austrian German language newspaper forum. We describe the creation of a corpus of 6600 comments which were annotated with 5 levels of misogyny. The forum moderators were involved as experts in the creation of the annotation guidelines and the annotation of the comments. We also describe the results of training transformer-based classification models for both binarized and original label classification of that corpus.
Building a benchmark dataset for hate speech detection presents several challenges. First, because hate speech is relatively rare, random sampling of tweets for annotation is highly inefficient at finding hateful content. To address this, prior datasets often include only tweets matching known "hate words". However, restricting data to a predefined vocabulary may exclude parts of the real-world phenomenon we seek to model. A second challenge is that definitions of hate speech tend to be highly variable and subjective. Annotators with diverse prior notions of hate speech may not only disagree with one another but also struggle to conform to specified labeling guidelines. Our key insight is that the rarity and subjectivity of hate speech are akin to that of relevance in information retrieval (IR). This connection suggests that well-established methodologies for creating IR test collections can also be usefully applied to create better benchmark datasets for hate speech. To intelligently and efficiently select which tweets to annotate, we apply the standard IR techniques of pooling and active learning. To improve both the consistency and the value of annotations, we apply task decomposition and annotator rationale techniques. We share a new benchmark dataset for hate speech detection on Twitter that provides broader coverage of hate than previous datasets. We also show a dramatic drop in the accuracy of existing detection models when tested on these broader forms of hate. Annotator rationales not only justify labeling decisions but also open opportunities for future work on dual supervision and explanation generation in modeling. Further details of our approach can be found in the supplementary material.
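Of the IR techniques named above, active learning is the easiest to show compactly. Below is the standard uncertainty-sampling recipe with a simple TF-IDF model standing in for the scorer; it illustrates the general technique, not the authors' exact procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(labeled_texts, labels, unlabeled_texts, batch_size=100):
    """One active-learning round: train on the labeled pool, score the unlabeled
    pool, and return indices of the tweets the model is least sure about."""
    vec = TfidfVectorizer().fit(labeled_texts + unlabeled_texts)
    clf = LogisticRegression(max_iter=1000).fit(vec.transform(labeled_texts), labels)
    probs = clf.predict_proba(vec.transform(unlabeled_texts))[:, 1]
    uncertainty = -np.abs(probs - 0.5)            # highest when prob is near 0.5
    return np.argsort(uncertainty)[-batch_size:]  # most uncertain, for annotation

# Each round: annotate the returned tweets, move them into the labeled set, repeat.
```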
The shift of public debate to the digital sphere has been accompanied by a rise in online hate speech. While many promising approaches for hate speech classification have been proposed, studies often focus only on a single language, usually English, and do not address three key concerns: post-deployment performance, classifier maintenance and infrastructural limitations. In this paper, we introduce a new human-in-the-loop BERT-based hate speech classification pipeline and trace its development from initial data collection and annotation all the way to post-deployment. Our classifier, trained using data from our original corpus of over 422k examples, is specifically developed for the inherently multilingual setting of Switzerland and achieves an F1 score of 80.5, outperforming the currently best-performing BERT-based multilingual classifier by 5.8 F1 points in German and 3.6 F1 points in French. Our systematic evaluations over a 12-month period further highlight the vital importance of continuous, human-in-the-loop classifier maintenance to ensure robust hate speech classification post-deployment.
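The continuous maintenance the paper argues for can be pictured as a simple loop; in the schematic below every helper and the confidence threshold are placeholders for platform-specific infrastructure, not an API from the paper.

```python
# Schematic human-in-the-loop maintenance cycle; every helper below is a
# placeholder for platform-specific infrastructure, not an API from the paper.
def maintenance_cycle(model, production_stream, review_queue, train_set):
    for comment in production_stream:
        label, confidence = model.predict(comment)
        if confidence < 0.7:                  # threshold is an illustrative choice
            review_queue.put(comment)         # route uncertain cases to moderators
    corrections = review_queue.collect_human_labels()
    train_set.extend(corrections)             # fold corrections back into training data
    return model.retrain(train_set)           # periodic retraining guards against drift
```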
Text classification is a typical natural language processing or computational linguistics task with a variety of interesting applications. As the number of users on social media platforms increases, the accelerating growth of data has fostered emerging research on social media text classification (SMTC), or social media text mining. Compared to English, Vietnamese is a low-resource language that has not yet received focused and thorough exploitation. Inspired by the success of GLUE, we introduce the Social Media Text Classification Evaluation (SMTCE) benchmark, a collection of datasets and models for various SMTC tasks. With the proposed benchmark, we implement and analyze the effectiveness of various multilingual BERT-based models (mBERT, XLM-R, and DistilmBERT) and monolingual BERT-based models (PhoBERT, viBERT, vELECTRA, and viBERT4news) on the SMTCE benchmark. The monolingual models outperform the multilingual models and achieve state-of-the-art results on all text classification tasks. The benchmark provides an objective assessment of multilingual and monolingual models, which will benefit future research on BERTology in the Vietnamese language.
The widespread distribution of offensive content such as hate speech constitutes a growing societal problem. AI tools are necessary to support the moderation process at online platforms. Evaluating these identification tools requires continuous experimentation with datasets in different languages. The HASOC track (Hate Speech and Offensive Content Identification) is dedicated to developing benchmark data for this purpose. This paper presents the HASOC subtrack for English, Hindi, and Marathi. The datasets were assembled from Twitter. This subtrack has two subtasks. Task A is a binary classification problem (hate and not offensive) offered for all three languages. Task B is a fine-grained classification problem for three classes (hate speech, offensive, and profane) offered for English and Hindi. Overall, 652 runs were submitted by 65 teams. The best classification algorithms for Task A achieved F1 measures of 0.91, 0.78, and 0.83 for Marathi, Hindi, and English, respectively. This overview presents the tasks and the data development as well as detailed results. The systems submitted to the competition applied a variety of techniques. The best performing algorithms were mainly variants of transformer architectures.
In this paper, we discuss the development of a multilingual dataset annotated with a hierarchical, fine-grained tagset marking different types of aggression and the "context" in which they occur. The context, here, is defined by the conversational thread in which a specific comment occurs and the "type" of discursive role that the comment performs with respect to previous comments. The initial dataset discussed here (and made available as part of the ComMA@ICON shared task) consists of 15,000 annotated comments in four languages - Meitei, Bangla, Hindi, and Indian English - collected from various social media platforms such as YouTube, Facebook, Twitter, and Telegram. As is usual on social media sites, a large number of these comments are multilingual, mostly code-mixed with English. The paper gives a detailed description of the tagset used for annotation and of the process of developing a multi-label scheme that can be used to mark comments with various kinds of aggression and bias, including gender bias, religious intolerance (called communal bias in the tagset), class/caste bias, and ethnic/racial bias. We also define and discuss the tags used to mark the discursive roles performed through the comments, such as attack, defend, and so on. We also present a statistical analysis of the dataset and the results of our baseline experiments on developing an automatic aggression identification system using the dataset.
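One way to picture the hierarchical, multi-label scheme described above is as a structured record per comment; in the sketch below the field names paraphrase the tagset and are not the official label strings.

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedComment:
    """Illustrative record for the hierarchical scheme; field names paraphrase
    the tagset described above, not the official label strings."""
    text: str
    language: str                      # e.g. "Meitei", "Bangla", "Hindi", "Indian English"
    thread_id: str                     # context: the conversational thread
    aggression_level: str              # hierarchical aggression type
    discursive_role: str               # e.g. "attack", "defend"
    biases: list[str] = field(default_factory=list)  # gender, communal, caste, ethnic...

example = AnnotatedComment(
    text="...", language="Hindi", thread_id="yt-001",
    aggression_level="covertly aggressive", discursive_role="attack",
    biases=["gender bias"],
)
```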
Social media platforms allow users to freely share their opinions about issues or anything they feel like. However, they also make it easier to spread hate and abusive content. The Fulani ethnic group has been the victim of this unfortunate phenomenon. This paper introduces the HERDPhobia - the first annotated hate speech dataset on Fulani herders in Nigeria - in three languages: English, Nigerian-Pidgin, and Hausa. We present a benchmark experiment using pre-trained language models to classify the tweets as either hateful or non-hateful. Our experiment shows that the XLM-T model provides better performance with 99.83% weighted F1. We released the dataset at https://github.com/hausanlp/HERDPhobia for further research.
Since a lexicon-based approach is scientifically more elegant, making the solution components explainable and generalizing more easily to other applications, this paper provides a new approach for offensive language and hate speech detection on social media. Our approach embodies a lexicon of implicit and explicit offensive and swearing expressions annotated with contextual information. Due to the severity of abusive comments on social media in Brazil, and the lack of research in Portuguese, Brazilian Portuguese is the language used to validate the models. Nevertheless, our method may be applied to any other language. The conducted experiments show the effectiveness of the proposed approach, outperforming the current baseline methods for the Portuguese language.
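A toy version of a lexicon lookup with contextual information, in the spirit of the approach above; the entries, the person-marker heuristic, and the window size are all invented for illustration.

```python
# Toy lexicon-based detector; entries and the context rule are invented.
# Each term carries a flag saying whether it is offensive regardless of context.
LEXICON = {
    "idiota": {"explicit": True},            # explicitly offensive term
    "lixo": {"explicit": False},             # "trash": offensive only aimed at a person
}
PERSON_MARKERS = {"voce", "ele", "ela"}      # crude proxy for a personal target

def is_offensive(comment: str) -> bool:
    tokens = comment.lower().split()
    for i, tok in enumerate(tokens):
        entry = LEXICON.get(tok)
        if entry is None:
            continue
        if entry["explicit"]:
            return True
        # Implicit entries fire only when a person marker appears nearby.
        window = tokens[max(0, i - 3): i + 4]
        if any(w in PERSON_MARKERS for w in window):
            return True
    return False

print(is_offensive("voce e um lixo"))   # True: implicit term with a personal target
print(is_offensive("saco de lixo"))     # False: same word, no personal target
```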
In light of unprecedented increases in the popularity of the internet and social media, comment moderation has never been a more relevant task. Semi-automated comment moderation systems greatly aid human moderators by either automatically classifying the examples or allowing the moderators to prioritize which comments to consider first. However, the concept of inappropriate content is often subjective, and such content can be conveyed in many subtle and indirect ways. In this work, we propose CoRAL -- a linguistically and culturally aware Croatian Abusive dataset covering phenomena of implicitness and reliance on local and global context. We show experimentally that current models degrade when comments are not explicit and further degrade when language skill and context knowledge are required to interpret the comment.
As natural language processing (NLP) for gender bias becomes a significant interdisciplinary topic, the prevalent data-driven techniques such as large-scale language models suffer from data inadequacy and biased corpora, especially for languages with insufficient resources such as Chinese. To this end, we propose a Chinese cOrpus foR Gender bIas Probing and Mitigation, CORGI-PM, which contains 32.9k sentences with high-quality labels derived by following an annotation scheme specifically developed for gender bias in the Chinese context. Moreover, we address three challenges for automatic textual gender bias mitigation, which require the models to detect, classify, and mitigate textual gender bias. We also conduct experiments with state-of-the-art language models to provide baselines. To the best of our knowledge, CORGI-PM is the first sentence-level Chinese corpus for gender bias probing and mitigation.
In recent years, we have seen exponential growth in applications that process sensitive personal information, including dialogue systems. This has brought to light extremely important issues concerning the protection of personal data in virtual environments. First, a performant model should be able to distinguish sentences with sensitive content from neutral ones. Second, it should be able to identify which categories of personal data they contain, so that a different privacy treatment can be applied to each category. In the literature, works on automatic sensitive data identification do exist, but they are usually conducted on different domains or languages without a common benchmark. To fill this gap, in this work we introduce SPeDaC, a new annotated benchmark for the identification of sensitive personal data categories. Furthermore, we provide an extensive evaluation of the dataset, conducted using different baselines and a classifier based on RoBERTa, a neural architecture that achieves strong performance both in detecting sensitive sentences and in classifying personal data categories.
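The two steps described above naturally compose into a cascade: a binary sensitive/neutral filter followed by a category classifier. A sketch with the Hugging Face pipeline API, where both checkpoint paths are placeholders for fine-tuned models:

```python
from transformers import pipeline

# Two-stage cascade; both checkpoint names are placeholders for fine-tuned
# models, not artifacts released with the paper.
sensitive_filter = pipeline("text-classification", model="path/to/binary-sensitivity-model")
category_classifier = pipeline("text-classification", model="path/to/category-model")

def classify_sentence(sentence: str) -> str:
    step1 = sensitive_filter(sentence)[0]
    if step1["label"] == "NEUTRAL":          # label names depend on the fine-tuned head
        return "neutral"
    # Only sentences flagged as sensitive reach the category step, so each
    # personal-data category can receive its own privacy treatment downstream.
    return category_classifier(sentence)[0]["label"]
```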
The performance of hate speech detection models depends on the datasets on which the models are trained. Existing datasets are mostly prepared with a limited number of instances or hate domains that define the hate topics. This hinders large-scale analysis of, and transfer learning across, hate domains. In this study, we construct large-scale tweet datasets for hate speech detection in English and a low-resource language, Turkish, each consisting of 100k tweets per label. Our datasets are designed to have equal numbers of tweets distributed over five domains. Experimental results supported by statistical tests show that transformer-based language models outperform traditional bag-of-words and neural models by at least 5% in English and 10% in Turkish for large-scale hate speech detection. The performance also scales to different training sizes: 98% of the English performance and 97% of the Turkish performance are recovered when only 20% of the training instances are used. We further examine the generalization ability of cross-domain transfer between hate domains. We show that, on average, 96% of target-domain performance is recovered by other domains in English, and 92% in Turkish. Gender and religion generalize more successfully to other domains, while sports fails the most.
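The cross-domain recovery figures above follow from a train-on-one-domain, test-on-another protocol; a generic sketch of that measurement, with the training and evaluation routines left abstract and the domain names assumed from those mentioned in the abstract:

```python
# Generic cross-domain transfer measurement; `train` and `evaluate` stand in
# for any model fit/score routines, and `data` maps domain -> train/test splits.
DOMAINS = ["religion", "gender", "race", "politics", "sports"]  # assumed list

def transfer_matrix(data, train, evaluate):
    """recovery[src][tgt]: score of a src-trained model on tgt, as a fraction
    of the in-domain score on tgt (the 'performance recovered' above)."""
    in_domain = {d: evaluate(train(data[d]["train"]), data[d]["test"]) for d in DOMAINS}
    recovery = {}
    for src in DOMAINS:
        model = train(data[src]["train"])
        recovery[src] = {
            tgt: evaluate(model, data[tgt]["test"]) / in_domain[tgt]
            for tgt in DOMAINS if tgt != src
        }
    return recovery
```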
With the growing influence of social media platforms, the effects of their misuse become more and more impactful. The importance of automatic detection of threatening and abusive language can hardly be overestimated. However, most existing studies and state-of-the-art methods target English, with limited work on low-resource languages. In this paper, we present two shared tasks on abusive and threatening language detection for Urdu, a language with more than 170 million speakers worldwide. Both are posed as binary classification tasks in which participating systems are required to classify tweets in Urdu into two classes, namely: (i) abusive and non-abusive for the first task, and (ii) threatening and non-threatening for the second. We present two manually annotated datasets containing tweets labelled as (i) abusive and non-abusive, and (ii) threatening and non-threatening. The abusive dataset contains 2,400 annotated tweets in the train part and 1,100 annotated tweets in the test part. The threatening dataset contains 6,000 annotated tweets in the train part and 3,950 annotated tweets in the test part. We also provide logistic regression and BERT-based baseline classifiers for both tasks. In this shared task, 21 teams from six countries registered for participation (India, Pakistan, China, Malaysia, United Arab Emirates, and Taiwan); 10 teams submitted runs for Subtask A, which is abusive language detection, 9 teams submitted runs for Subtask B, which is threatening language detection, and seven teams submitted technical reports. The best performing system achieved an F1-score of 0.880 for Subtask A and 0.545 for Subtask B. For both subtasks, m-BERT based transformer models showed the best performance.
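A minimal version of a logistic-regression baseline like the one mentioned above, built as a TF-IDF pipeline; the character n-gram features and hyperparameters are assumptions, not the organizers' released configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Character n-grams are a common choice for Urdu script; this exact setup is
# an assumption, not the organizers' released configuration.
baseline = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(max_iter=1000),
)

# train_texts / train_labels etc. would come from the shared-task data files.
# baseline.fit(train_texts, train_labels)
# print(f1_score(test_labels, baseline.predict(test_texts), average="macro"))
```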