Warning: this paper contains content that may be offensive or upsetting. In the current context where online platforms have been effectively weaponized in a variety of geo-political events and social issues, Internet memes make fair content moderation at scale even more difficult. Existing work on meme classification and tracking has focused on black-box methods that do not explicitly consider the semantics of the memes or the context of their creation. In this paper, we pursue a modular and explainable architecture for Internet meme understanding. We design and implement multimodal classification methods that perform example- and prototype-based reasoning over training cases, while leveraging both textual and visual SOTA models to represent the individual cases. We study the relevance of our modular and explainable models in detecting harmful memes on two existing tasks: Hate Speech Detection and Misogyny Classification. We compare the performance between example- and prototype-based methods, and between text, vision, and multimodal models, across different categories of harmfulness (e.g., stereotype and objectification). We devise a user-friendly interface that facilitates the comparative analysis of examples retrieved by all of our models for any given meme, informing the community about the strengths and limitations of these explainable methods.
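The example-based reasoning described above can be sketched as nearest-neighbor retrieval over embedded training cases, where the retrieved cases double as the explanation. A minimal illustration, assuming memes have already been embedded into a shared multimodal vector space (the vectors, labels, and `knn_explain` helper are illustrative, not the paper's implementation):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def knn_explain(query, training_cases, k=3):
    """Return the majority label among the k most similar training cases;
    the retrieved cases themselves serve as the explanation
    ("this meme was flagged because it resembles these")."""
    ranked = sorted(training_cases, key=lambda c: cosine(query, c["vec"]), reverse=True)
    top = ranked[:k]
    labels = [c["label"] for c in top]
    prediction = max(set(labels), key=labels.count)
    return prediction, top

# Toy multimodal embeddings (in practice: concatenated text + vision features).
cases = [
    {"vec": [0.9, 0.1, 0.0], "label": "harmful"},
    {"vec": [0.8, 0.2, 0.1], "label": "harmful"},
    {"vec": [0.1, 0.9, 0.2], "label": "benign"},
    {"vec": [0.0, 0.8, 0.3], "label": "benign"},
]
pred, evidence = knn_explain([0.85, 0.15, 0.05], cases, k=3)
```

Prototype-based variants work the same way, except that each class is summarized by a small set of learned prototype vectors instead of the full training set.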
Warning: this paper contains content that may be offensive or upsetting. Considering the large amount of content created online by the minute, slang-aware automatic tools are critically needed to promote social good, and assist policymakers and moderators in restricting the spread of offensive language, abuse, and hate speech. Despite the success of large language models and the spontaneous emergence of slang dictionaries, it is unclear how far their combination goes in terms of slang understanding for downstream social good tasks. In this paper, we provide a framework to study different combinations of representation learning models and knowledge resources for a variety of downstream tasks that rely on slang understanding. Our experiments show the superiority of models that have been pre-trained on social media data, while the impact of dictionaries is positive only for static word embeddings. Our error analysis identifies core challenges for slang representation learning, including out-of-vocabulary words, polysemy, variance, and annotation disagreements, which can be traced to characteristics of slang as a quickly evolving and highly subjective language.
In recent years, the Web has witnessed a proliferation of harmful content such as fake news, propaganda, misinformation, and disinformation. While initially this was mostly textual, over time images and videos gained popularity, as they are easier to consume, attract more attention, and spread more widely than text. As a result, researchers started leveraging different modalities and their combinations to tackle harmful multimodal content online. In this study, we offer a survey of the state of the art on multimodal disinformation detection, covering various combinations of modalities: text, images, speech, video, social media network structure, and temporal information. Moreover, while some studies have focused on factuality, others have investigated the harmfulness of the content. Although these two components of the definition of disinformation, (i) factuality and (ii) harmfulness, are equally important, they are typically studied in isolation. We therefore argue for tackling disinformation detection by taking into account multiple modalities as well as both factuality and harmfulness, within the same framework. Finally, we discuss current challenges and future research directions.
Understanding attitudes expressed in texts, also known as stance detection, plays an important role in systems for detecting false information online, be it misinformation (unintentionally false) or disinformation (intentionally false, spread with malicious intent). Stance detection has been framed in different ways in the literature, including (a) as a component of fact-checking, rumour detection, and detecting previously fact-checked claims, or (b) as a task in its own right; here, we look at both. While there have been prominent surveys positioning stance detection with respect to other related tasks such as argumentation mining and sentiment analysis, there has been no survey of the relationship between stance detection and mis- and disinformation detection. Here, we aim to bridge this gap. In particular, we review and analyse existing work in this area, with mis- and disinformation in focus, and then we discuss lessons learnt and future challenges.
The dissemination of hateful memes online has adverse effects on social media platforms and the real world. Detecting hateful memes is challenging, one of the reasons being the evolutionary nature of memes; new hateful memes can emerge by fusing hateful connotations with other cultural ideas or symbols. In this paper, we propose a framework that leverages multimodal contrastive learning models, in particular OpenAI's CLIP, to identify targets of hateful content and systematically investigate the evolution of hateful memes. We find that semantic regularities exist in CLIP-generated embeddings that describe semantic relationships within the same modality (images) or across modalities (images and text). Leveraging this property, we study how hateful memes are created by combining visual elements from multiple images or fusing textual information with a hateful image. We demonstrate the capabilities of our framework for analyzing the evolution of hateful memes by focusing on antisemitic memes, particularly the Happy Merchant meme. Using our framework on a dataset extracted from 4chan, we find 3.3K variants of the Happy Merchant meme, with some linked to specific countries, persons, or organizations. We envision that our framework can be used to aid human moderators by flagging new variants of hateful memes so that moderators can manually verify them and mitigate the problem of hateful content online.
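The embedding-regularity idea described above can be illustrated with a toy sketch: a meme variant that fuses a hateful template with some other concept tends to lie near the vector sum of the two in embedding space. The three-dimensional vectors below are hypothetical stand-ins for CLIP embeddings, which in practice have hundreds of dimensions:

```python
import math

def add(u, v):
    """Element-wise vector sum."""
    return [a + b for a, b in zip(u, v)]

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy stand-ins for CLIP embeddings.
base_meme = [1.0, 0.0, 0.0]   # embedding of the base hateful template
concept   = [0.0, 1.0, 0.0]   # embedding of a country/person/organization

# Regularity: a fused variant lies near base + concept in embedding space,
# so querying with that sum should rank true variants above unrelated images.
query = add(base_meme, concept)
candidates = {
    "variant":   [0.9, 0.8, 0.1],  # base template fused with the concept
    "unrelated": [0.1, 0.1, 0.9],
}
ranked = sorted(candidates, key=lambda k: cosine(query, candidates[k]), reverse=True)
```

Run over a large image corpus, such a query surfaces candidate variants for human review rather than making automated moderation decisions.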
In this work, the integration of two machine learning approaches, namely domain adaptation and explainable AI, is proposed to address the dual problems of generalizable detection and explainability. First, a Domain Adversarial Neural Network (DANN) is developed as a generalized misinformation detector across multiple social media platforms; the DANN is used to generate classification results for a test domain with related but unseen data. The DANN-based model is a traditional black-box model that cannot justify its results, i.e., the labels for the target domain. Therefore, the explainable AI method Local Interpretable Model-Agnostic Explanations (LIME) is applied to explain the results of the DANN model. To demonstrate the integration of these two approaches for generalized detection with valid explanation, COVID-19 misinformation is considered as a case study. We experimented with two datasets, namely CoAID and MiSoVac, and compared the results with and without the DANN implementation. DANN significantly improves the F1 classification score and increases accuracy and AUC performance. The results obtained show that the proposed framework performs well under domain shift and can learn domain-invariant features while producing explainable target labels using LIME, enabling trustworthy information processing and extraction to effectively combat misinformation.
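The core of a DANN is the gradient reversal layer: identity on the forward pass, gradient negated (and scaled by a factor lambda) on the backward pass, so that minimizing the domain classifier's loss downstream pushes the feature extractor to *confuse* it. A minimal numeric sketch with hypothetical gradient values, not the paper's actual training code:

```python
def grad_reverse(grads, lam=1.0):
    """Backward rule of DANN's gradient reversal layer: the forward pass is
    the identity; the backward pass multiplies the domain-classifier gradient
    by -lam, driving the feature extractor toward domain-invariant features."""
    return [-lam * g for g in grads]

# One illustrative SGD step on a feature-extractor parameter.
theta = 0.5
task_grad = 0.2      # gradient from the label predictor (passed through as-is)
domain_grad = 0.4    # gradient from the domain classifier (gets reversed)
lr, lam = 0.1, 1.0

update = task_grad + grad_reverse([domain_grad], lam)[0]
theta -= lr * update  # reduces task loss while *increasing* domain confusion
```

With `domain_grad` larger than `task_grad`, the combined update is negative, illustrating how the adversarial term can dominate a step when the domains are still easily separable.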
This survey paints a broad panoramic picture of the state of the art (SOTA) in generative methods for the analysis of social media data. It fills a gap, as existing survey articles are either much narrower in scope or are dated. We include two important aspects that are currently gaining importance in mining and modelling social media: dynamics and networks. Social dynamics are important for understanding the spread of influence or disease and the formation of friendships, among other phenomena, while networks can capture various kinds of complex relationships, providing additional insight and identifying important patterns that would otherwise go unnoticed.
As social media platforms evolve from text-based forums into multimodal environments, the nature of misinformation on social media is changing accordingly. Taking advantage of the fact that visual modalities such as images and videos are more appealing and engaging to users, and that textual content is sometimes skimmed carelessly, misinformation spreaders have recently targeted the contextual correlations between modalities, e.g., between text and image. Hence, many research efforts have been directed at automatic techniques for detecting possible cross-modal discordances in web-based media. In this work, we aim to analyze, categorize, and identify existing approaches, along with the challenges and shortcomings they face, in order to unearth new opportunities in the field of multimodal misinformation detection.
Social media has been one of the main information consumption sources for the public, allowing people to seek and spread information more quickly and easily. However, the rise of various social media platforms also enables the proliferation of online misinformation. In particular, misinformation in the health domain has significant impacts on our society such as the COVID-19 infodemic. Compared to misinformation in other domains, the key differences of health misinformation include the potential of causing actual harm to humans' bodies and even lives, the difficulty for ordinary people to identify it, and the deep connection with medical science. Therefore, health misinformation in social media has become an emerging research direction that attracts increasing attention from researchers of different disciplines. In addition, health misinformation on social media has distinct characteristics from conventional channels such as television on multiple dimensions including the generation, dissemination, and consumption paradigms. Because of the uniqueness and importance of combating health misinformation in social media, we conduct this survey to further facilitate interdisciplinary research on this problem. In this survey, we present a comprehensive review of existing research about online health misinformation in different disciplines. Furthermore, we also systematically organize the related literature from three perspectives: characterization, detection, and intervention. Lastly, we conduct a deep discussion on the pressing open issues of combating health misinformation in social media and provide future directions for multidisciplinary researchers.
Hate speech has become a major crisis since the proliferation of social media. Hateful content can spread quickly and create an environment of distress and hostility. Further, what can be considered hateful is contextual and varies over time. While online hate speech reduces the ability of already marginalized groups to participate freely in discussion, offline hate speech leads to hate crimes and violence against individuals and communities. The multifaceted nature of hate speech and its real-world impact have sparked the interest of the data mining and machine learning communities. Despite our best efforts, hate speech remains an evasive problem for researchers and practitioners. This article presents the methodological challenges that hinder building automated hate-mitigation systems. These challenges motivated our work in the broader area of combating hateful content on the web. We discuss a series of proposed solutions to limit the spread of hate speech on social media.
State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
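The zero-shot transfer described above can be sketched as scoring an image embedding against the embeddings of text prompts ("a photo of a {label}") and taking a softmax over the scaled similarities. The toy vectors below stand in for real CLIP encoder outputs:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def softmax(xs):
    """Numerically stable softmax."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def zero_shot_classify(image_emb, class_prompts, temperature=100.0):
    """Score an image against each class's text-prompt embedding and return
    the most probable label plus the full distribution. No task-specific
    training is needed: the class set is defined purely by the prompts."""
    labels = list(class_prompts)
    sims = [cosine(image_emb, class_prompts[l]) for l in labels]
    probs = softmax([temperature * s for s in sims])
    best = labels[max(range(len(labels)), key=lambda i: probs[i])]
    return best, dict(zip(labels, probs))

# Toy embeddings standing in for CLIP's image and text encoder outputs.
image = [0.7, 0.1, 0.2]
prompts = {
    "a photo of a dog": [0.8, 0.1, 0.1],
    "a photo of a cat": [0.1, 0.8, 0.1],
}
best, probs = zero_shot_classify(image, prompts)
```

Swapping in a different prompt dictionary re-targets the classifier to a new label set with no retraining, which is the essence of the zero-shot transfer the abstract reports.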
Sarcasm can be defined as saying or writing the opposite of what one truly wants to express, usually to insult, irritate, or amuse someone. Because of the obscure nature of sarcasm in textual data, detecting it is difficult and of great interest to the sentiment analysis research community. Although research on sarcasm detection spans more than a decade, some significant advancements have been made recently, including the adoption of unsupervised pre-trained transformers in multimodal environments and the integration of context to identify sarcasm. In this study, we aim to provide a brief overview of recent advancements and trends in computational sarcasm research for the English language. We describe relevant datasets, methodologies, trends, issues, challenges, and tasks related to sarcasm that remain unresolved. Our study provides well-summarized tables of sarcasm datasets, sarcastic features and their extraction methods, and performance analyses of various approaches, which can help researchers in related domains understand the current state-of-the-art practices in sarcasm detection.
Fact-checking has become increasingly important due to the speed with which both information and misinformation can spread in the modern media ecosystem. Therefore, researchers have been exploring how fact-checking can be automated, using techniques based on natural language processing, machine learning, knowledge representation, and databases to automatically predict the veracity of claims. In this paper, we survey automated fact-checking stemming from natural language processing and discuss its connections to related tasks and disciplines. In doing so, we present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts. Finally, we highlight challenges for future research.
The spread of information through social media platforms can create environments possibly hostile to vulnerable communities and can silence certain groups in society. To mitigate such instances, several models have been developed to detect hate and offensive speech. Since detecting hate and offensive speech on social media platforms could incorrectly exclude individuals from those platforms and thus reduce trust, there is a need to create explainable and interpretable models. Thus, we built an explainable and interpretable high-performance model based on the XGBoost algorithm, trained on Twitter data. On the unbalanced Twitter data, XGBoost outperformed the LSTM, AutoGluon, and ULMFiT models on hate speech detection with an F1 score of 0.75, compared to 0.38, 0.37, and 0.38, respectively. When we down-sampled the data to approximately 5000 tweets for each of the three separate classes, XGBoost again outperformed LSTM, AutoGluon, and ULMFiT on hate speech detection, with F1 scores of 0.79 versus 0.69, 0.77, and 0.66, respectively. On offensive speech detection in the down-sampled version, XGBoost also performed better than LSTM, AutoGluon, and ULMFiT, with F1 scores of 0.83 versus 0.88, 0.82, and 0.79, respectively. We used Shapley Additive Explanations (SHAP) on the output of our XGBoost model to make it explainable and interpretable, in contrast to black-box models such as LSTM, AutoGluon, and ULMFiT.
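The additive attribution that SHAP approximates can be computed exactly on a tiny model: each feature's Shapley value is its average marginal contribution over all feature orderings. The token features and weights below are hypothetical, not from the abstract's Twitter data:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: each feature's weighted average marginal
    contribution to value_fn over all subsets of the other features.
    value_fn maps a frozenset of present features to a model score."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for subset in combinations(others, r):
                s = frozenset(subset)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Hypothetical additive "hate score" over token features: "slur" dominates.
weights = {"slur": 0.6, "caps": 0.1, "mention": 0.0}
score = lambda present: sum(weights[f] for f in present)

phi = shapley_values(list(weights), score)
```

For an additive model each feature's Shapley value equals its weight, and the values always sum to the difference between the full-model score and the empty-model score (the efficiency property); SHAP's TreeExplainer computes the same quantities efficiently for tree ensembles like XGBoost.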
The rapid increase of fake news, which causes significant damage to society, has triggered many fake-news-related studies, including the development of fake news detection and fact verification techniques. The resources for these studies are mainly public datasets taken from web data. We survey 118 datasets related to fake news research from three perspectives: (1) fake news detection, (2) fact verification, and (3) other tasks, such as the analysis of fake news and satire detection. We also describe in detail the tasks the datasets support and their characteristics. Finally, we highlight the challenges in fake news dataset construction and some research opportunities for addressing those challenges. Our survey facilitates in-depth fake news research by helping researchers find suitable datasets without reinventing the wheel.
Estimating the political leanings of social media users is a challenging and ever more pressing problem, given the increase in social media consumption. We introduce Retweet-BERT, a simple and scalable model to estimate the political leanings of Twitter users. Retweet-BERT leverages the retweet network structure and the language used in users' profile descriptions. Our assumptions stem from patterns of network and linguistic homophily among people who share similar ideologies. Retweet-BERT demonstrates competitive performance against other state-of-the-art baselines, achieving 96%-97% macro-F1 on two recent Twitter datasets (a COVID-19 dataset and a 2020 United States presidential election dataset). We also perform manual validation to confirm the performance of Retweet-BERT on users not in the training data. Finally, in a case study of COVID-19, we illustrate the presence of political echo chambers on Twitter and show that they exist mainly among right-leaning users. Our code is open-sourced and our data is publicly available.
We study fact-checking, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of retrieving relevant documents (and sentences) from Wikipedia and verifying whether the information in the documents supports or refutes a given claim. This task is essential and can serve as a building block for applications such as fake news detection and medical claim verification. In this paper, we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing the performance results on the FEVER dataset, which is the most well-studied and formally structured dataset for the fact extraction and verification task. We also conduct the largest experimental study to date on identifying beneficial loss functions for the sentence retrieval component. Our analysis indicates that sampling negative sentences is important for improving performance and decreasing computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
Multimodal deep learning systems that employ multiple modalities such as text, image, audio, and video have shown better performance than their unimodal (i.e., single-modality) counterparts. Multimodal machine learning involves multiple aspects: representation, translation, alignment, fusion, and co-learning. In the current state of multimodal machine learning, the assumption is that all modalities are present, aligned, and noiseless during training and testing time. However, in real-world tasks it is commonly observed that one or more modalities are missing, noisy, lacking annotated data, unreliable in their labels, or scarce during training or testing, or both. This challenge is addressed by a learning paradigm called multimodal co-learning: the modeling of a resource-poor modality is aided by exploiting knowledge from a resource-rich modality, through the transfer of knowledge between modalities, including their representations and predictive models. Co-learning is an emerging area with no dedicated review explicitly focusing on all the challenges it addresses. To that end, in this work we provide a comprehensive survey of the emerging field of multimodal co-learning, which has not yet been explored in its entirety. We review implementations that overcome one or more co-learning challenges without explicitly framing them as such. We present a comprehensive taxonomy of multimodal co-learning based on the challenges addressed and the associated implementations. The state-of-the-art techniques are reviewed along with applications and datasets. Our final goal is to discuss challenges and perspectives, along with important ideas and directions for future work that we hope will benefit the entire research community focusing on this exciting area.
Building a benchmark dataset for hate speech detection presents various challenges. First, because hate speech is relatively rare, random sampling of tweets to annotate is very inefficient at finding hate speech. To address this, prior datasets often include only tweets matching known "hate words". However, restricting data to a pre-defined vocabulary may exclude portions of the real-world phenomenon we seek to model. A second challenge is that definitions of hate speech tend to be highly varying and subjective. Annotators having diverse prior notions of hate speech may not only disagree with one another but also struggle to conform to specified labeling guidelines. Our key insight is that the rarity and subjectivity of hate speech are akin to that of relevance in information retrieval (IR). This connection suggests that well-established methodologies for creating IR test collections can also be usefully applied to create better benchmark datasets for hate speech. To intelligently and efficiently select which tweets to annotate, we apply the standard IR techniques of pooling and active learning. To improve both the consistency and the value of annotations, we apply task decomposition and annotator rationale techniques. We share a new benchmark dataset for hate speech detection on Twitter that provides broader coverage of hate than prior datasets. We also show a dramatic drop in the accuracy of existing detection models when tested on these broader forms of hate. Annotator rationales not only justify labeling decisions but also enable future work on dual supervision and/or explanation generation in modeling. Further details of our approach can be found in the supplementary material.
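The pooling technique borrowed from IR test-collection construction can be sketched in a few lines: take the union of the top-ranked items from each system's ranking and annotate only that pool. The system rankings and tweet IDs below are hypothetical:

```python
def pool(rankings, depth):
    """Standard IR pooling: union of the top-`depth` items from each system's
    ranking. Only pooled items are sent to human annotators, which is far more
    efficient than random sampling when positives (here, hateful tweets) are rare."""
    pooled = set()
    for ranking in rankings:
        pooled.update(ranking[:depth])
    return pooled

# Three hypothetical systems ranking tweet IDs by predicted hatefulness.
systems = [
    ["t1", "t4", "t2", "t9"],
    ["t4", "t3", "t1", "t7"],
    ["t5", "t1", "t4", "t8"],
]
to_annotate = pool(systems, depth=2)
```

Because diverse systems contribute to the pool, its coverage is broader than any single keyword filter, while the annotation budget stays bounded by `len(systems) * depth`; active learning then extends the pool with items the current model is most uncertain about.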
Abusive language is a concerning problem in online social media. Past research on detecting abusive language covers different platforms, languages, demographics, etc. However, models trained using these datasets do not perform well in cross-domain evaluation settings. To overcome this, a common strategy is to use a few samples from the target domain to train models to get better performance in that domain (cross-domain few-shot training). However, this might cause the models to overfit the artefacts of those samples. A compelling solution could be to guide the models toward rationales, i.e., spans of text that justify the text's label. This method has been found to improve model performance in the in-domain setting across various NLP tasks. In this paper, we propose RAFT (Rationale Adaptor for Few-shoT classification) for abusive language detection. We first build a multitask learning setup to jointly learn rationales, targets, and labels, and find a significant improvement of 6% macro F1 on the rationale detection task over training solely rationale classifiers. We introduce two rationale-integrated BERT-based architectures (the RAFT models) and evaluate our systems over five different abusive language datasets, finding that in the few-shot classification setting, RAFT-based models outperform baseline models by about 7% in macro F1 scores and perform competitively to models finetuned on other source domains. Furthermore, RAFT-based models outperform LIME/SHAP-based approaches in terms of plausibility and are close in performance in terms of faithfulness.