Despite government information campaigns and the efforts of the WHO, COVID-19 vaccine hesitancy is widespread. One of the reasons behind this is the spread of vaccine misinformation on social media. In particular, recent surveys have determined that vaccine misinformation is negatively affecting trust in COVID-19 vaccination. Meanwhile, fact-checkers are struggling to detect and track vaccine misinformation because of the scale of social media. To help fact-checkers monitor vaccine narratives online, this paper studies a new vaccine narrative classification task, which categorizes COVID-19 vaccine claims into one of seven categories. Following a data augmentation approach, we first construct a novel dataset for this new classification task, focusing on the minority classes. We also leverage fact-checker annotated data. The paper further presents a neural vaccine narrative classifier that achieves 84% accuracy under cross-validation. The classifier is publicly available to researchers and journalists.
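A seven-way narrative classifier of the kind described above can be sketched with a simple TF-IDF pipeline. This is a hedged illustration, not the paper's neural model: the category names and training sentences below are invented placeholders.

```python
# Sketch of a 7-class vaccine-narrative classifier: TF-IDF features
# feeding a linear model. Category names and examples are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

CATEGORIES = ["development", "safety", "efficacy", "mandates",
              "conspiracy", "alternatives", "other"]

# Tiny invented corpus; the paper instead uses fact-checker annotated
# data plus augmented examples for the minority classes.
train_texts = [
    "the new vaccine trial finished phase three",
    "I worry about long term side effects",
    "the shot does not stop transmission at all",
    "nobody should be forced to take it by law",
    "the vaccine contains a tracking microchip",
    "natural immunity is better than any jab",
    "got my appointment booked for tomorrow",
]
train_labels = CATEGORIES  # one example per class, purely for illustration

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
pred = clf.predict(["forced vaccination laws are wrong"])[0]
```

A real system would replace the toy corpus with the annotated dataset and report accuracy under k-fold cross-validation.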
Since early 2020, COVID-19 has had a significant impact worldwide. It has caused much confusion in society, especially due to misinformation spread through social media. Although there have been several studies related to detecting misinformation in social media data, most have focused on English datasets. Research on COVID-19 misinformation detection in Indonesian is still scarce. Therefore, through this study, we collect and annotate an Indonesian-language dataset and build a prediction model for detecting COVID-19 misinformation that takes each tweet's relevance into account. The dataset was constructed by a team of annotators who labeled the relevance and misinformation of the tweet data. In this study, we propose a two-stage classifier model using a pre-trained Indonesian language model for the tweet misinformation detection task. We also experiment with several other baseline models for text classification. The experimental results show that the combination of a BERT sequence classifier for relevance prediction and a Bi-LSTM for misinformation detection outperforms the other machine learning models, with an accuracy of 87.02%. Overall, the use of BERT contributes to the higher performance of most of the prediction models. We release a high-quality COVID-19 misinformation tweet corpus with high inter-annotator agreement.
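The two-stage scheme above can be sketched as follows. This is a hedged stand-in: the paper builds both stages on a pre-trained Indonesian language model (BERT and Bi-LSTM), whereas simple TF-IDF classifiers are used here, and all example tweets are invented.

```python
# Stage 1 filters tweets for relevance; only relevant tweets reach the
# stage-2 misinformation classifier. Training examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

relevance_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
relevance_clf.fit(
    ["covid vaccine rollout started", "new covid variant detected",
     "great football match last night", "my cat is adorable"],
    ["relevant", "relevant", "irrelevant", "irrelevant"])

misinfo_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
misinfo_clf.fit(
    ["vaccines contain microchips", "drinking bleach cures covid",
     "masks reduce transmission", "vaccines passed clinical trials"],
    ["misinformation", "misinformation", "valid", "valid"])

def classify(tweet):
    """Two-stage prediction: relevance first, then misinformation."""
    if relevance_clf.predict([tweet])[0] == "irrelevant":
        return "irrelevant"
    return misinfo_clf.predict([tweet])[0]

label = classify("covid vaccine contains microchips")
```

The design choice mirrors the paper: filtering irrelevant tweets first keeps the misinformation classifier focused on in-domain content.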
Public discourse about COVID-19 vaccination on social media is crucial not only for addressing the current COVID-19 pandemic but also for future pathogen outbreaks. We examined a Twitter dataset containing 75 million English tweets discussing COVID-19 vaccination from March 2020 to March 2021. We trained a stance detection algorithm using natural language processing (NLP) techniques to classify tweets as 'anti-vax' or 'pro-vax', and examined the main topics of the discourse using topic modelling techniques. While pro-vax tweets (37 million) far outnumbered anti-vax tweets (10 million), the majority of tweets from both stances (63% of anti-vax and 53% of pro-vax tweets) came from dual-stance users who posted both pro- and anti-vax tweets during the observation period. Pro-vax tweets focused mostly on vaccine development, while anti-vax tweets covered a wide range of topics, some of which included genuine concerns, despite a large amount of falsehood. A number of topics were common to both stances, though discussed from opposite viewpoints. Memes and jokes were among the most retweeted messages. While concerns about the polarisation and online prevalence of anti-vax discourse appear unfounded, targeted rebuttal of falsehoods remains important.
Convincing people to get vaccinated against COVID-19 is a key societal challenge at present. As a first step towards this goal, many prior works have relied on social media analysis to understand the specific concerns that people have towards these vaccines, such as potential side-effects, ineffectiveness, political factors, and so on. Though there are datasets that broadly classify social media posts into anti-vax and pro-vax labels, there is no dataset (to our knowledge) that labels social media posts according to the specific anti-vaccine concerns mentioned in the posts. In this paper, we have curated CAVES, the first large-scale dataset containing about 10k COVID-19 anti-vaccine tweets labelled into various specific anti-vaccine concerns in a multi-label setting. This is also the first multi-label classification dataset that provides explanations for each of the labels. Additionally, the dataset also provides class-wise summaries of all the tweets. We also perform preliminary experiments on the dataset and show that this is a very challenging dataset for multi-label explainable classification and tweet summarization, as is evident from the moderate scores achieved by some state-of-the-art models. Our dataset and codes are available at: https://github.com/sohampoddar26/caves-data
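Label preparation for such a multi-label setting is commonly done with an indicator matrix, where each tweet can carry several concern labels at once. A minimal sketch, with invented concern names rather than the actual CAVES label set:

```python
# Multi-label targets: one row per tweet, one column per distinct concern.
# Concern names below are illustrative, not the CAVES schema.
from sklearn.preprocessing import MultiLabelBinarizer

tweet_labels = [
    {"side-effect"},
    {"ineffective", "political"},
    {"side-effect", "conspiracy"},
]
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tweet_labels)  # shape: (n_tweets, n_distinct_labels)
```

The resulting binary matrix `Y` is what standard multi-label classifiers (one-vs-rest linear models, or a transformer with a sigmoid output layer) are trained against.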
Social media platforms host discussions about a wide variety of topics that arise every day. Making sense of all the content and organising it into categories is an arduous task. A common way to deal with this issue is to rely on topic modelling, but topics discovered with this technique are difficult to interpret and can differ from corpus to corpus. In this paper, we present a new task of tweet topic classification and release two associated datasets. Given a wide range of topics covering the most important discussion points on social media, we provide training and testing data from recent time periods that can be used to evaluate tweet classification models. Moreover, we perform a quantitative evaluation and analysis of current general- and domain-specific language models on the task, which provides more insight into the task's challenges and nature.
Going beyond the binary classification of racist text, our study takes cues from social science theories to develop a multi-dimensional model for racism detection, namely stigmatization, offensiveness, blame, and exclusion. With the aid of BERT and topic modelling, this categorical detection provides insight into the underlying subtleties of racist discussion on digital platforms during COVID-19. Our study contributes to enriching the scholarly discussion of racist behaviour on social media. First, a stage-wise analysis is applied to capture the dynamics of topic changes across the early stages of COVID-19, which transformed from a domestic epidemic into an international public health emergency and later into a global pandemic. Furthermore, mapping this trend enables a more accurate prediction of the development of public opinion about racism in the offline world, and, at the same time, the design of targeted intervention strategies to combat the rise of racism during global public health crises like COVID-19. In addition, this interdisciplinary research points to directions for future studies on social network analysis and mining. Integrating social science perspectives into the development of computational methods provides insights for more accurate data detection and analysis.
Since the beginning of the COVID-19 pandemic, vaccines have been an important topic in public discourse. The discussions around vaccines are polarized, as some see them as an important measure to end the pandemic, while others are hesitant or find them harmful. This study investigates posts related to COVID-19 vaccines on Twitter and focuses on those with a negative stance towards vaccines. A dataset of 16,713,238 English tweets related to COVID-19 vaccines was collected, covering the period from March 1, 2020, to July 31, 2021. We used the scikit-learn Python library to apply a support vector machine (SVM) classifier to identify tweets with a negative stance towards COVID-19 vaccines. A total of 5,163 tweets were used to train the classifier, of which 2,484 tweets were manually annotated by us and made publicly available. We used the BERTopic model to extract and investigate the topics discussed in the negative tweets and how they changed over time. We show that the negativity towards COVID-19 vaccines decreased over time along with the vaccine roll-out. We identified 37 topics of discussion and present their respective importance over time. We show that the popular topics included conspiracy discussions, such as 5G towers and microchips, but also genuine concerns about vaccination safety and side effects as well as about policies. Our study shows that even unpopular opinions or conspiracy theories can become widespread when paired with a widely popular topic of discussion such as COVID-19 vaccines. Understanding the concerns and the topics of discussion, and how they change over time, is essential for policymakers and public health authorities to provide better and timelier information and policies to facilitate vaccination of the population in similar future crises.
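The abstract names scikit-learn's SVM as the stance filter; a minimal sketch of that step follows. The training tweets and the use of `LinearSVC` over TF-IDF features are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a negative-stance filter: scikit-learn SVM over TF-IDF
# features. All training examples below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

svm = make_pipeline(TfidfVectorizer(), LinearSVC())
svm.fit(
    ["the vaccine is poison do not take it",
     "never trusting these experimental shots",
     "just got my second dose feeling fine",
     "vaccination centres open all weekend"],
    ["negative", "other", "other", "other"][0:1] +
    ["negative", "other", "other"])  # labels: negative, negative, other, other

stance = svm.predict(["these shots are poison"])[0]
```

In the study, the tweets this filter marks as negative are then passed to BERTopic for topic extraction.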
Vaccine hesitancy, recently driven by online narratives, significantly degrades the efficacy of vaccination strategies, such as those for COVID-19. Despite broad agreement in the medical community about the safety and efficacy of the available vaccines, many social media users continue to be inundated with false information about vaccines and are indecisive or unwilling to be vaccinated. The goal of this study is to better understand anti-vaccine sentiment by developing a system capable of automatically identifying the users responsible for spreading anti-vaccine narratives. We introduce a publicly available Python package capable of analysing Twitter profiles to assess the likelihood that a profile will share anti-vaccine sentiment in the future. The package is built using text embedding methods, neural networks, and automated dataset generation, and is trained on millions of tweets. We find that the model can accurately detect anti-vaccine users even before they tweet anti-vaccine hashtags or keywords. We also show examples of how text analysis helps us understand anti-vaccine discussions by detecting moral and emotional differences between anti-vaccine spreaders and regular users on Twitter. Our results will help researchers and policymakers understand how users become anti-vaccine and what they discuss on Twitter. Policymakers can utilize this information for better targeted campaigns that debunk harmful anti-vaccination myths.
Following the outbreak of a global pandemic, online content is filled with hate speech. Donald Trump's ''Chinese Virus'' tweet shifted the blame for the spread of the COVID-19 virus to China and the Chinese people, which triggered a new round of anti-China hate both online and offline. This research examines China-related hate speech on Twitter during the two years following the outbreak of the pandemic (2020 and 2021). Through Twitter's API, a total of 2,172,333 tweets hashtagged #china posted during that time were collected. By employing multiple state-of-the-art pretrained language models for hate speech detection, we identify a wide range of hate of various types, resulting in an automatically labeled anti-China hate speech dataset. We identify a hate rate in #china tweets of 2.5% in 2020 and 1.9% in 2021, well above the average rate of online hate speech on Twitter of 0.6% identified in Gao et al., 2017. We further analyze the longitudinal development of #china tweets and those identified as hateful in 2020 and 2021 by visualizing the daily number and hate rate over the two years. Our keyword analysis of hate speech in #china tweets reveals the most frequently mentioned terms in the hateful #china tweets, which can be used for further social science studies.
COVID-19 has spread across the globe, and several vaccines have been developed to counter its surge. To identify the correct sentiments associated with the vaccines in social media posts, we fine-tune various state-of-the-art pre-trained transformer models on tweets related to COVID-19 vaccines. Specifically, we use the recently introduced state-of-the-art pre-trained transformer models RoBERTa, XLNet, and BERT, as well as the domain-specific transformer models CT-BERT and BERTweet, which are pre-trained on COVID-19 tweets. We further explore the option of text augmentation by oversampling with a Language Model-based Oversampling Technique (LMOTE) to improve the accuracy of these models, specifically for small-sample datasets with an imbalanced class distribution among the positive, negative, and neutral sentiment classes. Our results summarize our findings on the suitability of text oversampling for imbalanced, small-sample datasets used to fine-tune state-of-the-art pre-trained transformer models, and on the utility of domain-specific transformer models for the classification task.
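To show where oversampling fits in such a pipeline, here is a hedged baseline sketch. LMOTE itself generates new synthetic sentences with a language model; plain random duplication is used below only to illustrate the class-balancing step, with invented example data.

```python
# Baseline oversampling: duplicate minority-class texts at random until
# every sentiment class matches the majority class count. (LMOTE would
# instead synthesise new sentences with a language model.)
import random
from collections import Counter

def random_oversample(texts, labels, seed=0):
    """Return texts/labels with minority classes duplicated to parity."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_texts, out_labels = list(texts), list(labels)
    for cls, n in counts.items():
        pool = [t for t, lab in zip(texts, labels) if lab == cls]
        for _ in range(target - n):
            out_texts.append(rng.choice(pool))
            out_labels.append(cls)
    return out_texts, out_labels

texts = ["love it", "great shot", "awful jab", "no opinion"]
labels = ["positive", "positive", "negative", "neutral"]
balanced_texts, balanced_labels = random_oversample(texts, labels)
```

After balancing, the augmented set would be fed to the transformer fine-tuning step; comparing this naive baseline against LMOTE is one way to measure the value of language-model-based synthesis.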
The rapid increase of fake news, which causes significant damage to society, has triggered much fake-news-related research, including the development of fake news detection and fact verification techniques. The resources for these studies are mainly public datasets obtained from Web data. We survey 118 datasets related to fake news research from three perspectives: (1) fake news detection, (2) fact verification, and (3) other tasks, for example, analysis of fake news and satire detection. We also describe in detail the tasks they are used for and their characteristics. Finally, we highlight the challenges in fake news dataset construction and some research opportunities for addressing these challenges. Our survey facilitates fake news research by helping researchers find suitable datasets without reinventing the wheel, thereby contributing to in-depth fake news studies.
To address vaccine hesitancy, which impairs the efforts of COVID-19 vaccination campaigns, it is imperative to understand public vaccination attitudes and grasp their changes in a timely manner. Despite their reliability and trustworthiness, traditional attitude surveys are time-consuming and expensive, and cannot follow the fast evolution of vaccination attitudes. We leverage textual posts on social media to extract and track users' vaccination stances in near real time by proposing a deep learning framework. To counter the impact of linguistic features such as sarcasm and irony, which are commonly used in vaccine-related discourse, we integrate the recent posts of a user's social network neighbours into the framework to help detect the user's genuine attitude. Based on our annotated dataset from Twitter, models instantiated from our framework improve attitude extraction performance by up to 23% compared with state-of-the-art text-only models. Using this framework, we successfully validate the feasibility of using social media to track the evolution of real-life vaccination attitudes. We further show one practical use of our framework: it can forecast a user's likelihood of changing their vaccine hesitancy based on information perceived from social media.
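The neighbour-context idea can be sketched as a simple input-building step: before classification, a user's own posts are concatenated with recent posts from their network neighbours, so that sarcastic posts can be read against their social context. The `[SEP]`/`[CTX]` markers and the function name below are illustrative assumptions, not the paper's actual input format.

```python
# Hypothetical sketch: build a classifier input that joins a user's own
# posts with up to `max_context` recent neighbour posts.
def build_stance_input(user_posts, neighbour_posts, max_context=3):
    """Concatenate user posts and neighbour context with marker tokens."""
    own = " [SEP] ".join(user_posts)
    context = " [SEP] ".join(neighbour_posts[:max_context])
    return own + " [CTX] " + context if context else own

text = build_stance_input(
    ["sure, totally trust this 'safe' vaccine..."],
    ["got vaccinated today!", "side effects were mild"])
```

A downstream model receiving `text` sees both the (possibly ironic) user post and the neighbours' stances, which is the signal the framework exploits.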
With the COVID-19 pandemic, hatred against Asians, and especially against Chinese people, has been intensifying. There is an urgent need to effectively detect and prevent hate speech towards Asians. In this work, we first create COVID-HATE-2022, an annotated dataset that includes 2,025 tweets extracted in early February 2022 and labeled according to specific criteria, presenting a comprehensive collection of hateful and non-hateful tweets. Second, we fine-tune BERT models on the relevant datasets and demonstrate several strategies related to tweet "cleaning". Third, we investigate the performance of advanced fine-tuning strategies under various model-centric and data-centric approaches, and we show that both kinds of strategies generally improve performance, while the data-centric ones outperform the others, demonstrating the feasibility and effectiveness of data-centric approaches in the related tasks.
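Tweet "cleaning" of the kind mentioned above usually means stripping URLs, user mentions, and hashtag symbols before tokenization. A minimal sketch of such a cleaning step, assuming these particular rules (the paper's exact cleaning strategies may differ):

```python
# Illustrative tweet-cleaning step: strip URLs and @mentions, keep the
# word of a hashtag but drop the '#', and normalise whitespace.
import re

def clean_tweet(text):
    text = re.sub(r"https?://\S+", "", text)  # strip URLs
    text = re.sub(r"@\w+", "", text)          # strip user mentions
    text = re.sub(r"#", "", text)             # keep hashtag words, drop '#'
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

cleaned = clean_tweet("RT @user check https://t.co/x #covid hate")
```

Comparing a model fine-tuned on raw tweets against one fine-tuned on cleaned tweets is exactly the kind of data-centric experiment the abstract describes.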
COVID-19 has imposed disproportionate mental health consequences on the public at different stages of the pandemic. We use a computational approach to capture the specific aspects that trigger an online community's anxiety about the pandemic and study how these aspects change over time. First, we identified nine sources of anxiety (SOAs) in a sample of Reddit posts (n=86) from r/COVID19_support using thematic analysis. We then automatically labeled the SOAs in a larger chronological sample (n=6,535) by training an algorithm on a manually annotated sample (n=793). The nine SOAs align with items in recently developed pandemic anxiety measurement scales. We observed that Reddit users' concerns about health risks remained high in the first eight months of the pandemic. These concerns diminished dramatically even though case surges occurred later. In general, as the pandemic progressed, users' language disclosed less intense SOAs. However, concerns about mental health and the future grew steadily throughout the period covered by this study. People also tended to use more intense language to describe mental health concerns than health risks or concerns about death. Our results suggest that even though COVID-19 gradually weakened as a health threat thanks to appropriate countermeasures, the mental health conditions of this online group did not necessarily improve. Our system lays the groundwork for population health and epidemiology scholars to examine the aspects that trigger pandemic anxiety in a timely manner.
Despite the astonishing success of COVID-19 vaccines against the virus, a substantial proportion of the population is still hesitant to be vaccinated, undermining governmental efforts to control the virus. To address this problem, we need to understand the different factors giving rise to such behaviour, including social media discourse, news media propaganda, government responses, demographic and socioeconomic statuses, and COVID-19 statistics, among others. The need to cover all of these aspects makes it difficult to form a complete picture when inferring the problem of vaccine hesitancy. In this paper, we construct CovaxNet, a multi-source, multi-modal, and multi-feature online data repository. We provide descriptive analyses and insights to illustrate critical patterns in CovaxNet. Moreover, we propose a novel approach for connecting online and offline data so as to facilitate inference tasks that exploit complementary information sources.
Anti-vaccine posts, including misleading ones, are shared on social media and have been shown to create confusion about vaccines and reduce public confidence, leading to vaccine hesitancy. Recent years have witnessed the fast rise of such anti-vaccine posts, in various linguistic and visual forms, across online networks, posing a great challenge for effective content moderation and tracking. Extending previous work on leveraging textual information to understand vaccine information, this paper introduces Insta-VAX, a new multi-modal dataset consisting of a sample of 64,957 Instagram posts related to human vaccines. We applied a crowdsourced annotation procedure, verified by two trained expert judges, to this dataset. We then benchmarked several state-of-the-art NLP and computer vision classifiers for detecting whether the posts exhibit anti-vaccine attitudes and whether they contain misinformation. Extensive experiments and analyses demonstrate that multi-modal models can classify the posts more accurately than uni-modal models, but still need improvement, especially in visual sentiment understanding and external knowledge cooperation. The dataset and classifiers contribute to social scientific and public health efforts to monitor and track vaccine discussions in combating the problem of vaccine misinformation.
Migraine is a high-prevalence and disabling neurological disorder. However, information about migraine management in real-world settings may be limited in traditional health information sources. In this paper, we (i) verify that there is substantial migraine-related chatter available on social media (Twitter and Reddit), self-reported by migraine sufferers; (ii) develop a platform-independent text classification system for automatically detecting self-reported migraine-related posts, and (iii) conduct analyses of the self-reported posts to assess the utility of social media for studying this problem. We manually annotated 5750 Twitter posts and 302 Reddit posts. Our system achieved an F1 score of 0.90 on Twitter and 0.93 on Reddit. Analysis of information posted by our 'migraine cohort' revealed the presence of a plethora of relevant information about migraine therapies and patient sentiments associated with them. Our study forms the foundation for conducting an in-depth analysis of migraine-related information using social media data.
Current research on users' perspectives of cyber security and privacy related to traditional and smart devices at home is very active, but the focus is often more on specific modern devices such as mobile and smart IoT devices in a home context. In addition, most studies were based on smaller-scale empirical work such as online surveys and interviews. We endeavour to fill these research gaps by conducting a larger-scale study based on a real-world dataset of 413,985 tweets posted by non-expert users on Twitter in six months of three consecutive years (January and February in 2019, 2020 and 2021). Two machine learning-based classifiers were developed to identify the 413,985 tweets. We analysed this dataset to understand non-expert users' cyber security and privacy perspectives, including the yearly trend and the impact of the COVID-19 pandemic. We applied topic modelling, sentiment analysis and qualitative analysis of selected tweets in the dataset, leading to various interesting findings. For instance, we observed a 54% increase in non-expert users' tweets on cyber security and/or privacy related topics in 2021, compared to before the start of global COVID-19 lockdowns (January 2019 to February 2020). We also observed an increased level of help-seeking tweets during the COVID-19 pandemic. Our analysis revealed a diverse range of topics discussed by non-expert users across the three years, including VPNs, Wi-Fi, smartphones, laptops, smart home devices, financial security, and security and privacy issues involving different stakeholders. Overall negative sentiment was observed across almost all topics non-expert users discussed on Twitter in all the three years. Our results confirm the multi-faceted nature of non-expert users' perspectives on cyber security and privacy and call for more holistic, comprehensive and nuanced research on different facets of such perspectives.
In this paper, we present a study of regret and its expression on social media platforms. Specifically, we present a novel dataset of Reddit texts that have been classified into three classes: Regret by Action, Regret by Inaction, and No Regret. We then use this dataset to investigate the language used to express regret on Reddit and to identify the domains of text that are most commonly associated with regret. Our findings show that Reddit users are most likely to express regret for past actions, particularly in the domain of relationships. We also found that deep learning models using GloVe embedding outperformed other models in all experiments, indicating the effectiveness of GloVe for representing the meaning and context of words in the domain of regret. Overall, our study provides valuable insights into the nature and prevalence of regret on social media, as well as the potential of deep learning and word embeddings for analyzing and understanding emotional language in online text. These findings have implications for the development of natural language processing algorithms and the design of social media platforms that support emotional expression and communication.
The widespread presence of offensive content online, such as hate speech and cyber-bullying, is a global phenomenon. This has sparked interest in the artificial intelligence (AI) and natural language processing (NLP) communities, motivating the development of various systems trained to detect potentially harmful content automatically. These systems require annotated datasets to train the machine learning (ML) models. However, with a few notable exceptions, most datasets on this topic have dealt with English and a few other high-resource languages. As a result, research in offensive language identification has been limited to these languages. This paper addresses this gap by tackling offensive language identification in Sinhala, a low-resource Indo-Aryan language spoken by over 17 million people in Sri Lanka. We introduce the Sinhala Offensive Language Dataset (SOLD) and present multiple experiments on this dataset. SOLD is a manually annotated dataset containing 10,000 posts from Twitter annotated as offensive and not offensive at both sentence-level and token-level, improving the explainability of the ML models. SOLD is the first large publicly available offensive language dataset compiled for Sinhala. We also introduce SemiSOLD, a larger dataset containing more than 145,000 Sinhala tweets, annotated following a semi-supervised approach.
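A SOLD-style doubly annotated example, with a sentence-level label backed by token-level rationales, can be sketched as a small data structure. The field names and the consistency rule below are illustrative assumptions, not the released SOLD schema.

```python
# Sketch of a post annotated at both sentence level and token level.
from dataclasses import dataclass
from typing import List

@dataclass
class AnnotatedPost:
    tokens: List[str]
    token_labels: List[int]   # 1 = part of an offensive span, 0 = not
    sentence_label: str       # "OFF" or "NOT"

    def consistent(self) -> bool:
        """A sentence-level OFF label should be backed by at least one
        offensive token, and the label list must align with the tokens."""
        if len(self.tokens) != len(self.token_labels):
            return False
        return (self.sentence_label == "OFF") == any(self.token_labels)

post = AnnotatedPost(["you", "absolute", "fool"], [0, 1, 1], "OFF")
```

Token-level labels of this kind are what lets a classifier surface *which* words made a post offensive, which is the explainability benefit the abstract highlights.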