Small to medium-scale data science experiments often rely on research software developed ad hoc by individual scientists or small teams. Often there is no time to make the research software fast, reusable, and open access. The consequence is twofold. First, subsequent researchers must spend significant work hours building upon the proposed hypotheses or experimental framework. In the worst case, others cannot reproduce the experiment and reuse the findings for subsequent research. Second, if the ad-hoc research software fails during long-running, computationally expensive experiments, the overall effort to iteratively improve the software and rerun the experiments creates significant time pressure on the researchers. We suggest making caching an integral part of the research software development process, even before the first line of code is written. This article outlines caching recommendations for developing research software in data science projects. Our recommendations provide a perspective for circumventing common problems such as proprietary dependencies and slow execution. At the same time, caching contributes to the reproducibility of experiments in the open science workflow. Concerning the four guiding principles, i.e., Findability, Accessibility, Interoperability, and Reusability (FAIR), we foresee that including the proposed recommendations in research software development will make the data related to that software FAIRer for both machines and humans. We demonstrate the usefulness of some of the proposed recommendations using our recently completed research software project in mathematical information retrieval.
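As a minimal illustration of the core idea behind these recommendations, a disk cache lets a crashed or interrupted experiment resume without recomputing finished steps. The sketch below is our own, hypothetical example (function and directory names are not from the article), assuming deterministic, picklable computation steps:

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("cache")

def cached(func):
    """Memoize a deterministic, expensive function to disk so a crashed
    experiment can resume without recomputing finished steps."""
    def wrapper(*args, **kwargs):
        CACHE_DIR.mkdir(exist_ok=True)
        # Key the cache entry on the function name and its arguments.
        key = hashlib.sha256(
            pickle.dumps((func.__name__, args, sorted(kwargs.items())))
        ).hexdigest()
        path = CACHE_DIR / f"{key}.pkl"
        if path.exists():  # cache hit: skip the expensive computation
            return pickle.loads(path.read_bytes())
        result = func(*args, **kwargs)
        path.write_bytes(pickle.dumps(result))  # cache miss: store result
        return result
    return wrapper

@cached
def expensive_step(n):
    # Stand-in for a long-running experiment step.
    return sum(i * i for i in range(n))
```

Making caching part of the design from the start, rather than retrofitting it, is exactly what the article argues for: the second invocation of `expensive_step` with the same arguments is served from disk.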
Media has a substantial impact on the public perception of events. A one-sided or polarizing perspective on any topic is usually described as media bias. One way to introduce bias in news articles is by altering the word choice. Biased word choices are often subtle and highly context-dependent; hence, detecting bias is difficult. We propose a Transformer-based deep learning architecture trained via Multi-Task Learning on six bias-related data sets to tackle the media bias detection problem. Our best-performing implementation achieves a macro $F_{1}$ of 0.776, a performance boost of 3% compared to our baseline, outperforming existing methods. Our results indicate Multi-Task Learning as a promising alternative for improving existing baseline models in identifying slanted reporting.
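For reference, the macro $F_{1}$ reported here is the unweighted mean of per-class $F_{1}$ scores, so minority classes count as much as majority ones. A minimal sketch of the metric (our own illustration, not the authors' evaluation code):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = set(y_true) | set(y_pred)
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)
```

Because every class contributes equally, a model that ignores rare bias classes is penalized more than under accuracy or micro-averaged $F_{1}$.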
Despite the recent success of multi-task learning and pre-finetuning for natural language understanding, few works have studied the effects of task families on abstractive text summarization. Task families are a form of task grouping during the pre-finetuning stage to learn common skills, such as reading comprehension. To close this gap, we analyze the influence of multi-task learning strategies using task families for the English abstractive text summarization task. We group tasks according to one of three strategies, i.e., sequential, simultaneous, and continual multi-task learning, and evaluate the trained models on two downstream tasks. We find that certain combinations of task families (e.g., advanced reading comprehension and natural language inference) positively impact downstream performance. Further, we find that the choice and combination of task families influence downstream performance more than the training scheme, supporting the use of task families for abstractive text summarization.
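To make the three training schemes concrete, the toy sketch below shows one possible reading of how tasks from each family could be ordered during pre-finetuning. This is our own simplification (the paper trains transformer models, not scheduling loops), and the replay-style interpretation of "continual" is an assumption:

```python
from itertools import zip_longest

def sequential(families):
    """Train on each task family to completion, one family after another."""
    return [task for family in families for task in family]

def simultaneous(families):
    """Interleave tasks from all families round-robin within each step."""
    rounds = zip_longest(*families)
    return [task for rnd in rounds for task in rnd if task is not None]

def continual(families):
    """Sequential, but keep revisiting earlier families while adding later
    ones (a simple way to reduce catastrophic forgetting)."""
    order, seen = [], []
    for family in families:
        seen.extend(family)
        order.extend(seen)  # revisit everything seen so far
    return order
```

The downstream finding quoted above suggests that *which* families appear in `families` matters more than which of these three schedulers is used.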
The recent success of large language models for text generation poses a severe threat to academic integrity, as plagiarists can generate realistic paraphrases indistinguishable from original work. However, the role of large autoregressive transformers in generating machine-paraphrased plagiarism, and their detection, remains underexplored in the literature. This work explores T5 and GPT-3 for machine-paraphrase generation on scientific articles from arXiv, student theses, and Wikipedia. We evaluate the detection performance of six automated solutions and one commercial plagiarism detection software, and perform a human study with 105 participants regarding their detection performance and the quality of the generated examples. Our results suggest that large models can rewrite text such that humans have difficulty identifying it as machine-paraphrased (53% mean accuracy). Human experts rate the quality of paraphrases generated by GPT-3 as high as that of original texts (clarity 4.0/5, fluency 4.2/5, coherence 3.8/5). The best-performing detection model (GPT-3) achieves a 66% F1-score in detecting paraphrases.
Media coverage has a substantial effect on the public perception of events. Nevertheless, media outlets are often biased. One way to bias news articles is by altering the word choice. The automatic identification of bias by word choice is challenging, primarily due to the lack of gold-standard datasets and high context dependency. This paper presents BABE, a robust and diverse dataset for media bias research created by trained experts. We also analyze why expert labeling is essential in this domain. Compared to existing work, our dataset offers better annotation quality and higher inter-annotator agreement. It consists of 3,700 sentences, balanced among topics and outlets, containing media bias labels on the word and sentence level. Based on our data, we also introduce a method to automatically detect bias-inducing sentences in news articles. Our best-performing BERT-based model is pre-trained on a larger corpus consisting of distant labels. Fine-tuning and evaluating the model on our proposed supervised dataset, we achieve a macro $F_{1}$ of 0.804, outperforming existing methods.
DBLP is the largest open-access repository of scientific articles on computer science and provides metadata related to publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in the research activity, productivity, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), topics of interest, and citation patterns. Our findings show that computer science is a growing research field (approximately 15% annually) with an active, collaborative researcher community. While papers in recent years provide more bibliographic entries compared to previous decades, the average number of citations is still declining. Investigating papers' abstracts shows that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplementary research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.
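A back-of-the-envelope consequence of the reported roughly 15% annual growth (our own arithmetic, not a figure from the dataset): the field's output doubles about every five years, since the doubling time solves $1.15^{t} = 2$:

```python
import math

# Years for a quantity growing 15% per year to double: solve 1.15**t = 2,
# i.e. t = ln(2) / ln(1.15).
doubling_time = math.log(2) / math.log(1.15)
print(round(doubling_time, 2))  # ≈ 4.96 years
```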
Reference texts, such as encyclopedias and news articles, can exhibit biased language when objective reporting is replaced by subjective writing. Existing approaches to detecting bias mostly rely on annotated data to train machine learning models. However, low inter-annotator agreement and limited comparability are substantial drawbacks of available media bias corpora. To evaluate data collection options, we collect and compare labels obtained from two popular crowdsourcing platforms. Our results demonstrate the lack of data quality in existing crowdsourcing approaches, underlining the need for a trained-expert framework to collect more reliable datasets. By creating such a framework and collecting a first dataset, we are able to improve Krippendorff's $\alpha = 0.144$ (crowdsourcing labels) to $\alpha = 0.419$ (expert labels). We conclude that detailed annotator training increases data quality and improves the performance of existing bias detection systems. We will continue to extend our dataset in the future.
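For readers unfamiliar with the metric: Krippendorff's $\alpha$ relates observed to expected disagreement, $\alpha = 1 - D_o/D_e$, where 1 means perfect agreement and 0 means chance-level agreement. A minimal sketch for nominal labels with complete data (our own illustration, not the study's analysis code):

```python
from collections import defaultdict

def krippendorff_alpha_nominal(annotations):
    """annotations: one list per unit, containing the labels
    all annotators assigned to that unit (nominal data)."""
    coincidence = defaultdict(float)  # pairable label co-occurrences
    totals = defaultdict(float)
    for labels in annotations:
        m = len(labels)
        if m < 2:
            continue  # units with a single label are not pairable
        for i, a in enumerate(labels):
            for j, b in enumerate(labels):
                if i != j:
                    coincidence[(a, b)] += 1 / (m - 1)
    for (a, _), v in coincidence.items():
        totals[a] += v
    n = sum(totals.values())
    observed = sum(v for (a, b), v in coincidence.items() if a != b)
    expected = sum(totals[a] * totals[b]
                   for a in totals for b in totals if a != b)
    return 1 - (n - 1) * observed / expected
```

On this scale the jump from 0.144 to 0.419 is substantial, though still below the 0.667 threshold Krippendorff suggests for tentative conclusions.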
Media coverage has a substantial effect on the public perception of events. The way the media frames events can significantly alter society's beliefs and perceptions. Nevertheless, nearly all media outlets are known to report news in a biased way. While such bias can be introduced by altering the word choice or omitting information, the perception of bias also strongly depends on a reader's personal background. Media bias is therefore a very complex construct to identify and analyze. Even though media bias has been the subject of many studies, previous assessment strategies are oversimplified and lack overlap and empirical evaluation. Thus, this study aims to develop a scale that can serve as a reliable standard for evaluating article bias. To name one example: if we intend to measure bias in a news article, should we ask, "How biased is the article?", or should we instead ask, "How did the article treat the American president?". We conducted a literature search to find relevant questions about text perception from previous research on the topic. In a multi-iteration process, we first summarized and condensed these questions to arrive at a complete and representative set of possible question types about bias. The final set consists of 25 questions with varying answer formats, 17 questions using semantic differentials, and six feeling ratings. We tested each question on 190 articles with 663 participants overall to determine how well the questions measure an article's perceived bias. Our results show that 21 final items are suitable and reliable for measuring the perception of media bias. We publish the final set of questions at http://bias-question-tree.gipplab.org/.
We present a free and open-source tool for creating web-based surveys that include text annotation tasks. Existing tools offer either text annotation or survey functionality, but not both. Combining the two input types is particularly relevant for investigating readers' perception of a text, which also depends on the reader's background, such as age, gender, and education. Our tool primarily caters to the needs of researchers in library and information science, the social sciences, and the humanities who conduct surveys involving content analysis, e.g., on media bias, political communication, or fake news.
Slanted news coverage, also called media bias, can heavily influence how news consumers interpret and react to the news. To automatically identify biased language, we propose an exploratory approach that compares the contexts of related words. We train two word embedding models, one on texts from left-wing and one on texts from right-wing news outlets. Our hypothesis is that a word's representations in both embedding spaces are more similar for non-biased words than for biased words. The underlying idea is that the context of biased words varies more strongly across different news outlets than the context of non-biased words, since the perception of a biased word differs depending on its context. While we do not find statistical significance to accept the hypothesis, the results show the effectiveness of the approach. For example, after a linear mapping between the two word embedding spaces, 31% of the words with the largest distances potentially induce bias. To improve the results, we find that the dataset needs to be significantly larger, and we derive further methodology as future research directions. To our knowledge, this paper presents the first in-depth look at the context of bias words as measured by word embeddings.
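The linear mapping between the two embedding spaces can be illustrated with orthogonal Procrustes alignment: learn a rotation that best aligns the two spaces, then rank words by the distance between their mapped and actual vectors. The sketch below is our own toy illustration with NumPy on random data; the authors' exact mapping procedure and embedding models may differ:

```python
import numpy as np

def align_spaces(X, Y):
    """Orthogonal map W minimizing ||X @ W - Y||_F (Procrustes solution)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def bias_candidates(X, Y, words, top_k=3):
    """Words whose left-outlet vector, after mapping, lands farthest from
    their right-outlet vector are candidate bias-inducing words."""
    W = align_spaces(X, Y)
    distances = np.linalg.norm(X @ W - Y, axis=1)
    order = np.argsort(-distances)  # largest residual distance first
    return [words[i] for i in order[:top_k]]
```

Under the paper's hypothesis, words surfacing at the top of this ranking are those whose usage context diverges most between left- and right-wing outlets.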