Media has a substantial impact on public perception of events. A one-sided or polarizing perspective on a topic is usually described as media bias, and one way it is introduced into news articles is through word choice. Biased word choices are not always obvious and are often highly context-dependent, which makes bias difficult to detect. We propose a Transformer-based deep learning architecture trained via Multi-Task Learning on six bias-related data sets to tackle the media bias detection problem. Our best-performing implementation achieves a macro $F_{1}$ of 0.776, a performance boost of 3\% over our baseline, outperforming existing methods. Our results indicate that Multi-Task Learning is a promising alternative for improving existing baseline models in identifying slanted reporting.
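For reference, macro $F_{1}$ averages the per-class $F_{1}$ scores, weighting every class equally regardless of how many examples each class has. A minimal sketch of the metric (the labels below are made up for illustration, not the paper's data):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        f1s.append(f1)
    return sum(f1s) / len(f1s)

# Toy binary example: 0 = unbiased, 1 = biased (invented labels)
y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]
print(round(macro_f1(y_true, y_pred), 4))  # class 0: F1 = 2/3, class 1: F1 = 0.8 -> 0.7333
```

Because the mean is unweighted, macro $F_{1}$ penalizes a model that ignores the minority class, which matters for bias detection where biased sentences are typically rare.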
Media coverage has a substantial effect on the public perception of events. Nevertheless, media outlets are often biased. One way to bias a news article is to alter word choice. Automatically identifying bias by word choice is challenging, mainly because of the lack of gold-standard data sets and high context dependency. This paper presents BABE, a robust and diverse data set for media bias research created by trained experts. We also analyze why expert labels are essential in this domain. Compared to existing work, our data set offers better annotation quality and higher inter-annotator agreement. It consists of 3,700 sentences balanced across topics and outlets, with media bias labels on the word and sentence level. Based on our data, we also introduce a method to automatically detect bias-inducing sentences in news articles. Our best-performing BERT-based model is pre-trained on a larger corpus consisting of distant labels. Fine-tuning and evaluating the model on our proposed supervised data set, we achieve a macro $F_{1}$ score of 0.804, outperforming existing methods.
Compressing neural network architectures is important for deploying models to embedded or mobile devices; pruning and quantization are currently the major approaches to compressing neural networks. Both methods benefit when compression parameters are selected specifically for each layer. Finding good combinations of compression parameters, so-called compression policies, is hard, as the problem spans an exponentially large search space. Effective compression policies consider the influence of the specific hardware architecture on the compression methods used. We propose an algorithmic framework called Galen that searches for such policies via reinforcement learning, combining pruning and quantization to provide automatic compression of neural networks. Contrary to other approaches, we use the inference latency measured on the target hardware device as the optimization goal. The framework thereby supports compressing models for a given hardware target. We validate our approach using three different reinforcement learning agents: for pruning, for quantization, and for joint pruning and quantization. Besides demonstrating the functionality of our approach, we compressed a ResNet18 for CIFAR-10 on an embedded ARM processor to 20% of the original inference latency without significant loss of accuracy. Moreover, we show that a joint search using both pruning and quantization is superior to an individual search for policies using a single compression method.
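The exponentially large policy space can be made concrete: with per-layer choices of pruning ratio and bit-width, a network with $L$ layers and $|R|$ ratios and $|B|$ bit-widths admits $(|R|\cdot|B|)^{L}$ policies. The toy sketch below runs a plain random search under an invented latency model (not Galen's RL agents; the layer costs, the accuracy proxy, and all constants are made up for illustration):

```python
import random

# Hypothetical per-layer base latencies (ms) for a 4-layer toy network
BASE_LATENCY = [4.0, 8.0, 8.0, 2.0]
RATIOS = [0.0, 0.25, 0.5, 0.75]   # fraction of channels pruned per layer
BITS = [8, 4, 2]                  # quantization bit-widths per layer

def latency(policy):
    # Invented model: latency scales with kept channels and bit-width
    return sum(base * (1 - r) * (b / 8)
               for base, (r, b) in zip(BASE_LATENCY, policy))

def accuracy_proxy(policy):
    # Invented proxy: aggressive compression costs accuracy
    penalty = sum(0.1 * r + 0.02 * (8 - b) for r, b in policy) / len(policy)
    return 1.0 - penalty

def random_search(steps=2000, min_acc=0.9, seed=0):
    """Keep the fastest policy whose accuracy proxy stays above min_acc."""
    rng = random.Random(seed)
    best, best_lat = None, float("inf")
    for _ in range(steps):
        policy = [(rng.choice(RATIOS), rng.choice(BITS)) for _ in BASE_LATENCY]
        if accuracy_proxy(policy) >= min_acc and latency(policy) < best_lat:
            best, best_lat = policy, latency(policy)
    return best, best_lat

policy, lat = random_search()
print(policy, lat)
```

Even this naive search illustrates the abstract's point: the constraint couples all layers, so per-layer greedy choices are not enough, which is why a learned (RL) search over joint pruning-and-quantization policies pays off.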
Reference texts such as encyclopedias and news articles can exhibit biased language when objective reporting is replaced by subjective writing. Existing methods for detecting bias rely mostly on annotated data to train machine learning models. However, low annotator agreement and comparability are substantial drawbacks of the available media bias corpora. To assess data collection options, we collect and compare labels obtained from two popular crowdsourcing platforms. Our results demonstrate the lack of data quality in existing crowdsourced labels, highlighting the need for a trained-expert framework to gather more reliable data. By creating such a framework and collecting a first data set, we are able to improve Krippendorff's $\alpha = 0.144$ (crowdsourced labels) to $\alpha = 0.419$ (expert labels). We conclude that detailed annotation training improves data quality, boosting the performance of existing bias detection systems. We will continue to expand our data set in the future.
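Krippendorff's $\alpha$ measures inter-annotator agreement as observed versus expected disagreement, $\alpha = 1 - D_o/D_e$. A minimal sketch for nominal labels with complete ratings (the coincidence-matrix formulation; the example labels are invented, not the paper's data):

```python
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of per-unit label lists (each with >= 2 ratings)."""
    # Coincidence matrix: each ordered pair of ratings within a unit
    # contributes 1/(m - 1), where m is that unit's rating count.
    o = {}
    for ratings in units:
        m = len(ratings)
        for a, b in permutations(range(m), 2):
            pair = (ratings[a], ratings[b])
            o[pair] = o.get(pair, 0.0) + 1.0 / (m - 1)
    values = sorted({v for pair in o for v in pair})
    n_c = {c: sum(o.get((c, k), 0.0) for k in values) for c in values}
    n = sum(n_c.values())
    # Nominal distance: 0 if labels match, 1 otherwise
    observed = sum(v for (c, k), v in o.items() if c != k)
    expected = sum(n_c[c] * n_c[k] for c in values for k in values if c != k)
    return 1.0 - (n - 1) * observed / expected

# Two coders, four sentences; they disagree on one (made-up labels)
units = [["a", "a"], ["a", "b"], ["b", "b"], ["b", "b"]]
print(round(krippendorff_alpha_nominal(units), 4))  # 0.5333
```

Unlike raw percent agreement, $\alpha$ corrects for chance agreement, which is why it is the standard yardstick for comparing crowdsourced against expert annotations.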