Document-level relation extraction (DocRE) faces two overlooked challenges: the long-tail problem and the multi-label problem. Previous work focuses mainly on obtaining better contextual representations for entity pairs but hardly addresses these challenges. In this paper, we analyze the co-occurrence correlation of relations and introduce it into the DocRE task for the first time. We argue that the correlations can not only transfer knowledge between data-rich relations and data-scarce ones to assist in the training of long-tailed relations, but also reflect semantic distance, guiding the classifier to identify semantically close relations for multi-label entity pairs. Specifically, we use relation embeddings as a medium and propose two co-occurrence prediction sub-tasks, from coarse- and fine-grained perspectives, to capture relation correlations. Finally, the learned correlation-aware embeddings are used to guide the extraction of relational facts. Extensive experiments on two popular DocRE datasets show that our method achieves superior results compared to baselines. Insightful analysis also demonstrates the potential of relation correlations to address the above challenges.
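As a minimal sketch of the correlation analysis the abstract alludes to, the snippet below estimates relation co-occurrence statistics from multi-label annotations. The variable names (`labels`, `cooc`, `cond`) and the conditional-probability estimator are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# labels: (num_entity_pairs, num_relations) multi-hot matrix; each row marks
# the set of relations expressed by one entity pair (illustrative toy data).
labels = np.array([
    [1, 1, 0],   # pair 0 expresses relations 0 and 1
    [1, 0, 0],
    [0, 1, 1],
    [1, 1, 0],
], dtype=np.float64)

cooc = labels.T @ labels                     # cooc[i, j] = #pairs with both r_i and r_j
freq = np.diag(cooc)                         # per-relation frequency
cond = cooc / np.maximum(freq[:, None], 1)   # cond[i, j] ~ P(r_j | r_i)

print(np.round(cond, 2))
```

A high `cond[i, j]` for a rare relation `r_j` co-occurring with a frequent `r_i` is exactly the kind of signal that could transfer knowledge from data-rich to data-scarce relations.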
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
A common scenario in Multilingual Neural Machine Translation (MNMT) is that each translation task arrives in a sequential manner, and the training data of previous tasks is unavailable. In this scenario, current methods suffer heavily from catastrophic forgetting (CF). To alleviate CF, we investigate knowledge-distillation-based lifelong learning methods. Specifically, in the one-to-many scenario, we propose a multilingual distillation method to make the new model (student) jointly learn multilingual output from the old model (teacher) and the new task. In the many-to-one scenario, we find that direct distillation faces an extreme partial distillation problem, and we propose two different methods to address it: pseudo input distillation and reverse teacher distillation. The experimental results on twelve translation tasks show that the proposed methods can better consolidate previous knowledge and sharply alleviate CF.
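The distillation idea above can be sketched as a combined objective: the student is supervised by the new task's cross-entropy while also matching the old teacher's output distribution. The `softmax`/`kl_div` helpers and the mixing weight `alpha` are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q):
    # KL(p || q) for strictly positive distributions
    return float(np.sum(p * (np.log(p) - np.log(q))))

def lifelong_loss(student_logits, teacher_logits, gold_index, alpha=0.5):
    p_student = softmax(student_logits)
    p_teacher = softmax(teacher_logits)
    ce_new = -float(np.log(p_student[gold_index]))  # new-task supervision
    distill = kl_div(p_teacher, p_student)          # preserve old knowledge
    return (1 - alpha) * ce_new + alpha * distill

loss = lifelong_loss(np.array([2.0, 0.5, 0.1]),
                     np.array([1.8, 0.6, 0.2]), gold_index=0)
print(loss)
```

When the student exactly matches the teacher, the distillation term vanishes and only the new-task loss drives the update; `alpha` trades off plasticity against retention.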
Face super-resolution is a domain-specific image super-resolution, which aims to generate High-Resolution (HR) face images from their Low-Resolution (LR) counterparts. In this paper, we propose a novel face super-resolution method, namely Semantic Encoder guided Generative Adversarial Face Ultra-Resolution Network (SEGA-FURN) to ultra-resolve an unaligned tiny LR face image to its HR counterpart with multiple ultra-upscaling factors (e.g., 4x and 8x). The proposed network is composed of a novel semantic encoder that has the ability to capture the embedded semantics to guide adversarial learning and a novel generator that uses a hierarchical architecture named Residual in Internal Dense Block (RIDB). Moreover, we propose a joint discriminator which discriminates both image data and embedded semantics. The joint discriminator learns the joint probability distribution of the image space and latent space. We also use a Relativistic average Least Squares loss (RaLS) as the adversarial loss to alleviate the gradient vanishing problem and enhance the stability of the training procedure. Extensive experiments on large face datasets have proved that the proposed method can achieve superior super-resolution results and significantly outperform other state-of-the-art methods in both qualitative and quantitative comparisons.
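A minimal sketch of a Relativistic average Least Squares (RaLS) adversarial loss in the spirit the abstract describes: the discriminator estimates how much more realistic real samples are than the average fake, and vice versa. This follows the standard RaLSGAN formulation; the paper's exact variant may differ.

```python
import numpy as np

def rals_d_loss(real_scores, fake_scores):
    # Discriminator: push real above the mean fake, fake below the mean real.
    d_real_rel = real_scores - fake_scores.mean()
    d_fake_rel = fake_scores - real_scores.mean()
    return float(((d_real_rel - 1) ** 2).mean() + ((d_fake_rel + 1) ** 2).mean())

def rals_g_loss(real_scores, fake_scores):
    # Generator: symmetric objective with the targets swapped.
    d_real_rel = real_scores - fake_scores.mean()
    d_fake_rel = fake_scores - real_scores.mean()
    return float(((d_real_rel + 1) ** 2).mean() + ((d_fake_rel - 1) ** 2).mean())

real = np.array([0.9, 0.8])   # toy discriminator scores on real HR faces
fake = np.array([0.1, 0.2])   # toy scores on generated faces
print(rals_d_loss(real, fake), rals_g_loss(real, fake))
```

Because the least-squares targets are relative differences rather than raw scores, gradients remain informative even when the discriminator saturates, which is the stability benefit the abstract points to.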
Code pre-trained models (CodePTMs) have recently demonstrated significant success in code intelligence. To interpret these models, some probing methods have been applied. However, these methods fail to consider the inherent characteristics of code. In this paper, to address this problem, we propose a novel probing method, CAT-probing, to quantitatively interpret how CodePTMs attend to code structure. We first denoise the input code sequences based on the token types pre-defined by the compilers, filtering out those tokens whose attention scores are too small. After that, we define a new metric, CAT-score, to measure the commonality between the token-level attention scores generated by CodePTMs and the pair-wise distances between corresponding AST nodes. The higher the CAT-score, the stronger the ability of CodePTMs to capture code structure. We conduct extensive experiments integrating CAT-probing with representative CodePTMs for different programming languages. Experimental results show the effectiveness of CAT-probing in CodePTM interpretation. Our code and data are publicly available at https://github.com/nchen909/CodeAttention.
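A hedged sketch of a commonality score in the spirit of CAT-score: count how often token pairs that receive high attention are also close in the AST. The thresholds and the exact matching rule are illustrative assumptions, not the paper's definition of CAT-score.

```python
import numpy as np

def commonality_score(attention, ast_distance, attn_thresh=0.3, dist_thresh=2):
    high_attn = attention > attn_thresh       # strongly attended token pairs
    close_ast = ast_distance <= dist_thresh   # structurally close AST pairs
    # Fraction of pairs where attention and AST structure agree.
    return float((high_attn == close_ast).mean())

attention = np.array([[0.9, 0.4, 0.1],        # toy token-level attention map
                      [0.4, 0.8, 0.2],
                      [0.1, 0.2, 0.7]])
ast_distance = np.array([[0, 1, 4],           # toy pairwise AST node distances
                         [1, 0, 3],
                         [4, 3, 0]])
print(commonality_score(attention, ast_distance))
```

A score near 1 means attention concentrates on structurally related tokens, which is the reading the abstract gives to a high CAT-score.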
With the rise of computational power, using data-driven methods to jointly design a robot's morphology and controller has become a viable approach. However, evaluating the fitness of the controller under each morphology is time-consuming. As a pioneering data-driven method, Co-adaptation utilizes a dual-network mechanism, aiming to learn a Q-function conditioned on morphology parameters to replace the traditional evaluation of diverse candidates, thereby speeding up the optimization. In this paper, we find that Co-adaptation suffers from training errors during parameter transfer and exploration errors under state-action distribution shift, which harm its performance. We propose a framework of concurrent networks that combines online and offline RL methods. By flexibly utilizing a behavior cloning term, we mitigate the impact of the above issues on the results. Simulation and physical experiments demonstrate that our proposed method outperforms baseline algorithms, illustrating that it is an effective way to discover the optimal combination of morphology and controller.
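The behavior cloning term mentioned above can be sketched as a regularized policy objective in the style of TD3+BC-like offline RL: maximize the learned Q-value while staying close to actions observed in the dataset. The weighting `bc_weight` and the toy values are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def policy_loss(q_values, policy_actions, dataset_actions, bc_weight=1.0):
    rl_term = -float(np.mean(q_values))  # maximize Q => minimize its negative
    # Behavior cloning: penalize deviation from dataset actions.
    bc_term = float(np.mean((policy_actions - dataset_actions) ** 2))
    return rl_term + bc_weight * bc_term

q = np.array([1.0, 2.0])          # toy Q-values of the policy's actions
pi_a = np.array([0.5, -0.5])      # actions proposed by the policy
data_a = np.array([0.4, -0.6])    # actions from the (offline) dataset
print(policy_loss(q, pi_a, data_a))
```

Tuning `bc_weight` trades off exploiting the Q-function (online RL) against conservatively imitating logged data (offline RL), matching the "flexible" use of behavior cloning the abstract describes.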
Visual representation learning is the key to solving various vision problems. Relying on the seminal grid-structure prior, convolutional neural networks (CNNs) have been the de facto standard architecture for most deep vision models. For instance, classical semantic segmentation methods often adopt a fully convolutional network (FCN) with an encoder-decoder architecture. The encoder progressively reduces the spatial resolution and learns more abstract visual concepts with larger receptive fields. Since context modeling is critical for segmentation, the latest efforts have focused on enlarging the receptive field, through dilated (i.e., atrous) convolutions or inserting attention modules. However, the FCN-based architecture remains unchanged. In this paper, we aim to provide an alternative perspective by treating visual representation learning as a sequence-to-sequence prediction task. Specifically, we deploy a pure Transformer to encode an image as a sequence of patches, without local convolution and resolution reduction. With the global context modeled in every layer of the Transformer, stronger visual representations can be learned to better tackle vision tasks. In particular, our segmentation model, termed SEgmentation TRansformer (SETR), excels on ADE20K (50.28% mIoU, the first position on the test leaderboard on the day of submission) and Pascal Context (55.83% mIoU), and achieves competitive results on Cityscapes. Further, we formulate a family of Hierarchical Local-Global (HLG) Transformers, characterized by local attention within windows and global attention across windows, in a hierarchical and pyramidal architecture. Extensive experiments show that our method achieves appealing performance on a variety of visual recognition tasks (e.g., image classification, object detection, instance segmentation, and semantic segmentation).
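A minimal sketch of the sequence-to-sequence view described above: an image is cut into fixed-size patches, each flattened and linearly projected, yielding a token sequence for a pure Transformer encoder. The shapes and the random projection are illustrative, not SETR's actual configuration.

```python
import numpy as np

def image_to_patch_tokens(image, patch_size, proj):
    # Cut (H, W, C) image into non-overlapping patches, flatten and embed each.
    h, w, c = image.shape
    tokens = []
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            patch = image[i:i + patch_size, j:j + patch_size, :]
            tokens.append(patch.reshape(-1) @ proj)  # linear projection
    return np.stack(tokens)  # (num_patches, embed_dim)

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8, 3))        # toy 8x8 RGB image
proj = rng.standard_normal((4 * 4 * 3, 16))   # patch_dim -> embed_dim
tokens = image_to_patch_tokens(image, patch_size=4, proj=proj)
print(tokens.shape)
```

Because no pooling or striding is applied, the token sequence keeps full spatial coverage at constant resolution, which is what lets every Transformer layer model global context.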
Video instance segmentation (VIS) aims to classify, segment, and track object instances in video sequences. Recent Transformer-based neural networks have demonstrated their powerful capability of modeling spatio-temporal correlations for the VIS task. Relying on video- or clip-level inputs, however, they suffer from high latency and computational cost. We propose a robust context fusion network that tackles VIS in an online fashion, predicting instance segmentation frame by frame using a few preceding frames. To efficiently acquire precise and temporally consistent predictions for each frame, the key idea is to fuse effective and compact context from reference frames into the target frame. Considering the different effects of reference and target frames on the target prediction, we first summarize contextual features through importance-aware compression. A Transformer encoder is adopted to fuse the compressed context. We then leverage order-preserving instance embeddings to convey identity-aware information and match identities to predicted instance masks. We demonstrate that our robust fusion network achieves the best performance among existing online VIS methods and is better than previously published clip-level methods on the YouTube-VIS 2019 and 2021 benchmarks. In addition, visual objects often have acoustic signatures that are naturally synchronized with them in audio-bearing video recordings. By leveraging the flexibility of our context fusion network on multi-modal data, we further investigate the influence of audio on video dense prediction tasks, which has never been discussed in existing works. We build an audio-visual instance segmentation dataset and demonstrate that acoustic signals in in-the-wild scenarios can benefit the VIS task.
Recently, Vision Transformer models have become important backbones for a range of vision tasks. However, these models are usually opaque, with weak feature interpretability. Moreover, there is currently no method built for an intrinsically interpretable Transformer that is able to explain its reasoning process and provide a faithful explanation. To close these crucial gaps, we propose a novel Vision Transformer, dubbed the eXplainable Vision Transformer (eX-ViT), an intrinsically interpretable Transformer model that is able to jointly discover robust interpretable features and perform the prediction. Specifically, eX-ViT is composed of an Explainable Multi-Head Attention (E-MHA) module, an Attribute-guided Explainer (AttE) module, and a self-supervised attribute-guided loss. E-MHA tailors explainable attention weights that are able to learn semantically interpretable representations of model decisions from local patches with noise robustness. Meanwhile, AttE is proposed to encode discriminative attribute features for the target object through diverse attribute discovery, which constitutes faithful evidence for the model's predictions. In addition, a self-supervised attribute-guided loss is developed for eX-ViT, which aims to learn enhanced representations through an attribute discriminability mechanism and an attribute diversity mechanism, so as to localize diverse and discriminative attributes and generate more robust explanations. As a result, we can uncover faithful and robust explanations with diverse attributes through the proposed eX-ViT.
Referring video object segmentation (R-VOS) aims to segment object masks in a video given a linguistic expression referring to the object. It is a recently introduced task that has attracted growing research attention. However, all existing works make a strong assumption: the object depicted by the expression must exist in the video, i.e., the expression and the video must share an object-level semantic consensus. This is often violated in the real world, where an expression can be queried against false videos, and existing methods always fail on such false queries because they abuse the assumption. In this work, we emphasize that studying semantic consensus is necessary to improve the robustness of R-VOS. Accordingly, we pose an extended task from R-VOS without the semantic consensus assumption, named Robust R-VOS ($\mathrm{R}^2$-VOS). The $\mathrm{R}^2$-VOS task is essentially related to the joint modeling of the primary R-VOS task and its dual problem (text reconstruction). We embrace the observation that the embedding spaces exhibit relational consistency through the cycle of text-video-text transformation, which connects the primary and dual problems. We leverage this cycle consistency to discriminate the semantic consensus, thus advancing the primary task. Parallel optimization of the primary and dual problems is enabled by introducing an early grounding medium. A new evaluation dataset, $\mathrm{R}^2$-Youtube-VOS, is collected to measure the robustness of R-VOS models against unpaired videos and expressions. Extensive experiments demonstrate that our method not only identifies negative pairs of unrelated expressions and videos, but also improves the segmentation accuracy for positive pairs with superior disambiguating ability. Our model achieves state-of-the-art performance on the Ref-DAVIS17, Ref-Youtube-VOS, and the novel $\mathrm{R}^2$-Youtube-VOS datasets.
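A hedged sketch of using cycle consistency to flag false queries, in the spirit of the abstract: embed the original expression and the expression reconstructed through the text-to-video-to-text cycle, and accept the pair only if the two embeddings remain consistent. The cosine measure and threshold are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def has_semantic_consensus(text_emb, reconstructed_emb, thresh=0.5):
    # A faithful cycle keeps the reconstructed text close to the original;
    # a false query drifts, breaking the consistency.
    return cosine(text_emb, reconstructed_emb) >= thresh

original = np.array([1.0, 0.9, 0.1])                          # toy text embedding
print(has_semantic_consensus(original, np.array([0.9, 1.0, 0.0])))   # consistent cycle
print(has_semantic_consensus(original, np.array([-1.0, 0.2, 0.9])))  # drifted cycle
```

Such a consistency check is what would let a robust model reject negative pairs (unrelated expression and video) instead of hallucinating a mask.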