Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGE models to infer inductively. To address this challenge, we resort to analogical inference and propose AnKGE, a novel and general self-supervised framework that enhances KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. For each level of analogical inference, AnKGE trains an analogy function that takes the original element embedding from a well-trained KGE model as input and outputs the analogical object embedding. To combine the inductive inference capability of the original KGE model with the analogical inference capability added by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights into the score function for prediction. Extensive experiments on the FB15k-237 and WN18RR datasets show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
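As a rough illustration of the interpolation described above, the following sketch combines a base KGE score with level-wise analogy scores under adaptive weights. The function and parameter names (base_score, analogy_scores, lam) are hypothetical, not the paper's API; this is a minimal sketch of the scoring idea, assuming softmax-normalized weights.

    import torch

    def ankge_score(base_score: torch.Tensor,
                    analogy_scores: torch.Tensor,
                    weights: torch.Tensor,
                    lam: float = 0.5) -> torch.Tensor:
        """Interpolate a base KGE score with level-wise analogy scores.

        base_score:     (batch,) scores from the well-trained KGE model
        analogy_scores: (batch, 3) scores from entity-, relation-, and
                        triple-level analogical inference
        weights:        (batch, 3) adaptive weights (hypothetical form)
        lam:            interpolation coefficient (assumed)
        """
        w = torch.softmax(weights, dim=-1)
        analogy = (w * analogy_scores).sum(dim=-1)   # weighted analogy score
        return (1 - lam) * base_score + lam * analogy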
Designing better deep networks and better reinforcement learning (RL) algorithms are both important for deep RL. This work focuses on the former. Previous methods build the network from several modules such as CNN, LSTM, and attention. Recent methods combine the Transformer with these modules for better performance. However, training a network composed of mixed modules requires tedious optimization tricks, making these methods inconvenient to use in practice. In this paper, we propose to design \emph{pure Transformer-based networks} for deep RL, aiming to provide off-the-shelf backbones for both the online and offline settings. Specifically, we propose the Transformer in Transformer (TIT) backbone, which cascades two Transformers in a very natural way: the inner one processes a single observation, while the outer one processes the observation history; combining both is expected to extract spatial-temporal representations for good decision-making. Experiments show that TIT consistently achieves satisfactory performance across different settings.
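A minimal sketch of the cascaded structure, assuming patch/token inputs and mean pooling between the two Transformers (both assumptions; the paper's exact tokenization and pooling are not given in the abstract):

    import torch
    import torch.nn as nn

    class TIT(nn.Module):
        """Transformer-in-Transformer sketch for RL: the inner Transformer
        encodes each observation's tokens (spatial), the outer Transformer
        encodes the observation history (temporal)."""

        def __init__(self, token_dim=64, n_heads=4, n_layers=2):
            super().__init__()
            make = lambda: nn.TransformerEncoder(
                nn.TransformerEncoderLayer(token_dim, n_heads, batch_first=True),
                num_layers=n_layers)
            self.inner, self.outer = make(), make()

        def forward(self, obs_tokens):
            # obs_tokens: (batch, history, tokens_per_obs, token_dim)
            b, h, t, d = obs_tokens.shape
            per_obs = self.inner(obs_tokens.reshape(b * h, t, d))  # spatial
            obs_emb = per_obs.mean(dim=1).reshape(b, h, d)         # pool tokens
            history = self.outer(obs_emb)                          # temporal
            return history[:, -1]   # representation of the current step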
Natural Language Processing (NLP) has been revolutionized by the use of Pre-trained Language Models (PLMs) such as BERT. Despite setting new records in nearly every NLP task, PLMs still face a number of challenges, including poor interpretability, weak reasoning capability, and the need for large amounts of expensive annotated data when applied to downstream tasks. By integrating external knowledge into PLMs, \textit{\underline{K}nowledge-\underline{E}nhanced \underline{P}re-trained \underline{L}anguage \underline{M}odels} (KEPLMs) have the potential to overcome these limitations. In this paper, we examine KEPLMs systematically through a series of studies. Specifically, we outline the common types and different formats of knowledge to be integrated into KEPLMs, detail the existing methods for building and evaluating KEPLMs, present the applications of KEPLMs in downstream tasks, and discuss future research directions. Researchers will benefit from this survey by gaining a quick and comprehensive overview of the latest developments in this field.
Contrastive deep graph clustering, which aims to divide nodes into disjoint groups via contrastive mechanisms, is a challenging research area. Among recent works, hard sample mining-based algorithms have attracted great attention for their promising performance. However, we find that existing hard sample mining methods have two problems. 1) In the hardness measurement, important structural information is overlooked in the similarity calculation, degrading the representativeness of the selected hard negative samples. 2) Previous works focus merely on hard negative sample pairs while neglecting hard positive sample pairs; nevertheless, samples within the same cluster but with low similarity should also be carefully learned. To solve these problems, we propose a novel contrastive deep graph clustering method dubbed Hard Sample Aware Network (HSAN), which introduces a comprehensive similarity measure criterion and a general dynamic sample weighting strategy. Concretely, in our algorithm, the similarities between samples are calculated by considering both the attribute embeddings and the structure embeddings, better revealing sample relationships and assisting the hardness measurement. Moreover, guided by carefully collected high-confidence clustering information, our proposed weight modulating function first recognizes the positive and negative samples and then dynamically up-weights the hard sample pairs while down-weighting the easy ones. In this way, our method mines not only the hard negative samples but also the hard positive samples, further improving the discriminative capability of the samples. Extensive experiments and analyses demonstrate the superiority and effectiveness of our proposed method.
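The two ingredients above (structure-aware similarity and dynamic weighting) can be sketched as follows. This is an illustrative reading of the abstract, not HSAN's exact formulation: the mixing coefficient alpha, the sigmoid modulation, and the parameter beta are all assumptions.

    import torch
    import torch.nn.functional as F

    def pair_similarity(attr_emb, struct_emb, alpha=0.5):
        """Combine attribute and structure embeddings when measuring
        sample similarity (alpha is a hypothetical mixing weight)."""
        a = F.normalize(attr_emb, dim=-1)
        s = F.normalize(struct_emb, dim=-1)
        return alpha * a @ a.T + (1 - alpha) * s @ s.T

    def modulate_weights(sim, is_positive, beta=5.0):
        """Up-weight hard pairs, down-weight easy ones.
        Hard positives: same cluster but low similarity.
        Hard negatives: different cluster but high similarity."""
        hardness = torch.where(is_positive, 1.0 - sim, sim)
        return torch.sigmoid(beta * (hardness - 0.5))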
Proximal Policy Optimization (PPO) is a highly popular policy-based deep reinforcement learning (DRL) approach. However, we observe that the homogeneous exploration process in PPO can cause an unexpected stability issue during training. To address this issue, we propose PPO-UE, a PPO variant equipped with self-adaptive uncertainty-aware exploration (UE) based on a ratio uncertainty level. PPO-UE is designed to improve convergence speed and performance given an optimized ratio uncertainty level. Extensive sensitivity analysis over varying ratio uncertainty levels shows that PPO-UE considerably outperforms the baseline PPO in Roboschool continuous control tasks.
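The abstract does not specify the exact "ratio uncertainty" mechanism, so the following is only a speculative sketch: it measures how far the standard PPO importance ratio drifts from 1 and scales the exploration noise accordingly. The clipped loss is standard PPO; the exploration_std function is an assumption for illustration.

    import torch

    def ppo_clip_loss(logp_new, logp_old, adv, clip_eps=0.2):
        """Standard PPO clipped surrogate loss; also returns the ratio."""
        ratio = torch.exp(logp_new - logp_old)
        unclipped = ratio * adv
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
        return -torch.min(unclipped, clipped).mean(), ratio

    def exploration_std(ratio, base_std=0.1, k=0.5):
        """Hypothetical uncertainty-aware exploration: widen the action
        noise when the ratio deviates strongly from 1 (high uncertainty)."""
        uncertainty = (ratio - 1.0).abs().mean()
        return base_std * (1.0 + k * uncertainty)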
In real teaching scenarios, an excellent teacher always teaches what he (or she) is good at but the student is not, giving the student the best assistance in making up for his (or her) weaknesses and becoming good overall. Enlightened by this, we introduce this approach into the knowledge distillation framework and propose a data-based distillation method named ``Teaching what you Should Teach (TST)''. Specifically, TST contains a neural network-based data augmentation module with a prior bias, which can assist in finding what the teacher is good at while the student is not, by learning magnitudes and probabilities to generate suitable samples. By training the data augmentation module and the generalized distillation paradigm in turn, a student model with excellent generalization ability can be created. To verify the effectiveness of TST, we conducted extensive comparative experiments on object recognition (CIFAR-100 and ImageNet-1k), detection (MS-COCO), and segmentation (Cityscapes) tasks. As experimentally demonstrated, TST achieves state-of-the-art performance on almost all teacher-student pairs. Furthermore, we present intriguing studies of TST, including how to solve the performance degradation caused by a stronger teacher and what magnitudes and probabilities the distillation framework needs.
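One way to picture an augmentation module with learnable magnitudes and probabilities is the sketch below. The class name, the operation interface, and the tanh/sigmoid parameterizations are all illustrative assumptions, not TST's actual design.

    import torch
    import torch.nn as nn

    class LearnableAugment(nn.Module):
        """Per-operation magnitudes and application probabilities are
        learnable parameters, so training can steer augmentation toward
        samples where the teacher outperforms the student."""

        def __init__(self, n_ops=4):
            super().__init__()
            self.magnitude = nn.Parameter(torch.zeros(n_ops))   # op strengths
            self.prob_logit = nn.Parameter(torch.zeros(n_ops))  # op probabilities

        def forward(self, x, ops):
            # ops: list of callables op(x, magnitude) -> augmented x
            for i, op in enumerate(ops):
                p = torch.sigmoid(self.prob_logit[i])
                if torch.rand(()) < p:   # hard sampling; a relaxed sampler
                    # would be needed to backprop through p in practice
                    x = op(x, torch.tanh(self.magnitude[i]))
            return x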
Existing deep learning based HDRTV reconstruction methods assume a certain kind of tone mapping operator (TMO) as the degradation procedure for synthesizing SDRTV-HDRTV pairs for supervised training. In this paper, we argue that, although traditional TMOs exploit efficient dynamic range compression priors, they have several drawbacks in modeling realistic degradation: information over-preservation, color bias, and possible artifacts, making the trained reconstruction networks hard to generalize to real-world cases. To solve this problem, we propose a learning-based data synthesis approach that learns the properties of real-world SDRTVs by integrating several tone mapping priors into both the network structure and the loss functions. Specifically, we design a conditioned two-stream network that uses prior tone mapping results as guidance to synthesize SDRTVs through both global and local transformations. To train the data synthesis network, we introduce a novel self-supervised content loss that constrains different aspects of the synthesized SDRTVs in regions with different brightness distributions, together with an adversarial loss that pushes the details to be more realistic. To validate the effectiveness of our approach, we synthesize SDRTV-HDRTV pairs with our method and use them to train several HDRTV reconstruction networks. We then collect two inference datasets containing labeled and unlabeled real-world SDRTVs, respectively. Experimental results demonstrate that networks trained with our synthesized data generalize significantly better to these two real-world datasets than existing solutions.
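For intuition about the kind of global dynamic range compression prior such a degradation model generalizes, here is the classic mu-law TMO, a textbook operator rather than the paper's learned network:

    import numpy as np

    def mu_law_tmo(hdr, mu=5000.0):
        """Mu-law tone mapping: compresses normalized HDR values in [0, 1]
        into SDR range, allocating more precision to dark regions."""
        hdr = np.clip(hdr, 0.0, 1.0)
        return np.log1p(mu * hdr) / np.log1p(mu)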
Deep learning systems have been reported to achieve state-of-the-art performance in many applications, and the key is the existence of well-trained classifiers on benchmark datasets. As the mainstream loss function, cross-entropy can easily lead us to find models that exhibit severe overfitting behavior. In this paper, we show that the existing cross-entropy loss minimization problem essentially learns the label-conditional entropy (CE) of the underlying data distribution of the dataset. However, the CE learned in this way does not characterize well the information shared by the label and the input. In this paper, we propose a mutual information learning framework in which we train deep neural network classifiers by learning the mutual information between the label and the input. Theoretically, we give a lower bound on the population classification error in terms of the mutual information. In addition, we derive lower and upper bounds on the mutual information for a concrete binary classification data model in $\mathbb{R}^n$, together with a lower bound on the error probability in this setting. Empirically, we conduct extensive experiments on several benchmark datasets to support our theory. The mutual information learned classifiers (MILCs) achieve better generalization performance than the conditional entropy learned classifiers (CELCs), with improvements that can exceed 10\% in test accuracy.
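Since $I(X;Y) = H(Y) - H(Y|X)$ and cross-entropy estimates the conditional entropy $H(Y|X)$, one common surrogate for mutual information learning adds a marginal label-entropy term. The sketch below follows that decomposition; the paper's exact estimator may differ, and the batch-mean estimate of the marginal is an assumption.

    import torch
    import torch.nn.functional as F

    def mutual_information_loss(logits, labels):
        """Minimize H(Y|X) (cross-entropy) while maximizing H(Y)
        (entropy of the marginal prediction), approximating -I(X;Y)."""
        cond_entropy = F.cross_entropy(logits, labels)        # ~ H(Y|X)
        marginal = F.softmax(logits, dim=-1).mean(dim=0)      # ~ p(Y) over batch
        label_entropy = -(marginal * torch.log(marginal + 1e-12)).sum()  # ~ H(Y)
        return cond_entropy - label_entropy   # lower loss => higher MI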
Transparent objects are widely used in industrial automation and daily life. However, robust visual recognition and perception of transparent objects has always been a major challenge. Currently, due to the refraction and reflection of light, most commercial-grade depth cameras are still poor at sensing the surfaces of transparent objects. In this work, we propose a Transformer-based transparent object depth estimation approach from a single RGB-D input. We observe that the global characteristics of the Transformer make it easier to extract contextual information for depth estimation in transparent regions. In addition, to better enhance the fine-grained features, a feature fusion module (FFM) is designed to assist coherent prediction. Our empirical evidence shows that our model brings significant improvements over previous state-of-the-art convolution-based counterparts on recent popular datasets, e.g., a 25% gain in RMSE and a 21% gain in REL. Extensive results show that our Transformer-based model can better aggregate an object's RGB and inaccurate depth information to obtain a better depth representation. Our code and pre-trained model will be available at https://github.com/yuchendoudou/tode.
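The abstract does not detail the FFM, so the following is only one plausible reading: a learned gate that blends coarse Transformer features with fine-grained shallow features. The class name and gating design are assumptions.

    import torch
    import torch.nn as nn

    class FeatureFusionModule(nn.Module):
        """Gated fusion of coarse (global, Transformer) and fine
        (shallow, detail-rich) feature maps of the same shape."""

        def __init__(self, channels):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=1),
                nn.Sigmoid())

        def forward(self, coarse, fine):
            g = self.gate(torch.cat([coarse, fine], dim=1))
            return g * fine + (1 - g) * coarse   # per-pixel blend of streams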
High-resolution representations are important for vision-based robotic grasping problems. Existing works generally encode the input image into a low-resolution representation via sub-networks and then recover a high-resolution representation. This loses spatial information, and the errors introduced by the decoder become more serious when multiple types of objects are considered or objects are far away from the camera. To address these issues, we revisit the design paradigm of CNNs for robotic perception tasks. We demonstrate that using parallel branches, as opposed to serially stacked convolutional layers, is a more powerful design for robotic visual grasping tasks. In particular, we provide neural network design guidelines for robotic perception tasks (e.g., high-resolution representation and lightweight design) that respond to the challenges of different manipulation scenarios. We then develop a novel grasping visual architecture, referred to as HRG-Net, a parallel-branch structure that always maintains high-resolution representations and repeatedly exchanges information across resolutions. Extensive experiments validate that these two designs can effectively improve the accuracy of vision-based grasping and accelerate network training. We show a series of comparative experiments in real physical environments on YouTube: https://youtu.be/jhlsp-xzhfy.
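A minimal sketch of the parallel-branch idea follows, in the spirit of HRNet-style cross-resolution exchange; HRG-Net's exact blocks may differ, and the assumption here is that the low-resolution branch runs at exactly half the resolution of the high-resolution branch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ParallelExchange(nn.Module):
        """Two branches run side by side and exchange information each
        step, so the high-resolution representation is never discarded."""

        def __init__(self, ch):
            super().__init__()
            self.hi = nn.Conv2d(ch, ch, 3, padding=1)
            self.lo = nn.Conv2d(ch, ch, 3, padding=1)
            self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)

        def forward(self, x_hi, x_lo):
            # x_lo is assumed to be at half the spatial size of x_hi
            h, l = self.hi(x_hi), self.lo(x_lo)
            x_lo = l + self.down(h)                                  # hi -> lo
            x_hi = h + F.interpolate(l, size=h.shape[-2:],
                                     mode='bilinear',
                                     align_corners=False)            # lo -> hi
            return x_hi, x_lo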