Deep learning models, despite their great success in many fields over the past years, are usually data hungry, fail to perform well on unseen samples, and lack interpretability. Various kinds of prior knowledge often exist in the target domain, and their use can alleviate these deficiencies of deep learning. To better mimic the behavior of human brains, different advanced methods have been proposed to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning, which we refer to as knowledge-augmented deep learning (KADL). In this survey, we define the concept of KADL and introduce its three major tasks, i.e., knowledge identification, knowledge representation, and knowledge integration. Unlike existing surveys that focus on a specific type of knowledge, we provide a broad and complete taxonomy of domain knowledge and its representations. Based on our taxonomy, we provide a systematic review of existing techniques, in contrast to existing works that survey integration approaches agnostic to any taxonomy of knowledge. This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning. The thorough and critical review of numerous papers helps not only to understand current progress but also to identify future directions for research on knowledge-augmented deep learning.
This paper introduces Honor of Kings Arena, a reinforcement learning (RL) environment based on Honor of Kings, one of the world's most popular games. Compared with the environments studied in most previous work, ours presents new generalization challenges for competitive reinforcement learning. An agent competing against an opponent makes it a multi-agent problem, and it demands generalization ability because the agent has varying targets to control and different opponents to compete with. We describe the observation, action, and reward specifications of the Honor of Kings domain and provide an open-source Python-based interface for communicating with the game engine. We provide a variety of tasks over twenty target heroes in Honor of Kings Arena and present initial baseline results for RL-based methods with feasible computing resources. Finally, we showcase the generalization challenges imposed by Honor of Kings Arena and possible remedies for them. All of the software, including the environment class, is publicly available at https://github.com/tencent-ailab/hok_env. The documentation is available at https://aiarena.tencent.com/hok/doc/.
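As a rough illustration of how an RL baseline typically interacts with such an environment, the sketch below runs a random policy through a gym-style loop. The class `HoKLikeEnv`, its observation/action sizes, and the reward handling are assumptions made up for illustration; they are not the actual hok_env interface (see the repository documentation for the real API).

```python
import numpy as np

class HoKLikeEnv:
    """Hypothetical stand-in for a MOBA-style 1v1 environment (NOT the real hok_env API)."""
    def __init__(self, obs_dim=128, n_actions=12, episode_len=200):
        self.obs_dim, self.n_actions, self.episode_len = obs_dim, n_actions, episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return np.zeros(self.obs_dim, dtype=np.float32)

    def step(self, action):
        # Observation, reward, and opponent behavior are placeholders; the real
        # environment exposes game-engine state, hero-specific actions, and shaped rewards.
        self.t += 1
        obs = np.random.randn(self.obs_dim).astype(np.float32)
        reward = float(np.random.rand() < 0.05)   # sparse "kill/objective"-style signal
        done = self.t >= self.episode_len
        return obs, reward, done, {}

env = HoKLikeEnv()
obs, done, ret = env.reset(), False, 0.0
while not done:
    action = np.random.randint(env.n_actions)     # random policy as a trivial baseline
    obs, reward, done, _ = env.step(action)
    ret += reward
print(f"episode return: {ret}")
```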
Load balancing (LB) is a challenging problem in hybrid light fidelity (LiFi) and wireless fidelity (WiFi) networks (HLWNets) due to the heterogeneous nature of the access points (APs). Machine learning has the potential to provide a low-complexity LB solution with near-optimal network performance, at the cost of a training process. However, state-of-the-art (SOTA) learning-aided LB methods require retraining when the network environment (especially the number of users) changes, which significantly limits their practicality. In this paper, a novel deep neural network (DNN) structure named adaptive target-condition neural network (A-TCNN) is proposed, which conducts AP selection for one target user conditioned on the other users. In addition, an adaptive mechanism is developed that maps a smaller number of users to a larger, fixed number by splitting their data rate requirements, without affecting the AP selection result of the target user. This enables the proposed method to handle different numbers of users without retraining. Results show that A-TCNN achieves a network throughput very close to that of the testing dataset, with a gap of less than 3%. It is also shown that A-TCNN can obtain a network throughput comparable to two SOTA benchmarks while reducing the runtime by up to three orders of magnitude.
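To make the target-conditioned idea concrete, here is a minimal PyTorch sketch of an AP-selection network that scores APs for one target user conditioned on a summary of the remaining users. The feature dimensions, the mean-pooling over other users, and the layer sizes are illustrative assumptions, not the A-TCNN architecture from the paper.

```python
import torch
import torch.nn as nn

class TargetConditionNet(nn.Module):
    """Illustrative target-conditioned AP selector (dimensions and pooling are assumptions)."""
    def __init__(self, user_dim=8, n_aps=5, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(user_dim * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_aps),            # one logit per LiFi/WiFi AP
        )

    def forward(self, target_user, other_users):
        # target_user: (batch, user_dim); other_users: (batch, n_other, user_dim)
        condition = other_users.mean(dim=1)      # permutation-invariant summary of the other users
        return self.net(torch.cat([target_user, condition], dim=-1))

model = TargetConditionNet()
target = torch.randn(4, 8)                       # e.g. position / SNR / rate-requirement features
others = torch.randn(4, 10, 8)
ap_choice = model(target, others).argmax(dim=-1) # pick the highest-scoring AP for each target user
print(ap_choice)
```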
Causal discovery, which learns the causal relationships among variables from observational data, is important for many applications. Existing causal discovery methods assume that the data are sufficient, which may not be the case in many real-world datasets. As a result, many existing causal discovery methods can fail under limited data. In this work, we propose Bayesian-augmented frequentist independence tests to improve the performance of constraint-based causal discovery methods under insufficient data: 1) we first introduce a Bayesian method to estimate mutual information (MI), based on which we propose a robust MI-based independence test; 2) we then consider the Bayesian estimation of hypothesis likelihood and incorporate it into a well-defined statistical test, yielding a robust independence test based on statistical testing. We apply the proposed independence tests to constraint-based causal discovery methods and evaluate their performance on benchmark datasets with insufficient samples. The experiments show significant performance improvements over SOTA methods in terms of both accuracy and efficiency.
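As a toy illustration of the Bayesian flavor of the first ingredient, the sketch below estimates mutual information between two discrete variables from a Dirichlet-smoothed (posterior-mean) joint distribution and compares dependent and independent pairs; the prior strength, the discrete setting, and the use of a simple threshold are assumptions made for illustration, not the paper's estimator or test.

```python
import numpy as np

def bayesian_mi(x, y, alpha=1.0):
    """MI from the posterior-mean joint under a symmetric Dirichlet(alpha) prior (toy estimator)."""
    xs, ys = np.unique(x), np.unique(y)
    counts = np.zeros((len(xs), len(ys)))
    for xi, yi in zip(x, y):
        counts[np.searchsorted(xs, xi), np.searchsorted(ys, yi)] += 1
    p = (counts + alpha) / (counts.sum() + alpha * counts.size)   # posterior-mean joint distribution
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    return float((p * np.log(p / (px * py))).sum())

rng = np.random.default_rng(0)
a = rng.integers(0, 3, size=50)                  # deliberately small sample
b = (a + rng.integers(0, 2, size=50)) % 3        # dependent on a
c = rng.integers(0, 3, size=50)                  # independent of a
print(bayesian_mi(a, b), bayesian_mi(a, c))      # the dependent pair should score clearly higher
```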
General Continual Learning (GCL) aims at learning from non-independent and identically distributed (non-i.i.d.) stream data without catastrophic forgetting of old tasks, and without relying on task boundaries during either the training or testing stage. We reveal that relation and feature deviations are crucial causes of catastrophic forgetting, where relation deviation refers to the deficiency of the relationship among all classes in knowledge distillation, and feature deviation refers to indiscriminative feature representations. To this end, we propose a Complementary Calibration (CoCa) framework that mines the complementary model's outputs and features to alleviate the two deviations in the process of GCL. Specifically, we propose a new collaborative distillation approach to address the relation deviation. It distills the model's outputs by utilizing the ensemble dark knowledge of the new model's outputs and the reserved outputs, which maintains the performance of old tasks while balancing the relationship among all classes. Furthermore, we explore a collaborative self-supervision idea that leverages pretext tasks and supervised contrastive learning to address the feature deviation problem by learning complete and discriminative features for all classes. Extensive experiments on four popular datasets show that our CoCa framework achieves superior performance against state-of-the-art methods. Code is available at https://github.com/lijincm/CoCa.
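As a rough sketch of what "ensemble dark knowledge" distillation can look like, the snippet below builds a temperature-softened teacher by averaging the current model's (detached) outputs with the reserved outputs and minimizes a KL divergence toward it. The temperature, the equal-weight averaging, and the loss scaling are assumptions, not the exact CoCa loss.

```python
import torch
import torch.nn.functional as F

def collaborative_distillation_loss(current_logits, reserved_logits, T=2.0):
    """KL toward an ensemble of the (detached) current outputs and reserved old outputs (illustrative)."""
    teacher = 0.5 * (F.softmax(current_logits.detach() / T, dim=-1)
                     + F.softmax(reserved_logits / T, dim=-1))
    log_student = F.log_softmax(current_logits / T, dim=-1)
    # batchmean KL, scaled by T^2 as in standard knowledge distillation
    return F.kl_div(log_student, teacher, reduction="batchmean") * (T ** 2)

current = torch.randn(8, 10, requires_grad=True)  # current model outputs on replayed samples
reserved = torch.randn(8, 10)                     # stored outputs from when the samples were first seen
loss = collaborative_distillation_loss(current, reserved)
loss.backward()
print(loss.item())
```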
Eye gaze analysis is an important research problem in the fields of computer vision and human-computer interaction. Even with notable progress over the past decade, automatic gaze analysis remains challenging due to the uniqueness of eye appearance, eye-head interplay, occlusion, image quality, and illumination conditions. Several open questions remain, including which cues are important for interpreting gaze direction in unconstrained environments without prior knowledge, and how to encode them in real time. We review progress on a range of gaze analysis tasks and applications to shed light on these fundamental questions, identify effective methods in gaze analysis, and suggest possible future directions. We analyze recent gaze estimation and segmentation methods, especially in the unsupervised and weakly supervised settings, based on their advantages and reported evaluation metrics. Our analysis shows that the development of robust and generic gaze analysis methods still needs to address real-world challenges such as unconstrained settings and learning with reduced supervision. We conclude by discussing future research directions for designing realistic gaze analysis systems that can be extended to other domains, including computer vision, augmented reality (AR), virtual reality (VR), and human-computer interaction (HCI). Project page: https://github.com/i-am-shreya/eyegazesurvey
Optical coherence tomography angiography (OCTA) is a novel imaging modality that has been widely utilized in ophthalmology and neuroscience studies to observe retinal vessels and microvascular systems. However, publicly available OCTA datasets remain scarce. In this paper, we introduce the largest and most comprehensive OCTA dataset dubbed OCTA-500, which contains OCTA imaging under two fields of view (FOVs) from 500 subjects. The dataset provides rich images and annotations including two modalities (OCT/OCTA volumes), six types of projections, four types of text labels (age / gender / eye / disease) and seven types of segmentation labels (large vessel/capillary/artery/vein/2D FAZ/3D FAZ/retinal layers). Then, we propose a multi-object segmentation task called CAVF, which integrates capillary segmentation, artery segmentation, vein segmentation, and FAZ segmentation under a unified framework. In addition, we optimize the 3D-to-2D image projection network (IPN) to IPN-V2 to serve as one of the segmentation baselines. Experimental results demonstrate that IPN-V2 achieves an ~10% mIoU improvement over IPN on CAVF task. Finally, we further study the impact of several dataset characteristics: the training set size, the model input (OCT/OCTA, 3D volume/2D projection), the baseline networks, and the diseases. The dataset and code are publicly available at: https://ieee-dataport.org/open-access/octa-500.
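For reference, mean IoU over segmentation classes (the metric behind the ~10% improvement quoted above) can be computed as below; this is a generic implementation for illustration, not the evaluation code released with the dataset, and the four-class label map is an arbitrary example.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes, skipping classes absent from both maps."""
    ious = []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                              # class not present in prediction or ground truth
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 4, size=(64, 64))     # e.g. capillary / artery / vein / FAZ labels
gt = np.random.randint(0, 4, size=(64, 64))
print(mean_iou(pred, gt, num_classes=4))
```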
Cooperative multi-agent reinforcement learning (c-MARL) is widely applied in safety-critical scenarios, so the robustness analysis of c-MARL models is profoundly important. However, robustness certification for c-MARL has not yet been explored in the community. In this paper, we propose a novel certification method, the first to leverage a scalable approach for c-MARL to determine actions with guaranteed certified bounds. c-MARL certification poses two key challenges compared with single-agent systems: (i) the accumulated uncertainty as the number of agents increases; (ii) the potentially limited impact that changing a single agent's action has on the global team reward. These challenges prevent us from directly using existing algorithms. Hence, we employ a false discovery rate (FDR) controlling procedure that accounts for the importance of each agent to certify per-state robustness, and we propose a tree-search-based algorithm to find a lower bound of the global reward under the minimal certified perturbation. As our method is general, it can also be applied in single-agent environments. We empirically show that our certification bounds are much tighter than state-of-the-art RL certification solutions. We also run experiments on two popular c-MARL algorithms, QMIX and VDN, in two different environments with two and four agents. The experimental results show that our method produces meaningful guaranteed robustness for all models and environments. Our tool CertifyCMARL is available at https://github.com/TrustAI/CertifyCMA
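The FDR-controlling step mentioned above is, in its classic form, the Benjamini-Hochberg procedure; a minimal implementation is sketched below as background. How per-agent p-values are produced and how agent importance is incorporated in the paper is not shown here, and the example p-values are invented.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under the Benjamini-Hochberg FDR procedure."""
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)   # per-rank significance thresholds
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()               # largest rank meeting its threshold
        reject[order[:k + 1]] = True                 # reject all hypotheses up to that rank
    return reject

# e.g. one p-value per agent for "this agent's certified action could flip"
print(benjamini_hochberg([0.001, 0.04, 0.03, 0.2, 0.8]))
```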
Modern autonomous driving systems are characterized as a sequence of modular tasks, i.e., perception, prediction, and planning. As sensors and hardware improve, there is a growing trend toward devising a system that can perform a wide variety of tasks to achieve higher-level intelligence. Contemporary approaches resort to either deploying standalone models for individual tasks or designing a multi-task paradigm with separate heads. These may suffer from accumulated errors or negative transfer effects. Instead, we argue that a favorable algorithm framework should be devised and optimized in pursuit of the ultimate goal, i.e., planning of the self-driving car. Oriented toward this goal, we revisit the key components of perception and prediction. We analyze each module and prioritize the tasks hierarchically, such that all of them contribute to planning (the goal). To this end, we introduce Unified Autonomous Driving (UniAD), the first comprehensive framework to date that incorporates full-stack driving tasks in one network. It is carefully devised to leverage the advantages of each module and to provide complementary feature abstractions for agent interaction from a global perspective. Tasks are communicated with a unified query design to facilitate each other toward planning. We instantiate UniAD on the challenging nuScenes benchmark. With extensive ablations, we show that this philosophy surpasses previous state-of-the-art methods by a large margin in all aspects. The full suite of code and models will be made available to facilitate future research in the community.
Pre-trained language models for programming languages have shown a powerful ability to process many Software Engineering (SE) tasks, e.g., program synthesis, code completion, and code search. However, what lies behind their success remains to be seen. Recent studies have examined how pre-trained models can effectively learn syntax information based on Abstract Syntax Trees (ASTs). In this paper, we investigate what role the self-attention mechanism plays in understanding code syntax and semantics based on ASTs and static analysis. We focus on a well-known representative code model, CodeBERT, and study how it learns code syntax and semantics through the self-attention mechanism and Masked Language Modelling (MLM) at the token level. We propose a group of probing tasks to analyze CodeBERT. Based on ASTs and static analysis, we establish the relationships among code tokens. First, our results show that CodeBERT can acquire syntax and semantics knowledge through self-attention and MLM. Second, we demonstrate that the self-attention mechanism pays more attention to dependence-relationship tokens than to other tokens. Different attention heads play different roles in learning code semantics; we show that some of them are weak at encoding code semantics. Different layers have different competencies in representing different code properties. Deep CodeBERT layers can encode the semantic information that requires complex inference in the code context. More importantly, we show that our analysis is helpful and leverage our conclusions to improve CodeBERT. We present an alternative approach for pre-training models that makes full use of the current pre-training strategy, i.e., MLM, to learn code syntax and semantics, instead of combining features from different code data formats, e.g., data flow, running-time states, and program outputs.
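As a small, concrete starting point for this kind of probing, the snippet below loads CodeBERT with attention outputs enabled and inspects how much attention each token receives from the others. The code fragment and the choice of layer and aggregation are arbitrary; this is only the attention-extraction step, not the paper's full probing setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base", output_attentions=True)

code = "def add(a, b): return a + b"
inputs = tokenizer(code, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, each of shape (batch, num_heads, seq_len, seq_len)
attn = outputs.attentions[-1][0]                      # last layer, single batch item
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
received = attn.mean(dim=0).sum(dim=0)                # attention each token receives, averaged over heads
for tok, score in zip(tokens, received.tolist()):
    print(f"{tok:>10s}  {score:.3f}")
```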