Graphs serve as a powerful tool for modeling data with an underlying structure in non-Euclidean space, encoding relations as edges and entities as nodes. Despite developments in learning from graph-structured data over the years, one obstacle persists: graph imbalance. Although several attempts have been made to target this problem, they are limited to considering only class-level imbalance. In this work, we argue that for graphs, imbalance is likely to exist at the sub-class topology group level. Due to the flexibility of topology structures, graphs can be highly diverse, and learning a generalizable classification boundary is difficult. Therefore, several majority topology groups may dominate the learning process, rendering others under-represented. To address this problem, we propose a new framework, {\method}, and design (1) a topology extractor, which automatically identifies the topology group for each instance with explicit memory cells, and (2) a training modulator, which modulates the learning process of the target GNN model to prevent topology-group-wise under-representation. {\method} can be used as a key component in GNN models to improve their performance under the data imbalance setting. Analyses of both topology-level imbalance and the proposed {\method} are provided theoretically, and we empirically verify its effectiveness with both node-level and graph-level classification as the target tasks.
Prompt learning is one of the most effective and trending ways to adapt powerful vision-language foundation models like CLIP to downstream datasets, by tuning learnable prompt vectors with very few samples. However, although prompt learning achieves excellent performance over in-domain data, it still faces the major challenge of generalizing to unseen classes and domains. Some existing prompt learning methods tackle this issue by adaptively generating different prompts for different tokens or domains, but neglect the ability of learned prompts to generalize to unseen domains. In this paper, we propose a novel prompt learning paradigm, called MetaPrompt, that directly generates a domain-invariant prompt that generalizes to unseen domains. Specifically, a dual-modality prompt tuning network is proposed to generate prompts for inputs from both image and text modalities. More importantly, we propose a meta-learning-based prompt tuning algorithm that explicitly constrains the prompt tuned on a specific domain or class to also achieve good performance on another domain or class. Extensive experiments on 11 datasets for base-to-new generalization and four datasets for domain generalization demonstrate that our method consistently and significantly outperforms existing methods.
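The meta-learning constraint described above can be sketched in a few lines. This is a minimal first-order illustration under assumed names and loss forms, not MetaPrompt's actual algorithm: the prompt is adapted on one domain (support) and the meta-objective requires the adapted prompt to also perform well on another domain (query).

```python
import numpy as np

def meta_prompt_step(prompt, grad_fn, loss_fn, support, query, lr=0.1):
    """One illustrative first-order meta-step (names and update rule are
    assumptions): tune the prompt on the support domain, then descend on
    the adapted prompt's loss measured on the query domain."""
    inner = prompt - lr * grad_fn(prompt, support)   # adapt on one domain
    meta_loss = loss_fn(inner, query)                # evaluate on another
    # first-order approximation: step on the query gradient at the adapted prompt
    return prompt - lr * grad_fn(inner, query), meta_loss
```

With a toy quadratic loss, each step pulls the prompt toward doing well on the held-out domain as well, which is the generalization pressure the abstract describes.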
Many Click-Through Rate (CTR) prediction works have focused on designing advanced architectures to model complex feature interactions, but neglect the importance of feature representation learning, e.g., adopting a plain embedding layer for each feature, which results in sub-optimal feature representations and thus inferior CTR prediction performance. For instance, low-frequency features, which account for the majority of features in many CTR tasks, are less considered in standard supervised learning settings, leading to sub-optimal feature representations. In this paper, we introduce self-supervised learning to produce high-quality feature representations directly and propose a model-agnostic Contrastive Learning for CTR (CL4CTR) framework consisting of three self-supervised learning signals to regularize feature representation learning: contrastive loss, feature alignment, and field uniformity. The contrastive module first constructs positive feature pairs by data augmentation and then minimizes the distance between the representations of each positive feature pair via the contrastive loss. The feature alignment constraint forces the representations of features from the same field to be close, and the field uniformity constraint forces the representations of features from different fields to be distant. Extensive experiments verify that CL4CTR achieves the best performance on four datasets and has excellent effectiveness and compatibility with various representative baselines.
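The three regularizers named above can be sketched directly from their verbal definitions. The exact loss forms below are assumptions for illustration; CL4CTR's own formulations may differ:

```python
import numpy as np

def l2_distance(a, b):
    return np.linalg.norm(a - b, axis=-1)

def contrastive_loss(view1, view2):
    # pull two augmented views of the same features together
    return l2_distance(view1, view2).mean()

def field_alignment(field_embs):
    # features of the same field should stay close (here: close to the field mean)
    mean = field_embs.mean(axis=0, keepdims=True)
    return l2_distance(field_embs, mean).mean()

def field_uniformity(field_means):
    # representations of different fields should be distant:
    # penalize the average pairwise similarity between field means
    sims = field_means @ field_means.T
    n = len(field_means)
    return sims[~np.eye(n, dtype=bool)].mean()
```

The first term operates on augmented pairs, the other two act across the embedding table itself, which is how the framework regularizes even low-frequency features that rarely appear in positive pairs.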
Personalized Federated Learning (PFL), which collaboratively trains a federated model while accommodating local clients under privacy constraints, has attracted much attention. Despite its popularity, it has been observed that existing PFL approaches result in sub-optimal solutions when the joint distribution among local clients diverges. To address this issue, we present Federated Modular Network (FedMN), a novel PFL approach that adaptively selects sub-modules from a module pool to assemble heterogeneous neural architectures for different clients. FedMN adopts a lightweight routing hypernetwork to model the joint distribution on each client and produce a personalized selection of module blocks for each client. To reduce the communication burden in existing FL, we develop an efficient way for the clients and the server to interact. We conduct extensive experiments on real-world test beds, and the results show both the effectiveness and efficiency of the proposed FedMN over the baselines.
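The routing step can be sketched as a per-module gating function. This is a hedged sketch under assumed names and a sigmoid-threshold gate; FedMN's actual hypernetwork and selection mechanism may differ:

```python
import numpy as np

def route(client_embedding, routing_weights, temperature=1.0):
    """Illustrative routing hypernetwork (an assumption, not FedMN's exact
    design): map a client's local-distribution embedding to a boolean mask
    over the module pool; selected blocks are assembled into that client's
    personalized architecture."""
    logits = routing_weights @ client_embedding          # one logit per module
    gates = 1.0 / (1.0 + np.exp(-logits / temperature))  # per-module gate in (0, 1)
    return gates > 0.5                                    # which blocks this client uses
```

Because different clients produce different embeddings, the same pool yields heterogeneous architectures, and only the selected blocks need to be communicated.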
Dynamic interaction graphs have been widely adopted to model the evolution of user-item interactions over time. There are two crucial factors when modelling user preferences for link prediction in dynamic interaction graphs: 1) collaborative relationships among users and 2) user personalized interaction patterns. Existing methods often implicitly consider these two factors together, which may lead to noisy user modelling when the two factors diverge. In addition, they usually require time-consuming parameter learning with back-propagation, which is prohibitive for real-time user preference modelling. To this end, this paper proposes FreeGEM, a parameter-free dynamic graph embedding method for link prediction. Firstly, to take advantage of the collaborative relationships, we propose an incremental graph embedding engine to obtain user/item embeddings, which is an Online-Monitor-Offline architecture consisting of an Online module to approximately embed users/items over time, a Monitor module to estimate the approximation error in real time, and an Offline module to calibrate the user/item embeddings when the online approximation errors exceed a threshold. Meanwhile, we integrate attribute information into the model, which enables FreeGEM to better model users belonging to under-represented groups. Secondly, we design a personalized dynamic interaction pattern modeller, which combines dynamic time decay with an attention mechanism to model user short-term interests. Experimental results on two link prediction tasks show that FreeGEM can outperform the state-of-the-art methods in accuracy while achieving over 36X improvement in efficiency. All code and datasets can be found at https://github.com/FudanCISL/FreeGEM.
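The combination of dynamic time decay and attention for short-term interests can be sketched as follows. The specific form (folding an exponential decay into attention logits) is an assumption for illustration, not FreeGEM's exact formulation:

```python
import numpy as np

def short_term_interest(item_embs, timestamps, now, query, decay=0.1):
    """Illustrative sketch: weight a user's recent items by dot-product
    attention against a query vector, discounted by exponential time decay
    (all names and the decay form are assumptions)."""
    decay_w = np.exp(-decay * (now - np.asarray(timestamps)))  # older -> smaller
    scores = item_embs @ query + np.log(decay_w)  # fold decay into attention logits
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    return attn @ item_embs  # decayed, attention-weighted interest vector
```

Note that nothing here is learned by back-propagation, consistent with the parameter-free design the abstract emphasizes.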
To build recommender systems that not only consider user-item interactions represented as ordinal variables but also exploit the social network describing relationships among users, we develop a hierarchical Bayesian model called Ordinal Graph Factor Analysis (OGFA), which jointly models user-item and user-user interactions. OGFA not only achieves good recommendation performance but also extracts interpretable latent factors corresponding to representative user preferences. We further extend OGFA to the Ordinal Graph Gamma Belief Network, a multi-stochastic-layer deep probabilistic model that captures user preferences and social communities at multiple semantic levels. For efficient inference, we develop a parallel hybrid Gibbs-EM algorithm that exploits the sparsity of the graphs and is scalable to large datasets. Our experimental results show that the proposed models not only outperform state-of-the-art baselines on recommendation datasets with explicit or implicit feedback but also provide interpretable latent representations.
The rapid development of deep learning has brought substantial progress in segmentation, one of the fundamental tasks in computer vision. However, current segmentation algorithms mostly depend on the availability of pixel-level annotations, which are often expensive, tedious, and laborious to obtain. To alleviate this burden, the past few years have witnessed increasing attention to building label-efficient, deep-learning-based segmentation algorithms. This paper offers a comprehensive review of label-efficient segmentation methods. To this end, we first develop a taxonomy to organize these methods according to the supervision provided by different types of weak labels (including no supervision, coarse supervision, incomplete supervision, and noisy supervision), supplemented by the types of segmentation (including semantic segmentation, instance segmentation, and panoptic segmentation). Next, we summarize existing label-efficient segmentation methods from a unified perspective that discusses an important question: how to bridge the gap between weak supervision and dense prediction. Current methods are mostly based on heuristic priors, such as cross-pixel similarity, cross-label constraints, cross-view consistency, and cross-image relations. Finally, we share our opinions on future research directions for label-efficient deep segmentation.
Autoregressive generative models are widely used, especially for tasks involving sequential data. However, they are plagued by inherent flaws in chain-style conditional modeling (e.g., exposure bias or a lack of long-range coherence), which severely limit their ability to model distributions properly. In this paper, we propose a unique method, termed E-ARM, for training autoregressive generative models that takes advantage of a well-designed energy-based learning objective. By leveraging the extra degree of freedom of the softmax operation, we can make the autoregressive model itself an energy-based model for measuring the likelihood of inputs, without introducing any extra parameters. Furthermore, we show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem and increasing the temporal coherence of autoregressive generative models. Extensive empirical results, covering benchmarks such as language modeling, neural machine translation, and image generation, demonstrate the effectiveness of the proposed approach.
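The "extra degree of freedom of the softmax" can be made concrete: softmax probabilities are invariant to an additive shift of the logits, so the logits' log-sum-exp is unused by the classifier head and can serve as an unnormalized (negative) energy. The sketch below illustrates that invariance; how E-ARM actually defines and trains its energy is not specified by the abstract and the form here is an assumption:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift-invariant by construction
    e = np.exp(z)
    return e / e.sum()

def energy(logits):
    # logsumexp(logits) is the spare degree of freedom: it changes under a
    # logit shift while the softmax output does not, so it can carry an
    # energy score without extra parameters (illustrative assumption).
    m = logits.max()
    return -(m + np.log(np.exp(logits - m).sum()))
```

Shifting all logits by a constant leaves the next-token distribution untouched but moves the energy by exactly that constant, which is what lets a single output head play both roles.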
Vector graphics (VG) are ubiquitous in our daily life, with extensive applications in engineering, architecture, design, etc. The VG recognition process of most existing methods is to first render the VG into raster graphics (RG) and then conduct recognition based on the RG format. However, this procedure discards the geometric structure and loses the high resolution of the VG. Recently, another category of algorithms has been proposed that recognizes directly from the original VG format, but it suffers from topological errors that RG rendering would filter out. Rather than relying on a single format, a good solution is to utilize the VG and RG formats together to avoid these shortcomings. Moreover, we argue that the VG-to-RG rendering process is essential for effectively combining VG and RG information. By specifying the rules for transferring VG primitives to RG pixels, the rendering process describes the interaction and correlation between VG and RG. As a result, we propose RendNet, a unified architecture for recognition in both 2D and 3D scenarios, which considers both VG/RG representations and exploits their interaction by incorporating the VG-to-RG rasterization process. Experiments show that RendNet can achieve state-of-the-art performance on 2D and 3D object recognition tasks on various VG datasets.
Offline reinforcement learning (RL) aims to learn policies from previously collected static trajectory data without interacting with the real environment. Recent works provide a novel perspective by viewing offline RL as a generic sequence generation problem, adopting sequence models such as the Transformer architecture to model distributions over trajectories and repurposing beam search as a planning algorithm. However, the training datasets used in general offline RL tasks are quite limited and often suffer from insufficient distribution coverage, which can be harmful to training sequence generation models yet has not drawn enough attention in previous works. In this paper, we propose a novel algorithm named Bootstrapped Transformer, which incorporates the idea of bootstrapping and leverages the learned model to self-generate more offline data to further boost sequence model training. We conduct extensive experiments on two offline RL benchmarks and demonstrate that our model can largely remedy the existing offline RL training limitations and beat other strong baselines. We also analyze the generated pseudo data, whose characteristics may shed light on offline RL training. The code is available at https://seqml.github.io/bootorl.
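The bootstrapping idea described above reduces to a simple alternation between fitting and self-generation. The loop below is a hedged structural sketch; the function names, schedule, and how pseudo-trajectories are filtered or weighted are assumptions, not the paper's algorithm:

```python
def bootstrap_training(dataset, train_step, generate, rounds=3, n_new=8):
    """Illustrative bootstrapping loop: alternately fit the sequence model
    on the pooled data, then let the model self-generate pseudo-trajectories
    that are added back to the training pool (all names are assumptions)."""
    data = list(dataset)
    model = None
    for _ in range(rounds):
        model = train_step(model, data)       # fit on real + pseudo data so far
        data.extend(generate(model, n_new))   # self-generate more offline data
    return model, data
```

Each round widens the effective distribution coverage of the training pool, which is the limitation of static offline datasets that the method targets.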