We study the problem of translating image-based, step-by-step assembly manuals created by human designers into machine-interpretable instructions. We formulate this problem as a sequential prediction task: at each step, our model reads the manual, localizes the components to be added to the current shape, and infers their 3D poses. This task poses the challenge of establishing 2D-3D correspondences between manual images and real 3D objects, together with 3D pose estimation for unseen 3D objects, because a new component to be added at a step may itself be an object built in previous steps. To address these two challenges, we propose a novel learning-based framework, the Manual-to-Executable-Plan Network (MEPNet), which reconstructs the assembly steps from a sequence of manual images. The key idea is to integrate neural 2D keypoint detection modules with 2D-3D projection algorithms, enabling high-precision predictions and strong generalization to unseen components. MEPNet outperforms existing methods on three newly collected LEGO manual datasets and a Minecraft house dataset.
Nowadays, fake news easily propagates through online social networks and poses a serious threat to individuals and society. Assessing the authenticity of news is challenging due to its elaborately fabricated content, making it difficult to obtain large-scale annotations for fake news data. Due to such data scarcity, supervised fake news detection tends to overfit and fail. Recently, graph neural networks (GNNs) have been adopted to leverage the richer relational information among both labeled and unlabeled instances. Despite their promising results, they are inherently focused on pairwise relations between news, which limits their expressive power for capturing fake news that spreads at the group level. For example, detecting fake news can be more effective when we better understand the relations between news pieces shared among susceptible users. To address these issues, we propose to leverage a hypergraph to represent group-wise interactions among news, while focusing on important news relations with a dual-level attention mechanism. Experiments on two benchmark datasets show that our approach yields remarkable performance and maintains high performance even with a small subset of labeled news data.
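The group-wise interaction the abstract describes can be sketched as one round of hypergraph propagation over an incidence matrix: node features are aggregated into hyperedges (e.g., groups of news shared by the same user) and scattered back, with degree normalization on both sides. Everything below, including the group memberships and feature sizes, is an illustrative toy setup, not the paper's model:

```python
import numpy as np

# Hypothetical toy setup: 4 news pieces, 2 hyperedges (groups of news
# shared by the same susceptible user). Memberships are illustrative.
H = np.array([[1, 0],   # news 0 in group 0
              [1, 1],   # news 1 in both groups
              [0, 1],   # news 2 in group 1
              [1, 0]])  # news 3 in group 0
X = np.random.default_rng(0).normal(size=(4, 8))  # news features

# One round of hypergraph propagation: node -> hyperedge -> node.
Dv = H.sum(axis=1, keepdims=True)   # node degrees
De = H.sum(axis=0, keepdims=True)   # hyperedge degrees
edge_feat = (H / De).T @ X          # average member nodes into each hyperedge
X_new = (H / Dv) @ edge_feat        # average a node's hyperedges back onto it

print(X_new.shape)  # (4, 8)
```

The attention mechanism in the paper would replace the uniform degree normalization with learned node-level and hyperedge-level weights.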
Machine learning algorithms typically assume that the training and test samples come from the same distribution, i.e., are in-distribution. However, in open-world scenarios, streaming big data can be Out-Of-Distribution (OOD), rendering these algorithms ineffective. Prior solutions to the OOD challenge seek to identify invariant features across different training domains. The underlying assumption is that these invariant features should also work reasonably well in the unlabeled target domain. By contrast, this work is interested in domain-specific features, which include both invariant features and features unique to the target domain. We propose a simple yet effective approach that exploits correlations in general, regardless of whether the features are invariant or not. Our approach uses the most confidently predicted samples identified by an OOD base model (teacher model) to train a new model (student model) that effectively adapts to the target domain. Empirical evaluations on benchmark datasets show that the performance is improved over the SOTA by ~10-20%.
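The teacher-student loop the abstract describes can be sketched in a few lines: score unlabeled target data with the teacher, keep only the most confident predictions as pseudo-labels, and fit the student on them. The teacher here is a stand-in linear scorer, and the confidence threshold and student model are assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the OOD teacher: any model exposing class probabilities.
def teacher_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ np.array([1.5, -1.0]))))

X_tgt = rng.normal(size=(200, 2))       # unlabeled target-domain data
p = teacher_proba(X_tgt)

# Keep only the most confidently predicted samples (top 20% by confidence).
conf = np.maximum(p, 1 - p)
keep = conf >= np.quantile(conf, 0.8)
X_sel, y_sel = X_tgt[keep], (p[keep] > 0.5).astype(float)

# Student: logistic regression fit on the pseudo-labels by gradient descent.
w = np.zeros(2)
for _ in range(500):
    q = 1.0 / (1.0 + np.exp(-(X_sel @ w)))
    w -= 0.1 * X_sel.T @ (q - y_sel) / len(y_sel)

# Fraction of target samples where student and teacher decisions agree.
agree = ((teacher_proba(X_tgt) > 0.5) == (X_tgt @ w > 0)).mean()
```

In the paper's setting the student would be a fresh deep model retrained on the selected samples, which lets it pick up target-specific correlations the invariant-feature teacher ignores.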
Counterfactual explanations promote explainability in machine learning models by answering the question "how should an input instance be perturbed to obtain a desired predicted label?". Comparing an instance before and after perturbation can enhance human interpretation. Most existing studies on counterfactual explanations are limited to tabular or image data. In this work, we study the problem of counterfactual explanation generation on graphs. A few studies have explored counterfactual explanations on graphs, but many challenges of this problem remain poorly addressed: 1) optimizing in the discrete and disorganized space of graphs; 2) generalizing to unseen graphs; and 3) maintaining causality in the generated counterfactuals without prior knowledge of the causal model. To tackle these challenges, we propose a novel framework, CLEAR, which aims to generate counterfactual explanations on graphs for graph-level prediction models. Specifically, CLEAR leverages a graph variational autoencoder-based mechanism to facilitate its optimization and generalization, and promotes causality by leveraging an auxiliary variable to better identify the underlying causal model. Extensive experiments on both synthetic and real-world graphs validate the superiority of CLEAR over state-of-the-art methods in different aspects.
Graphs are present in many real-world applications, such as financial fraud detection, commercial recommendation, and social network analysis. However, given the high cost of graph annotation or labeling, we face a severe graph label-scarcity problem: a graph may have only a few labeled nodes. One instance of such a problem is the so-called few-shot node classification. The prevailing approaches to this problem all rely on episodic meta-learning. In this work, we challenge the status quo by asking a fundamental question: is meta-learning necessary for few-shot node classification? We propose a new and simple framework under the standard few-shot node classification setting as an alternative to meta-learning for learning an effective graph encoder. The framework consists of supervised graph contrastive learning with novel data augmentation, subgraph encoding, and multi-scale contrast on graphs. Extensive experiments on three benchmark datasets (CoraFull, Reddit, OGBN) demonstrate that the new framework significantly outperforms state-of-the-art meta-learning-based methods.
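The supervised contrastive objective this framework builds on can be illustrated with a minimal numpy version: anchors are pulled toward same-class embeddings and pushed from the rest. The paper's full method adds graph augmentation, subgraph encoding, and multi-scale contrast, all omitted here; the embeddings below are toy values:

```python
import numpy as np

def sup_contrastive_loss(z, labels, tau=0.5):
    """Per-anchor supervised contrastive loss over L2-normalized embeddings."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(z), dtype=bool)
    # negative mean log-probability over each anchor's positives
    return -np.where(pos, log_prob, 0.0).sum(axis=1) / pos.sum(axis=1)

labels = np.array([0, 0, 1, 1])
tight = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])  # class-clustered
loose = np.random.default_rng(0).normal(size=(4, 2))                # unstructured
```

A class-clustered embedding should incur a lower loss than an unstructured one, which is what the encoder is trained to achieve.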
Fair machine learning aims to mitigate bias in model predictions against certain subpopulations defined by sensitive attributes such as race and gender. Among the many existing fairness notions, counterfactual fairness measures model fairness from a causal perspective by comparing predictions on the original data with predictions on counterfactuals in which the individual's sensitive attribute value has been modified. Recently, a few works have extended counterfactual fairness to graph data, but most of them neglect the following facts that can lead to bias: 1) the sensitive attributes of a node's neighbors may causally affect the prediction w.r.t. this node; 2) the sensitive attributes may causally affect other features and the graph structure. To address these issues, in this paper we propose a novel fairness notion, graph counterfactual fairness, which accounts for the biases induced by the above facts. To learn node representations satisfying graph counterfactual fairness, we propose a novel framework based on counterfactual data augmentation. In this framework, we generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes. We then enforce fairness by minimizing the discrepancy between the representations learned from the original graph and from the counterfactuals for each node. Experiments on synthetic and real-world graphs show that our framework outperforms state-of-the-art baselines in graph counterfactual fairness while achieving comparable prediction performance.
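The core regularizer, penalizing the discrepancy between factual and counterfactual representations, can be sketched as follows. The data-generating step, the strength of the sensitive-attribute dependence, and the linear encoder are all toy assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
s = rng.integers(0, 2, size=n).astype(float)  # sensitive attribute per node
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * s                            # feature 0 causally depends on s

# Counterfactual features: flip each node's sensitive attribute and
# regenerate the dependent feature accordingly (the paper also flips neighbors).
X_cf = X.copy()
X_cf[:, 0] += 2.0 * ((1 - s) - s)

W = rng.normal(size=(d, 3))                   # toy linear encoder
Z, Z_cf = X @ W, X_cf @ W

# Fairness regularizer: factual vs. counterfactual embedding discrepancy.
discrepancy = np.linalg.norm(Z - Z_cf, axis=1).mean()

# An encoder that ignores the s-dependent feature has zero discrepancy.
W_fair = W.copy()
W_fair[0] = 0.0
discrepancy_fair = np.linalg.norm(X @ W_fair - X_cf @ W_fair, axis=1).mean()
```

Minimizing this discrepancy during training pushes the learned encoder toward representations that are invariant to the sensitive attribute's causal influence.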
The ability to incrementally learn new classes is vital to all real-world artificial intelligence systems. A large portion of high-impact applications, such as social media, recommender systems, and e-commerce platforms, can be represented by graph models. In this paper, we investigate the challenging yet practical problem of graph few-shot class-incremental (Graph FCL) learning, in which a graph model is tasked with classifying both newly encountered classes and previously learned classes. To this end, we put forward a graph pseudo-incremental learning paradigm that cyclically samples tasks from the base classes, so as to produce an arbitrary number of training episodes for our model to practice the incremental learning skill. Furthermore, we design a hierarchical-attention-based graph meta-learning framework, HAG-Meta. We present a task-sensitive regularizer, computed from task-level attention and node class prototypes, to alleviate overfitting to either the novel or the base classes. To exploit topological knowledge, we add a node-level attention module to adjust the prototype representations. Our model not only achieves greater stability in consolidating old knowledge, but also gains favorable adaptability to new knowledge with very limited data samples. Extensive experiments on three real-world datasets, including Amazon-Clothing, Reddit, and DBLP, show that our framework exhibits remarkable advantages over the baselines and other related state-of-the-art methods.
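The pseudo-incremental paradigm, cyclically drawing few-shot tasks from the base classes so the model can rehearse class-incremental learning, can be sketched as a simple episode generator. Function and variable names, class counts, and shot sizes are illustrative:

```python
import random

def pseudo_incremental_episodes(labels_by_class, n_way=2, k_shot=1,
                                n_episodes=3, seed=0):
    """Repeatedly sample n_way base classes as pretend-novel classes,
    with a k_shot support set for each (illustrative sketch)."""
    rng = random.Random(seed)
    classes = sorted(labels_by_class)
    for _ in range(n_episodes):
        novel = rng.sample(classes, n_way)  # treat these as newly arrived classes
        support = {c: rng.sample(labels_by_class[c], k_shot) for c in novel}
        yield novel, support

# Toy base-class pool: class name -> labeled node ids.
base = {"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}
episodes = list(pseudo_incremental_episodes(base))
```

Each episode mimics one increment: the model must incorporate the sampled "novel" classes from only k_shot examples while retaining the remaining base classes.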
Online reviews enable consumers to engage with companies and provide important feedback. Due to the complexity of high-dimensional text, these reviews are often simplified into a single numerical score, such as a rating or sentiment score. This work empirically examines the causal effects of user-generated online reviews at a granular level: we consider multiple aspects, such as a restaurant's food and service. Understanding consumer opinions toward different aspects can help evaluate business performance in detail and strategize business operations effectively. Specifically, we aim to answer interventional questions such as: what would a restaurant's popularity be if the quality w.r.t. its aspect service were increased by 10%? The defining challenge of causal inference from observational data is the presence of "confounders" that may be unobserved or unmeasured, e.g., consumers' preference for a food type, rendering the estimated effects biased and of high variance. To address this challenge, we resort to multi-modal proxies, such as consumer profile information and interactions between consumers and businesses. We show how to effectively leverage this rich information to identify and estimate the causal effects of multiple aspects embedded in online reviews. Empirical evaluations on synthetic and real-world data corroborate the efficacy of the proposed approach and shed light on its actionable insights.
An important problem in causal inference is to decompose the total effect of a treatment on an outcome into different causal pathways and to quantify the causal effect along each pathway. For instance, in causal fairness, the total effect of being a male employee (i.e., the treatment) on annual income (i.e., the outcome) consists of a direct effect and an indirect effect mediated by the employee's occupation (i.e., the mediator). Causal mediation analysis (CMA) is a formal statistical framework for revealing such underlying causal mechanisms. A major challenge for CMA in observational studies is handling confounders, variables that induce spurious causal relationships among the treatment, mediator, and outcome. Conventional methods assume sequential ignorability, which implies that all confounders can be measured, an assumption that is often unverifiable in practice. This work aims to circumvent the stringent sequential ignorability assumption by considering hidden confounders. Drawing on proxy strategies and recent advances in deep learning, we propose to simultaneously uncover the latent variables that characterize hidden confounders and estimate the causal effects. Empirical evaluations using synthetic and semi-synthetic datasets validate the effectiveness of the proposed method. We further show the potential of our approach for causal fairness analysis.
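The effect decomposition that CMA formalizes can be illustrated on a toy linear structural causal model, where the total effect splits exactly into a direct path and a mediated path (the product of the T→M and M→Y coefficients). Note this sketch assumes no hidden confounding, the very assumption the abstract's method relaxes, and all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy linear SCM: T -> M -> Y plus a direct T -> Y path.
T = rng.integers(0, 2, size=n).astype(float)             # treatment
M = 0.8 * T + rng.normal(scale=0.1, size=n)              # mediator
Y = 1.5 * T + 2.0 * M + rng.normal(scale=0.1, size=n)    # outcome

def ols(cols, y):
    """Least-squares coefficients with an intercept prepended."""
    A = np.column_stack([np.ones(len(y))] + list(cols))
    return np.linalg.lstsq(A, y, rcond=None)[0]

direct = ols([T, M], Y)[1]   # effect of T holding M fixed  (~1.5)
a = ols([T], M)[1]           # T -> M                        (~0.8)
b = ols([T, M], Y)[2]        # M -> Y                        (~2.0)
indirect = a * b             # mediated effect               (~1.6)
total = ols([T], Y)[1]       # total effect                  (~3.1)
```

For OLS with the same sample, total = direct + indirect holds as an algebraic identity; hidden confounding of T, M, or Y would bias each of these regression estimates, which is what the proxy-based latent-variable approach targets.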
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.