What target labels are most effective for training graph neural networks (GNNs)? In applications where GNNs excel, such as drug design or fraud detection, labeling new instances is expensive. We develop a data-efficient active sampling framework, ScatterSample, to train GNNs under an active learning setting. ScatterSample employs a sampling module termed DiverseUncertainty to collect instances with large uncertainty from diverse regions of the sample space for labeling. To ensure diversification of the selected nodes, DiverseUncertainty clusters the high-uncertainty nodes and selects a representative node from each cluster. A rigorous theoretical analysis further supports the advantage of our ScatterSample algorithm over standard active sampling methods that aim to simply maximize uncertainty rather than diversify the samples. In particular, we show that ScatterSample is able to efficiently reduce model uncertainty over the whole sample space. Our experiments on five datasets show that ScatterSample significantly outperforms other GNN active learning baselines; specifically, it reduces the sampling cost by 50% while achieving the same test accuracy.
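The cluster-then-pick selection step described above lends itself to a short sketch. The following is a minimal illustration of the idea (not the paper's actual implementation), assuming predictive entropy as the uncertainty score and scikit-learn k-means for clustering; the function name, the `pool_factor` heuristic, and clustering on probability vectors rather than node embeddings are simplifying assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def diverse_uncertainty_select(probs, candidate_ids, budget, pool_factor=5):
    """Pick `budget` nodes to label: take the most uncertain candidates,
    cluster them, and return one representative per cluster.

    probs: (num_candidates, num_classes) predicted class probabilities
    candidate_ids: node ids aligned with the rows of `probs`
    """
    # Predictive entropy as the uncertainty score.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

    # Keep a pool of the most uncertain candidates.
    pool_size = min(len(candidate_ids), pool_factor * budget)
    pool_idx = np.argsort(-entropy)[:pool_size]

    # Cluster the uncertain pool so the selected nodes cover diverse regions.
    kmeans = KMeans(n_clusters=budget, n_init=10).fit(probs[pool_idx])

    selected = []
    for c in range(budget):
        members = pool_idx[kmeans.labels_ == c]
        if len(members) == 0:
            continue
        # Representative: the pool member closest to the cluster centroid.
        dists = np.linalg.norm(probs[members] - kmeans.cluster_centers_[c], axis=1)
        selected.append(candidate_ids[members[np.argmin(dists)]])
    return selected
```

In practice the clustering would likely be done on learned node embeddings rather than on the probability vectors; the structure of the loop (one representative per cluster) is the part that mirrors the diversification idea.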
Knowledge graph question answering (KGQA) involves retrieving facts from a knowledge graph (KG) using natural language queries. A KG is a curated set of facts consisting of entities linked by relations. Some facts also include temporal information, forming a temporal KG (TKG). Although many natural questions involve explicit or implicit time constraints, question answering (QA) over TKGs is a relatively unexplored area. Existing solutions are mainly designed for simple temporal questions that can be answered directly by a single TKG fact. This paper proposes a comprehensive embedding-based framework for answering complex questions over TKGs. Our method, termed Temporal Question Reasoning (TempoQR), exploits TKG embeddings to ground the question in the specific entities and time scope it refers to. It does so by augmenting the question embedding with contextual, entity, and time-aware information through three specialized modules. The first computes a textual representation of a given question, the second combines it with the entity embeddings of the entities involved, and the third generates question-specific time embeddings. Finally, a transformer-based encoder learns to fuse the generated temporal information with the question representation, which is then used for answer prediction. Extensive experiments show that TempoQR improves accuracy over state-of-the-art approaches by 25--45 percentage points on complex temporal questions and generalizes better to unseen question types.
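As a rough illustration of how the three modules and the fusion encoder could fit together, the sketch below assumes PyTorch, pretrained TKG entity and time embedding tables, and contextual question-token embeddings produced by an external text encoder; the class name, dimensions, and scoring against candidate answer embeddings are illustrative assumptions, not TempoQR's actual code.

```python
import torch
import torch.nn as nn

class TempoQRFusionSketch(nn.Module):
    """Illustrative fusion of question, entity, and time embeddings
    with a transformer encoder, scored against candidate answers."""

    def __init__(self, dim, num_entities, num_timestamps, nhead=8, nlayers=2):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)   # pretrained TKG entity embeddings
        self.time_emb = nn.Embedding(num_timestamps, dim)   # pretrained TKG time embeddings
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=nlayers)
        self.cls = nn.Parameter(torch.randn(1, 1, dim))     # summary token for the fused question

    def forward(self, question_tokens, entity_ids, time_ids, answer_emb):
        # question_tokens: (B, L, dim) contextual token embeddings from a text encoder
        # entity_ids:      (B, E) ids of entities mentioned in the question
        # time_ids:        (B, T) ids of question-specific timestamps
        # answer_emb:      (N, dim) embeddings of candidate answers
        ent = self.entity_emb(entity_ids)
        tim = self.time_emb(time_ids)
        cls = self.cls.expand(question_tokens.size(0), -1, -1)
        seq = torch.cat([cls, question_tokens, ent, tim], dim=1)
        fused = self.encoder(seq)[:, 0]                      # fused question representation
        return fused @ answer_emb.t()                        # scores over candidate answers
```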
We present a data-driven framework to automate the vectorization and machine interpretation of 2D engineering part drawings. In industrial settings, most manufacturing engineers still rely on manually reading drawings submitted by designers to identify their topological and manufacturing requirements. The interpretation process is laborious and time-consuming, which severely inhibits the efficiency of part quotation and manufacturing tasks. While recent advances in image-based computer vision methods have demonstrated great potential in interpreting natural images through semantic segmentation approaches, applying such methods to parse engineering technical drawings into semantically accurate components remains a significant challenge. The severe pixel sparsity in engineering drawings also restricts the effective featurization of image-based data-driven methods. To overcome these challenges, we propose a deep learning based framework that predicts the semantic type of each vectorized component. Taking a raster image as input, we vectorize all components through thinning, stroke tracing, and cubic Bézier fitting. Then a graph of such components is generated based on the connectivity between the components. Finally, a graph convolutional neural network is trained on this graph data to identify the semantic type of each component. We test our framework in the context of semantic segmentation of text, dimension, and contour components in engineering drawings. Results show that our method yields the best performance compared to recent image- and graph-based segmentation methods.
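The final classification stage, a graph convolutional network over the component graph, could look roughly like the sketch below, assuming PyTorch Geometric; the per-component feature vector (e.g., Bézier control points and bounding-box statistics), the two-layer architecture, and the three-class setup are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class ComponentGCN(torch.nn.Module):
    """Two-layer GCN that labels each vectorized drawing component
    (e.g., text, dimension, contour) from per-component features."""

    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, data):
        x, edge_index = data.x, data.edge_index
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)          # per-node class logits

# Illustrative component graph: node features might encode Bézier control
# points, length, curvature, and bounding-box statistics; edges connect
# components that touch or share endpoints in the vectorized drawing.
x = torch.randn(6, 16)                            # 6 components, 16 features each
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4],
                           [1, 0, 2, 1, 4, 3]])   # undirected connectivity
graph = Data(x=x, edge_index=edge_index)

model = ComponentGCN(in_dim=16, hidden_dim=32, num_classes=3)
logits = model(graph)                             # shape: (6, 3)
```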