The recent prevalence of pretrained language models (PLMs) has dramatically shifted the paradigm of semantic parsing, where the mapping from natural language utterances to structured logical forms is now formulated as a Seq2Seq task. Despite their promising performance, previous PLM-based approaches often suffer from hallucination problems because they neglect the structural information contained in the sentence, which essentially constitutes the key semantics of the logical forms. Furthermore, most works treat the PLM as a black box in which the generation process of the target logical form is hidden beneath the decoder modules, which greatly hinders the model's intrinsic interpretability. To address these two issues, we propose to equip current PLMs with a hierarchical decoder network. Taking the first-principle structures as semantic anchors, we propose two novel intermediate supervision tasks, namely Semantic Anchor Extraction and Semantic Anchor Alignment, for training the hierarchical decoders and probing the model's intermediate representations in a self-adaptive manner alongside the fine-tuning process. We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach can consistently outperform the baselines. More importantly, by analyzing the intermediate representations of the hierarchical decoders, our approach also takes a significant step toward the intrinsic interpretability of PLMs in the domain of semantic parsing.
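As a rough sketch of how the two intermediate supervision tasks named above could be combined with the main Seq2Seq objective during fine-tuning (the weighting scheme and function names here are assumptions, not the paper's actual formulation):

```python
def combined_loss(task_loss, anchor_extraction_loss, anchor_alignment_loss,
                  w_ext=0.5, w_align=0.5):
    # Weighted sum of the main Seq2Seq loss and the two intermediate
    # supervision losses; the weights w_ext / w_align are hypothetical.
    return task_loss + w_ext * anchor_extraction_loss + w_align * anchor_alignment_loss
```

In practice the two weights would be tuned jointly with the fine-tuning schedule.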
When imaging through a semi-reflective medium such as glass, a reflection of another scene is often found in the captured image. It degrades the quality of the image and affects its subsequent analysis. In this paper, a novel deep neural network approach is proposed to solve the reflection problem in imaging. Traditional reflection removal methods not only require long computation times to solve different optimization functions, but their performance is also not guaranteed. Since array cameras are readily available in today's imaging devices, we first propose in this paper a multi-image-based depth estimation method using a convolutional neural network (CNN). The proposed network avoids the depth ambiguity problem caused by reflections in the image and directly estimates depths along image edges. The estimated depths are then used to classify edges as belonging to the background or the reflection. Since edges with similar depth values are prone to misclassification, they are removed from the reflection removal process. We propose to regenerate the removed background edges using a generative adversarial network (GAN). Finally, the estimated background edge map is fed to another autoencoder network to assist in extracting the background from the original image. Experimental results show that, compared with state-of-the-art methods, the proposed reflection removal algorithm achieves superior performance both quantitatively and qualitatively. The proposed algorithm also runs considerably faster than existing methods that rely on traditional optimization.
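The edge-labelling step described above can be caricatured as a depth test: edges whose estimated depth lies close to the (assumed planar) background are kept as background edges, and the rest are attributed to the reflection. This is only an illustrative sketch under that assumption; the paper's classifier is learned, and ambiguous edges are dropped and later regenerated by the GAN.

```python
def classify_edges(edge_depths, background_depth, tol=0.5):
    # Label each edge by comparing its estimated depth with the background
    # depth; `tol` is a hypothetical threshold, not a value from the paper.
    return ['background' if abs(d - background_depth) < tol else 'reflection'
            for d in edge_depths]
```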
In the era of big data, low-rank approximation of images via the singular value decomposition is ubiquitous. However, the singular value decomposition (SVD) applies only to order-two data, i.e., matrices. To handle higher-order data such as multispectral images and videos with the SVD, one must either flatten the higher-order input into a matrix or decompose it into a series of order-two slices. The higher-order singular value decomposition (HOSVD) extends the SVD and can approximate higher-order data by a sum of a few rank-one components. We consider the problem of generalizing the HOSVD over a finite-dimensional algebra. This algebra, called the t-algebra, generalizes the complex numbers. Its elements, called t-scalars, are fixed-size arrays of complex numbers. Matrices and tensors can be generalized over t-scalars, and many canonical matrix and tensor algorithms, including the HOSVD, can then be extended to obtain higher-performance versions. The generalization of the HOSVD is called THOSVD, and an alternating algorithm can further improve its performance in approximating multi-way data. THOSVD also unifies a broad family of principal component analysis algorithms. To exploit the potential of t-scalar-based generalized algorithms for approximating images, we use a pixel-neighborhood strategy to convert each pixel into a "deeper" t-scalar. Experiments on public images show that the generalized algorithm over t-scalars, namely THOSVD, compares favorably with its canonical counterpart.
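For reference, the canonical (matrix-based) HOSVD that THOSVD generalizes can be sketched in a few lines: the factor matrices are the leading left singular vectors of each mode unfolding, and the core tensor is the projection of the data onto those factors. This is the standard algorithm, not the t-scalar version.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Factor matrices: leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    # Core tensor: project T onto the factor matrices, mode by mode.
    G = T
    for n, Un in enumerate(U):
        G = np.moveaxis(np.tensordot(Un.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, U

def reconstruct(G, U):
    # Multiply the core back by each factor matrix.
    T = G
    for n, Un in enumerate(U):
        T = np.moveaxis(np.tensordot(Un, np.moveaxis(T, n, 0), axes=1), 0, n)
    return T
```

For a tensor whose multilinear rank matches `ranks`, the truncated HOSVD reconstruction is exact; otherwise it gives a quasi-optimal low multilinear-rank approximation.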
Graph neural networks (GNNs) have achieved great success in various tasks on graph-structured data, among which node classification is essential. Unsupervised graph domain adaptation (UGDA) has shown its practical value in reducing the labeling cost of node classification. It leverages knowledge from a labeled graph (i.e., the source domain) to tackle the same task on another unlabeled graph (i.e., the target domain). Most existing UGDA methods rely heavily on the labeled graph in the source domain: they use the labels from the source domain as supervision signals and train jointly on both the source and target graphs. In some real-world scenarios, however, the source graph is inaccessible because of unavailability or privacy concerns. We therefore propose a novel scenario named Source-Free Unsupervised Graph Domain Adaptation (SFUGDA). In this scenario, the only information we can leverage from the source domain is the well-trained source model, without any exposure to the source graph or its labels; as a result, existing UGDA methods are no longer feasible. To address the non-trivial adaptation challenge in this practical scenario, we propose a model-agnostic algorithm for domain adaptation that fully exploits the discriminative ability of the source model while preserving the consistency of structural proximity on the target graph. We demonstrate the effectiveness of the proposed algorithm both theoretically and empirically. Experimental results on four cross-domain tasks show consistent improvements in macro-F1 score, by up to 0.17.
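One way to picture "exploiting the source model's discriminative ability while preserving structural proximity" is to smooth the frozen source model's logits over each target node's neighborhood before predicting. This is a hypothetical illustration of the setting, not the algorithm proposed in the paper.

```python
import numpy as np

def adapt_predictions(source_logits, target_neighbors):
    # Average each node's frozen-source-model logits with those of its
    # structural neighbors on the target graph, then predict. The averaging
    # rule here is an assumption made for illustration only.
    smoothed = source_logits.astype(float).copy()
    for v, nbrs in enumerate(target_neighbors):
        if nbrs:
            smoothed[v] = (source_logits[v] + source_logits[nbrs].mean(axis=0)) / 2
    return smoothed.argmax(axis=1)
```

In the toy case below, the middle node's isolated prediction is flipped to agree with its neighbors, which both receive confident predictions from the source model.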
Recognizing places using LiDAR in large-scale environments is challenging due to the sparse nature of point cloud data. In this paper, we propose BVMatch, a LiDAR-based frame-to-frame place recognition framework that is capable of estimating 2D relative poses. Based on the assumption that the ground area can be approximated as a plane, we uniformly discretize the ground area into grids and project 3D LiDAR scans to bird's-eye view (BV) images. We further use a bank of Log-Gabor filters to build a maximum index map (MIM) that encodes the orientation information of the structures in the images. We analyze the orientation characteristics of the MIM theoretically and introduce a novel descriptor called the bird's-eye view feature transform (BVFT). The proposed BVFT is insensitive to rotation and intensity variations of BV images. Leveraging the BVFT descriptors, we unify the LiDAR place recognition and pose estimation tasks in the BVMatch framework. Experiments on three large-scale datasets show that BVMatch outperforms state-of-the-art methods in terms of both the recall rate of place recognition and the accuracy of pose estimation.
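The maximum index map idea can be illustrated with a simplified filter bank: for every pixel, record the index of the orientation with the strongest response. Here plain directional derivatives stand in for the Log-Gabor filters the paper actually uses, so this is an assumption-laden sketch rather than the BVFT pipeline.

```python
import numpy as np

def maximum_index_map(img, n_orient=6):
    # For each pixel, record the index of the orientation whose response is
    # strongest. Directional image derivatives are used as a hypothetical
    # stand-in for the Log-Gabor filter bank.
    gy, gx = np.gradient(img.astype(float))
    mim = np.zeros(img.shape, dtype=int)
    best = np.full(img.shape, -np.inf)
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        resp = np.abs(gx * np.cos(theta) + gy * np.sin(theta))
        mask = resp > best
        mim[mask] = k
        best[mask] = resp[mask]
    return mim
```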
Multispectral and multimodal image processing is important in the computer vision and computational photography communities. Since the acquired multispectral and multimodal data are typically misaligned owing to the alternation or movement of the imaging device, an image registration procedure is required. The registration of multispectral or multimodal images is challenging because of nonlinear intensity and gradient variations. To cope with this challenge, we propose the phase congruency network (PCNet), which is able to enhance structural similarity and alleviate nonlinear intensity and gradient variations. The images can then be aligned using the similarity-enhanced features produced by the network. PCNet is constructed under the guidance of the phase congruency prior. In accordance with phase congruency theory, the network contains three trainable layers accompanied by modified learnable Gabor kernels. Thanks to this prior knowledge, PCNet is extremely lightweight. PCNet can be viewed as fully convolutional and can therefore process input of arbitrary size. Once trained, PCNet is applicable to a variety of multispectral and multimodal data, such as RGB/NIR and flash/no-flash images, without any further fine-tuning. Experimental results validate that PCNet outperforms current state-of-the-art registration algorithms.
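As a reminder of the kind of prior PCNet builds in, the real part of a textbook 2D Gabor kernel (with an isotropic Gaussian envelope) can be generated as follows. PCNet's kernels are modified, learnable variants, so this is only the classical form, not the network's actual parameterization.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma):
    # Real part of a 2D Gabor kernel: an isotropic Gaussian envelope
    # modulated by a cosine carrier oriented at angle `theta`.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / wavelength))
```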
In this paper, we propose a novel framework dubbed peer learning to deal with the problem of biased scene graph generation (SGG). This framework uses predicate sampling and consensus voting (PSCV) to encourage different peers to learn from each other, improving model diversity and mitigating bias in SGG. To address the heavily long-tailed distribution of predicate classes, we propose to use predicate sampling to divide and conquer this issue. As a result, the model is less biased and makes more balanced predicate predictions. Specifically, a single peer may not be sufficiently diverse to discriminate between different levels of predicate distributions. Therefore, we sample the data distribution by predicate frequency into sub-distributions, selecting head, body, and tail classes to combine and feed to different peers as complementary predicate knowledge during the training process. The complementary predicate knowledge of these peers is then ensembled using a consensus voting strategy, which simulates a civilized voting process in our society that emphasizes the majority opinion and diminishes the minority opinion. This approach ensures that the learned representations of each peer are optimally adapted to the various data distributions. Extensive experiments on the Visual Genome dataset demonstrate that PSCV outperforms previous methods. We have established a new state-of-the-art (SOTA) on the SGCls task, achieving a mean of 31.6.
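The two mechanisms named above can be sketched in miniature: split predicate classes into head/body/tail groups by frequency, and combine peer predictions by majority vote. The grouping rule and function names below are assumptions for illustration; the paper's exact sampling and voting procedure may differ.

```python
from collections import Counter

def split_by_frequency(labels, n_groups=3):
    # Partition predicate classes into head/body/tail groups by frequency
    # (most frequent classes first); a sketch of the predicate-sampling step.
    counts = Counter(labels).most_common()
    size = -(-len(counts) // n_groups)  # ceiling division
    return [[cls for cls, _ in counts[i * size:(i + 1) * size]]
            for i in range(n_groups)]

def consensus_vote(peer_predictions):
    # Majority vote across peers for one subject-object pair.
    return Counter(peer_predictions).most_common(1)[0][0]
```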
Audio-visual scene understanding is a challenging problem due to the unstructured spatial-temporal relations that exist in the audio signals, in the spatial layouts of different objects, and in the various texture patterns of the visual images. Recently, many studies have focused on abstracting features with convolutional neural networks, while the learning of explicit, semantically relevant frames of sound signals and visual images has been overlooked. To this end, we present an end-to-end framework, namely the attentional graph convolutional network (AGCN), for structure-aware audio-visual scene representation. First, the spectrogram of the sound and the input image are processed by a backbone network for feature extraction. Then, to build multi-scale hierarchical information of the input features, we utilize an attention fusion mechanism to aggregate features from multiple layers of the backbone network. Notably, to represent the salient regions and contextual information of the audio-visual inputs well, the salient acoustic graph (SAG), contextual acoustic graph (CAG), salient visual graph (SVG), and contextual visual graph (CVG) are constructed for the audio-visual scene representation. Finally, the constructed graphs are passed through a graph convolutional network for structure-aware audio-visual scene recognition. Extensive experimental results on audio, visual, and audio-visual scene recognition datasets show that promising results are achieved by the AGCN method. Visualizations of the graphs on spectrograms and images are presented to show that the proposed CAG/SAG and CVG/SVG can focus on salient and semantically relevant regions.
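The graph-convolution step that the constructed graphs pass through can be sketched as the standard GCN propagation rule: a symmetrically normalized adjacency matrix with self-loops, a linear feature transform, and a ReLU. AGCN's actual layers may differ in detail.

```python
import numpy as np

def graph_conv(A, H, W):
    # One GCN-style propagation step: add self-loops, symmetrically
    # normalize the adjacency, transform the features, apply ReLU.
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

With no edges, each node simply keeps its own (transformed) features, since the normalized adjacency reduces to the identity.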
Coverage path planning is a major application for mobile robots, requiring a robot to move along a planned path so as to cover the entire map. For large-scale tasks, coverage path planning benefits greatly from multiple robots. In this paper, we describe Turn-minimizing Multirobot Spanning Tree Coverage Star (TMSTC*), an improved multirobot coverage path planning (mCPP) algorithm based on MSTC*. Our algorithm partitions the map into minimum bricks that serve as the tree's branches, thereby transforming the problem into finding the maximum independent set of a bipartite graph. We then connect the bricks with a greedy strategy to form a tree, aiming to reduce the number of turns in the corresponding circumnavigating coverage path. Our experimental results show that our approach enables multiple robots to make fewer turns and thus complete terrain coverage tasks faster than other popular algorithms.
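The reduction mentioned above leans on a classical fact: in a bipartite graph, a maximum independent set has size |V| minus the size of a maximum matching (König's theorem), so the brick-selection step boils down to bipartite matching. A minimal augmenting-path matcher, not the paper's implementation, looks like:

```python
def max_bipartite_matching(adj, n_left, n_right):
    # Augmenting-path maximum matching; adj[u] lists the right-side
    # vertices adjacent to left-side vertex u.
    match_r = [-1] * n_right

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # Free vertex, or its current partner can be re-matched.
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))
```

Given the matching size m, a maximum independent set has size n_left + n_right - m.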
Practices in the built environment have become more digitalized with the rapid development of modern design and construction technologies. However, the need of practitioners and scholars to gather complicated professional knowledge about the built environment has not yet been satisfied. In this paper, more than 80,000 paper abstracts in the built environment field were obtained to build a knowledge graph, a knowledge base that stores entities and their connective relations in a graph-structured data model. To ensure the retrieval accuracy of the entities and relations in the knowledge graph, two well-annotated datasets were created, containing 2,000 and 1,450 instances across 29 relations for the named entity recognition task and the relation extraction task, respectively. These two tasks were solved by two BERT-based models trained on the proposed datasets. Both models attained an accuracy above 85% on their respective tasks. More than 200,000 high-quality relations and entities were then obtained by using these models to extract all the abstract data. Finally, the knowledge graph is presented in a self-developed visualization system to reveal relations between various entities in the domain. Both the source code and the annotated dataset can be found here: https://github.com/HKUST-KnowComp/BEKG.
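The graph-structured data model described above amounts to storing (head, relation, tail) triples extracted by the NER and relation-extraction models, and querying them by entity and relation type. A toy version, with hypothetical entity and relation names not drawn from the BEKG dataset:

```python
from collections import defaultdict

class TripleStore:
    # Entities as nodes, typed relations as directed edges.
    def __init__(self):
        self._out = defaultdict(list)

    def add(self, head, relation, tail):
        # Record one (head, relation, tail) triple.
        self._out[head].append((relation, tail))

    def neighbors(self, head, relation=None):
        # Tails connected to `head`, optionally filtered by relation type.
        return [t for r, t in self._out[head]
                if relation is None or r == relation]
```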