In this work, we present a novel global descriptor for 3D place recognition, termed the Stable Triangle Descriptor (STD). For a triangle, its shape is uniquely determined by the lengths of its sides or its included angles; moreover, the shape of a triangle is completely invariant to rigid transformations. Based on this property, we first design an algorithm to efficiently extract local key points from a 3D point cloud and encode these key points into triangle descriptors. Place recognition is then achieved by matching the side lengths (and some other information) of the descriptors between point clouds. The point correspondences obtained from descriptor matching pairs can further be used in geometric verification, which greatly improves the accuracy of place recognition. In our experiments, we extensively compare our proposed system with other state-of-the-art methods (i.e., M2DP and Scan Context) on public datasets (i.e., KITTI, NCLT, and Complex-Urban) and on our self-collected dataset (acquired with a non-repetitive scanning solid-state LiDAR). All the quantitative results show that STD has stronger adaptability and a substantial improvement in precision over its counterparts. To share our findings and contribute to the community, we open source our code on GitHub: https://github.com/hku-mars/std.
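The rigid-transformation invariance at the heart of the descriptor is easy to check numerically. Below is a minimal sketch (not the authors' implementation): the descriptor is taken to be the sorted side lengths of a keypoint triangle, and `descriptors_match` with its `tol` threshold is a hypothetical matching rule for illustration only.

```python
import numpy as np

def triangle_descriptor(p1, p2, p3):
    """Sorted side lengths of the triangle formed by three 3D keypoints.

    Sorting makes the descriptor independent of vertex order; the side
    lengths themselves are invariant to any rigid transformation.
    """
    sides = [np.linalg.norm(p1 - p2),
             np.linalg.norm(p2 - p3),
             np.linalg.norm(p3 - p1)]
    return np.sort(sides)

def descriptors_match(desc_a, desc_b, tol=0.2):
    """Hypothetical side-length test: descriptors match if all three
    sorted side lengths agree within `tol` meters."""
    return np.all(np.abs(desc_a - desc_b) < tol)

# Rigid-transformation invariance check on a toy triangle.
rng = np.random.default_rng(0)
pts = rng.normal(size=(3, 3))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([5.0, -2.0, 1.0])
moved = pts @ R.T + t
assert descriptors_match(triangle_descriptor(*pts),
                         triangle_descriptor(*moved), tol=1e-6)
```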
This paper proposes an efficient and probabilistic adaptive voxel mapping method for LiDAR odometry. The map is a collection of voxels; each contains one plane (or edge) feature that enables a probabilistic representation of the environment and accurate registration of a new LiDAR scan. We further analyze the need for coarse-to-fine voxel mapping, and then use a novel voxel map organized by a hash table and octrees to build and update the map efficiently. We apply the proposed voxel map to an iterated extended Kalman filter and construct a maximum a posteriori probability problem for pose estimation. Experiments on the open KITTI dataset show the high accuracy and efficiency of our method compared with other state-of-the-art methods. Outdoor experiments in unstructured environments with a non-repetitive scanning LiDAR further verify the adaptability of our mapping method to different environments and LiDAR scanning patterns. Our code and dataset are open-sourced on GitHub.
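A minimal sketch of the core data structure described above: a voxel map keyed by a hash of the integer voxel coordinates, with a plane fitted per voxel from the eigen-decomposition of the point covariance. The voxel size, point-count threshold, and function names are assumptions for illustration; the paper's probabilistic plane update and octree refinement are not reproduced.

```python
import numpy as np
from collections import defaultdict

VOXEL_SIZE = 1.0  # hypothetical voxel edge length in meters

def voxel_key(point, size=VOXEL_SIZE):
    """Hash-table key: integer voxel coordinates of a 3D point."""
    return tuple(np.floor(point / size).astype(int))

def fit_plane(points):
    """Fit a plane to the points in one voxel via the eigen-decomposition
    of their covariance; the smallest eigenvalue indicates planarity."""
    pts = np.asarray(points)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    normal = eigvecs[:, 0]                  # direction of least variance
    return centroid, normal, eigvals[0]

# Build the voxel map from a toy scan (random points near the plane z = 0).
scan = np.random.default_rng(1).uniform(0, 5, size=(500, 3))
scan[:, 2] = 0.02 * np.random.default_rng(2).normal(size=500)

voxel_map = defaultdict(list)
for p in scan:
    voxel_map[voxel_key(p)].append(p)

for key, pts in voxel_map.items():
    if len(pts) >= 5:
        centroid, normal, residual = fit_plane(pts)
        # A new scan point can now be registered against (centroid, normal).
```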
Determining the extrinsic parameters between multiple LiDARs and cameras is essential for autonomous robots, especially for solid-state LiDARs, where each LiDAR unit has a very small field of view (FoV) and multiple units are often used collectively. Most extrinsic calibration methods are proposed for 360° mechanical spinning LiDARs, where FoV overlap with other LiDAR or camera sensors is assumed. Few research works focus on calibrating small-FoV LiDARs and cameras or on improving the calibration speed. In this work, we address the extrinsic calibration between small-FoV LiDARs and cameras, with the aim of shortening the total calibration time and further improving the calibration accuracy. We first implement an adaptive voxelization technique in the extraction and matching of LiDAR feature points. Such a process avoids the redundant creation of k-d trees in LiDAR extrinsic calibration and extracts LiDAR feature points more reliably and quickly than existing methods. We then formulate the multi-LiDAR extrinsic calibration as a LiDAR bundle adjustment (BA) problem. By deriving the cost function up to second order, the solving time and accuracy of the nonlinear least-squares problem are further improved. Our proposed method has been validated on data collected in four targetless scenes and with two types of solid-state LiDARs whose scanning patterns, densities, and FoVs are completely different. The robustness of our work has also been verified under eight initial setups, each containing 100 independent trials. Compared with state-of-the-art methods, our work improves the calibration speed of LiDAR extrinsic calibration by 15x, and also accelerates LiDAR-camera extrinsic calibration (results averaged over 50 independent trials), while maintaining accuracy.
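A minimal sketch of what an adaptive voxelization step could look like: a voxel is recursively split until the points it contains are roughly planar, so correspondences can later be formed per planar cell instead of via per-point k-d tree searches. The planarity test, thresholds, and recursion limits are hypothetical and not taken from the paper.

```python
import numpy as np

def is_planar(points, eig_ratio=0.01):
    """Hypothetical planarity test: the smallest eigenvalue of the point
    covariance is tiny compared with the largest one."""
    cov = np.cov((points - points.mean(axis=0)).T)
    eigvals = np.linalg.eigvalsh(cov)
    return eigvals[0] < eig_ratio * eigvals[2]

def adaptive_voxelize(points, origin, size, min_size=0.25, min_points=10):
    """Recursively split a cubic voxel into 8 children until the points it
    holds form a plane (or the voxel becomes too small or too sparse).
    Returns a list of (points, origin, size) planar cells."""
    if len(points) < min_points:
        return []
    if size <= min_size or is_planar(points):
        return [(points, origin, size)]
    cells = []
    half = size / 2.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                child_origin = origin + half * np.array([dx, dy, dz])
                inside = np.all((points >= child_origin) &
                                (points < child_origin + half), axis=1)
                cells += adaptive_voxelize(points[inside], child_origin, half,
                                           min_size, min_points)
    return cells

# Example: voxelize a scan that lies inside a 10 m cube starting at the origin.
# cells = adaptive_voxelize(scan_xyz, origin=np.zeros(3), size=10.0)
```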
Graph Neural Networks (GNNs) have shown satisfactory performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias from the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
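The abstract does not detail RELIANT's objective, so the following is only a generic sketch of what a fairness-aware distillation loss could look like: a task loss, a temperature-scaled KL term toward the teacher, and a demographic-parity penalty on the student's predictions. The loss weights and the fairness term are assumptions for illustration, not RELIANT's actual design.

```python
import torch
import torch.nn.functional as F

def kd_fairness_loss(student_logits, teacher_logits, labels, sensitive,
                     temperature=2.0, alpha=0.5, beta=0.1):
    """Generic distillation loss with a fairness penalty (not RELIANT itself).

    student_logits, teacher_logits: [num_nodes, num_classes]
    labels:    ground-truth class indices for the labeled nodes
    sensitive: binary sensitive-attribute vector per node (0 or 1)
    """
    # Task loss on the student's own predictions.
    ce = F.cross_entropy(student_logits, labels)

    # Soft-label distillation: match the teacher's temperature-scaled distribution.
    kd = F.kl_div(F.log_softmax(student_logits / temperature, dim=-1),
                  F.softmax(teacher_logits / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2

    # Demographic-parity gap between the two sensitive groups, measured on
    # the student's predicted probability of class 1.
    prob_pos = F.softmax(student_logits, dim=-1)[:, 1]
    gap = (prob_pos[sensitive == 1].mean() - prob_pos[sensitive == 0].mean()).abs()

    return ce + alpha * kd + beta * gap
```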
To generate high-quality rendered images for real-time applications, it is common to trace only a few samples per pixel (spp) at a lower resolution and then supersample to the high resolution. Based on the observation that the rendered pixels at a low resolution are typically highly aliased, we present a novel method for neural supersampling based on ray tracing 1/4-spp samples at the high resolution. Our key insight is that the ray-traced samples at the target resolution are accurate and reliable, which makes the supersampling an interpolation problem. We present a mask-reinforced neural network to reconstruct and interpolate high-quality image sequences. First, a novel temporal accumulation network is introduced to compute the correlation between current and previous features and thereby significantly improve their temporal stability. Then a reconstruction network based on a multi-scale U-Net with skip connections is adopted to reconstruct and generate the desired high-resolution image. Experimental results and comparisons show that our proposed method generates higher-quality supersampling results than current state-of-the-art methods without increasing the total number of ray-tracing samples.
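A minimal sketch of a temporal accumulation step in the spirit described: the previous frame's features are warped by screen-space motion vectors and blended with the current features, weighted by their per-pixel correlation. Tensor shapes, the blending rule, and the `motion` convention are hypothetical; this is not the paper's network.

```python
import torch
import torch.nn.functional as F

def temporal_accumulate(curr_feat, prev_feat, motion):
    """Blend current and motion-warped previous features by their correlation.

    curr_feat, prev_feat: [B, C, H, W] feature maps
    motion:               [B, 2, H, W] motion vectors in pixels (x, y), pointing
                          from the current frame back to the previous one
    """
    B, C, H, W = curr_feat.shape

    # Sampling grid in normalized [-1, 1] coordinates, offset by the motion
    # vectors, used to fetch the previous frame's features.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(curr_feat.device)  # [2, H, W]
    src = base.unsqueeze(0) + motion                                  # [B, 2, H, W]
    grid = torch.stack((2 * src[:, 0] / (W - 1) - 1,
                        2 * src[:, 1] / (H - 1) - 1), dim=-1)         # [B, H, W, 2]
    warped = F.grid_sample(prev_feat, grid, align_corners=True)

    # Per-pixel cosine similarity as a confidence weight in [0, 1].
    corr = F.cosine_similarity(curr_feat, warped, dim=1, eps=1e-6)    # [B, H, W]
    w = corr.clamp(min=0).unsqueeze(1)                                # [B, 1, H, W]

    # Accumulate: trust history where it correlates well with the present.
    return (1 - 0.5 * w) * curr_feat + 0.5 * w * warped
```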
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples the part features and the things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It also decouples the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ and design a new part-whole cross-attention scheme, realized as a part-whole interaction with masked cross attention, to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
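A minimal sketch of masked cross attention in the Mask2Former style referenced above: each query attends only to the pixels covered by its current mask prediction. The exact part-whole scheme of Panoptic-PartFormer++ is not reproduced; the tensor layouts and the empty-mask guard are illustrative assumptions.

```python
import torch

def masked_cross_attention(queries, pixel_feat, mask_logits):
    """One masked cross-attention step (Mask2Former-style sketch).

    queries:     [B, Q, C]  object/part queries
    pixel_feat:  [B, N, C]  flattened pixel (key/value) features
    mask_logits: [B, Q, N]  current mask prediction per query
    """
    scale = queries.shape[-1] ** -0.5
    attn = torch.einsum("bqc,bnc->bqn", queries, pixel_feat) * scale

    # Foreground mask from the current prediction; a query may only attend
    # to pixels it currently claims, which sharpens region boundaries.
    fg = mask_logits.sigmoid() > 0.5
    attn = attn.masked_fill(~fg, float("-inf"))

    # Guard against queries whose mask is empty (rows that are all -inf).
    attn = torch.where(fg.any(dim=-1, keepdim=True), attn, torch.zeros_like(attn))

    weights = attn.softmax(dim=-1)
    return queries + torch.einsum("bqn,bnc->bqc", weights, pixel_feat)
```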
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors and cannot be extended to novel domains and classes. To tackle these limitations, we introduce embeddings learned from Contrastive Language-Image Pre-training (CLIP) into segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationships between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of the CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
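A minimal sketch of how CLIP text embeddings of class names could condition a segmentation head, using the open-source `clip` package. The prompt template, class list, and the correlation-style mask head are assumptions for illustration and are not necessarily the Universal Model's actual design.

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Hypothetical prompt template and a few of the target classes.
classes = ["liver", "liver tumor", "pancreas", "kidney"]
prompts = clip.tokenize([f"a computerized tomography of a {c}" for c in classes]).to(device)

with torch.no_grad():
    text_emb = model.encode_text(prompts)                 # [num_classes, 512]
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# One hypothetical way to use the embeddings: predict a per-class mask by
# correlating pixel features (projected to the CLIP dimension) with each
# class embedding, so semantically related classes share nearby directions.
pixel_feat = torch.randn(1, 512, 64, 64, device=device)   # [B, C, H, W] from a backbone
mask_logits = torch.einsum("kc,bchw->bkhw", text_emb.float(), pixel_feat)
```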
This paper presents the technology of user next-intent prediction with a concept knowledge graph. The system has been deployed on the Web at Alipay, serving more than 100 million daily active users. Specifically, we propose AlipayKG to explicitly characterize user intent; it is an offline concept knowledge graph in the Life-Service domain that models the historical behaviors of users, the rich content interacted with by users, and the relations between them. We further introduce a Transformer-based model that integrates expert rules from the knowledge graph to infer the online user's next intent. Experimental results demonstrate that the proposed system effectively enhances the performance of the downstream tasks while retaining explainability.
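As a rough illustration of the modeling setup (not AlipayKG's actual architecture), the sketch below encodes a user's recent behavior as a sequence of concept IDs with a small Transformer encoder and classifies the next intent; the vocabulary sizes and dimensions are hypothetical, and the expert-rule integration is omitted.

```python
import torch
import torch.nn as nn

class NextIntentModel(nn.Module):
    """Toy next-intent classifier over a sequence of behavior/concept IDs."""
    def __init__(self, num_concepts=10000, num_intents=300, dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_concepts, dim)
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(dim, num_intents)

    def forward(self, concept_ids):            # [B, T] historical behaviors
        h = self.encoder(self.embed(concept_ids))
        return self.head(h[:, -1])             # predict from the last position

model = NextIntentModel()
logits = model(torch.randint(0, 10000, (2, 20)))   # [2, num_intents]
```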
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility and maintain low confidence in such black-box systems, a problem exacerbated by poor generalization to out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the box transparent in a quantifiable way through uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and on OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap on three segmentation datasets and apply it in clinical settings. Our work sheds light on safe clinical applications and explainable AI, and can contribute towards trustworthiness in the medical domain.
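Subjective logic gives a standard way to turn network outputs into class probabilities plus an explicit uncertainty (vacuity) value; the sketch below shows that computation. It assumes a softplus evidence head and does not reproduce EvidenceCap's losses or architecture.

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """Per-pixel class probabilities and uncertainty from evidence (sketch).

    logits: [B, K, H, W] raw network outputs for K classes.
    Under subjective logic, non-negative evidence e_k parametrizes a
    Dirichlet with alpha_k = e_k + 1; the vacuity (uncertainty) is
    u = K / sum_k(alpha_k), so little total evidence means high uncertainty.
    """
    evidence = F.softplus(logits)               # e_k >= 0
    alpha = evidence + 1.0                      # Dirichlet parameters
    strength = alpha.sum(dim=1, keepdim=True)   # S = sum_k alpha_k
    prob = alpha / strength                     # expected class probabilities
    K = logits.shape[1]
    uncertainty = K / strength                  # vacuity in (0, 1]
    return prob, uncertainty
```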
Depression is a leading cause of death worldwide, and the diagnosis of depression is nontrivial. Multimodal learning is a popular solution for the automatic diagnosis of depression, but existing works suffer from two main drawbacks: 1) the high-order interactions between different modalities cannot be well exploited; and 2) the interpretability of the models is weak. To remedy these drawbacks, we propose a multimodal multi-order factor fusion (MMFF) method. Our method can well exploit the high-order interactions between different modalities by extracting and assembling modality factors under the guidance of a shared latent proxy. We conduct extensive experiments on two recent and popular datasets, E-DAIC-WOZ and CMDC, and the results show that our method achieves significantly better performance than other existing approaches. Besides, by analyzing the process of factor assembly, our model can intuitively show the contribution of each factor, which helps us understand the fusion mechanism.
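A minimal sketch of factorized multimodal fusion in the spirit described: each modality is projected to a small number of factors, the factors interact multiplicatively, and the per-factor norms expose each factor's contribution. The dimensions, the multiplicative rule, and the `FactorFusion` module are illustrative assumptions; the shared latent proxy of MMFF is not modeled here.

```python
import torch
import torch.nn as nn

class FactorFusion(nn.Module):
    """Toy factorized fusion of audio/video/text features (rank-r sketch)."""
    def __init__(self, dims=(64, 128, 300), rank=4, out_dim=32):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, rank * out_dim) for d in dims])
        self.rank, self.out_dim = rank, out_dim

    def forward(self, feats):                       # list of [B, d_m] tensors
        # Each modality contributes `rank` factors of size out_dim.
        factors = [p(x).view(-1, self.rank, self.out_dim)
                   for p, x in zip(self.proj, feats)]
        fused = torch.stack(factors).prod(dim=0)    # multiplicative interaction
        # Per-factor norms expose how much each factor contributes to the fusion.
        contribution = fused.norm(dim=-1)           # [B, rank]
        return fused.sum(dim=1), contribution       # [B, out_dim], [B, rank]

fusion = FactorFusion()
out, contrib = fusion([torch.randn(2, 64), torch.randn(2, 128), torch.randn(2, 300)])
```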