The recent success of pre-trained 2D vision models is mostly attributable to learning from large-scale datasets. However, compared with 2D image datasets, the pre-training data currently available for 3D point clouds is limited. To overcome this limitation, we propose a knowledge distillation method that lets 3D point cloud pre-trained models acquire knowledge directly from a 2D representation learning model, particularly the image encoder of CLIP, through concept alignment. Specifically, we introduce a cross-attention mechanism to extract concept features from the 3D point cloud and compare them with the semantic information from 2D images. In this scheme, the point cloud pre-trained models learn directly from the rich information contained in 2D teacher models. Extensive experiments demonstrate that the proposed knowledge distillation scheme achieves higher accuracy than state-of-the-art 3D pre-training methods on downstream tasks over both synthetic and real-world datasets, including object classification, object detection, semantic segmentation, and part segmentation.
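As a rough illustration of the cross-attention concept-alignment idea sketched above, the following minimal PyTorch snippet shows learnable concept queries attending over point tokens and being aligned with a frozen CLIP image feature; module names, dimensions, and the cosine alignment loss are assumptions for illustration, not the paper's exact design.

```python
# Hypothetical sketch of cross-attention concept alignment for 2D-to-3D
# distillation; module and parameter names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptAlignmentDistiller(nn.Module):
    def __init__(self, point_dim=384, clip_dim=512, num_concepts=16):
        super().__init__()
        # Learnable concept queries attend over point-cloud tokens.
        self.concept_queries = nn.Parameter(torch.randn(num_concepts, point_dim))
        self.cross_attn = nn.MultiheadAttention(point_dim, num_heads=8, batch_first=True)
        self.proj = nn.Linear(point_dim, clip_dim)  # map to the CLIP feature space

    def forward(self, point_tokens, clip_image_feat):
        # point_tokens: (B, N, point_dim) from the 3D student encoder
        # clip_image_feat: (B, clip_dim) from the frozen CLIP image encoder
        B = point_tokens.size(0)
        q = self.concept_queries.unsqueeze(0).expand(B, -1, -1)
        concepts, _ = self.cross_attn(q, point_tokens, point_tokens)  # (B, K, point_dim)
        student = F.normalize(self.proj(concepts.mean(dim=1)), dim=-1)
        teacher = F.normalize(clip_image_feat, dim=-1)
        # Distillation loss: align pooled 3D concepts with the 2D teacher feature.
        return (1.0 - (student * teacher).sum(dim=-1)).mean()

loss = ConceptAlignmentDistiller()(torch.randn(4, 256, 384), torch.randn(4, 512))
```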
The pretraining-finetuning paradigm has demonstrated great success in NLP and 2D image fields because of the high-quality representation ability and transferability of their pretrained models. However, pretraining such a strong model is difficult in the 3D point cloud field since the training data is limited and point cloud collection is expensive. This paper introduces \textbf{E}fficient \textbf{P}oint \textbf{C}loud \textbf{L}earning (EPCL), an effective and efficient point cloud learner for directly training high-quality point cloud models with a frozen CLIP model. Our EPCL connects the 2D and 3D modalities by semantically aligning 2D image features and point cloud features without paired 2D-3D data. Specifically, the input point cloud is divided into a sequence of tokens and fed directly into the frozen CLIP model to learn point cloud representations. Furthermore, we design a task token to narrow the gap between 2D images and 3D point clouds. Comprehensive experiments on 3D detection, semantic segmentation, classification, and few-shot learning demonstrate that the 2D CLIP model can serve as an efficient point cloud backbone, and our method achieves state-of-the-art accuracy on both real-world and synthetic downstream tasks. Code will be made available.
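A minimal sketch of the described recipe, i.e., tokenizing the point cloud and passing the tokens plus a learnable task token through frozen transformer blocks; the simple tokenizer and the `frozen_clip_blocks` wrapper below are illustrative assumptions rather than EPCL's actual components.

```python
# Illustrative sketch: point tokens + a learnable task token fed through a
# frozen 2D transformer. `frozen_clip_blocks` stands in for CLIP's visual
# transformer layers; the tokenizer below is an assumed PointNet-style one.
import torch
import torch.nn as nn

class EPCLStyleEncoder(nn.Module):
    def __init__(self, frozen_clip_blocks, embed_dim=768, group_size=32):
        super().__init__()
        self.tokenizer = nn.Sequential(              # embeds each local point group
            nn.Linear(group_size * 3, 256), nn.GELU(), nn.Linear(256, embed_dim))
        self.task_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.blocks = frozen_clip_blocks
        for p in self.blocks.parameters():           # keep the 2D model frozen
            p.requires_grad = False

    def forward(self, point_groups):
        # point_groups: (B, G, group_size, 3) local patches of the point cloud
        B = point_groups.size(0)
        tokens = self.tokenizer(point_groups.flatten(2))           # (B, G, D)
        x = torch.cat([self.task_token.expand(B, -1, -1), tokens], dim=1)
        return self.blocks(x)                                      # frozen transformer

# Usage with a stand-in frozen transformer (a real setup would reuse CLIP's blocks):
stand_in = nn.TransformerEncoder(nn.TransformerEncoderLayer(768, 8, batch_first=True), 2)
print(EPCLStyleEncoder(stand_in)(torch.randn(2, 64, 32, 3)).shape)  # (2, 65, 768)
```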
This paper focuses on the task of survival time analysis for lung cancer. Although much progress has been made on this problem in recent years, the performance of existing methods is still far from satisfactory. Traditional and some deep learning-based survival time analyses for lung cancer are mostly based on textual clinical information such as staging, age, histology, etc. Unlike existing methods that predict from a single modality, we observe that a human clinician usually relies on multimodal data, such as textual clinical records and visual scans, to estimate survival time. Motivated by this, we contribute a cross-modality network for survival analysis named Lite-ProSENet that simulates a human clinician's manner of decision making. Extensive experiments were conducted using data from 422 NSCLC patients from The Cancer Imaging Archive (TCIA). The results show that our Lite-ProSENet compares favorably against all comparison methods and achieves a new state of the art with a concordance of 89.3%. The code will be made publicly available.
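The reported 89.3% is a concordance score; for reference, the standard concordance index (C-index) on right-censored survival data can be computed as in the small sketch below. This is generic metric code, not the authors' implementation.

```python
# Standard concordance index (C-index) for right-censored survival data.
def concordance_index(times, events, risk_scores):
    """times: observed times; events: 1 if death observed, 0 if censored;
    risk_scores: higher score should mean shorter predicted survival."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if i had an observed event before j's time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Example: perfectly ordered predictions give a C-index of 1.0.
print(concordance_index([2, 5, 7], [1, 1, 0], [0.9, 0.4, 0.1]))
```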
Automatic tumor or lesion segmentation is a crucial step in medical image analysis for computer-aided diagnosis. Although existing methods based on convolutional neural networks (CNNs) have achieved state-of-the-art performance, many challenges remain in medical tumor segmentation. This is because, while the human visual system can effectively detect symmetries in 2D images, conventional CNNs exploit only translation invariance, ignoring further inherent symmetries present in medical images such as rotations and reflections. To address this, we propose a novel group-equivariant segmentation framework that learns more precise representations by encoding those inherent symmetries. First, kernel-based equivariant operations are designed for every orientation, which effectively closes the gap in learning symmetries left by existing methods. Then, to keep the segmentation network globally equivariant, we design distinct group layers with layer-wise symmetry constraints. Finally, based on our new framework, extensive experiments on real-world clinical data show that a group-equivariant Res-UNet (named GER-UNet) outperforms its conventional CNN-based counterparts and state-of-the-art segmentation methods on the tasks of liver tumor segmentation, COVID-19 lung infection segmentation, and retinal vessel detection. More importantly, the newly built GER-UNet also shows potential for reducing sample complexity and filter redundancy, upgrading current segmentation CNNs, and delineating organs in other medical imaging modalities.
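For intuition, a toy example of the kind of rotation-equivariant (p4 group) convolution such a framework builds on is sketched below: the same kernel is applied in all four 90-degree orientations, so rotating the input permutes orientation channels rather than changing the features arbitrarily. This is a generic illustration, not the GER-UNet implementation.

```python
# Toy p4 (4-rotation) equivariant lifting convolution: one shared kernel is
# applied in all four 90-degree orientations, producing an orientation axis.
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4LiftingConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.1)

    def forward(self, x):
        outs = []
        for r in range(4):  # rotate the shared kernel by r * 90 degrees
            w = torch.rot90(self.weight, r, dims=(2, 3))
            outs.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        return torch.stack(outs, dim=2)  # (B, out_ch, 4, H, W): orientation axis

layer = P4LiftingConv(1, 8)
print(layer(torch.randn(1, 1, 32, 32)).shape)  # torch.Size([1, 8, 4, 32, 32])
```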
Generating a set of high-quality correspondences or matches is one of the most critical steps in point cloud registration. This paper proposes a learning framework, COTREG, that predicts correspondences for 3D point cloud registration by jointly considering point-wise and structural matching. Specifically, we cast these two matchings as optimizations based on the Wasserstein distance and the Gromov-Wasserstein distance, respectively. The task of establishing correspondences can thus naturally be reformulated as a coupled optimal transport problem. Furthermore, we design a network to predict a confidence score for each point of the point cloud, which provides overlap-region information for generating correspondences. Our correspondence prediction pipeline can easily be integrated with learned features such as FCGF or traditional descriptors such as FPFH. Comprehensive experiments on the 3DMatch, KITTI, 3DCSR, and ModelNet40 benchmarks show the state-of-the-art performance of the proposed method.
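A compact illustration of the optimal-transport view taken above: entropic Sinkhorn iterations turn a point-wise feature cost into a soft correspondence (transport) matrix. This covers only the Wasserstein part and is generic OT code, not COTREG itself; the coupled Gromov-Wasserstein term and the confidence network are omitted.

```python
# Entropic optimal transport (Sinkhorn) producing a soft correspondence matrix
# from a feature-distance cost; illustrates only the Wasserstein matching part.
import torch

def sinkhorn_correspondences(feat_src, feat_tgt, eps=0.05, iters=50):
    # feat_src: (N, D), feat_tgt: (M, D) per-point features of the two clouds
    cost = torch.cdist(feat_src, feat_tgt)              # (N, M) matching cost
    K = torch.exp(-cost / eps)                           # Gibbs kernel
    a = torch.full((cost.size(0),), 1.0 / cost.size(0))  # uniform source marginal
    b = torch.full((cost.size(1),), 1.0 / cost.size(1))  # uniform target marginal
    u = torch.ones_like(a)
    for _ in range(iters):                               # alternate marginal scaling
        u = a / (K @ (b / (K.t() @ u)))
    v = b / (K.t() @ u)
    return torch.diag(u) @ K @ torch.diag(v)             # transport plan ~ soft matches

plan = sinkhorn_correspondences(torch.randn(100, 32), torch.randn(120, 32))
print(plan.shape, plan.sum().item())                     # (100, 120), sums to ~1
```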
Accurate and efficient point cloud registration is challenging because noise and large numbers of points hamper correspondence search. This challenge remains an open research problem since most existing methods rely on correspondence search. To address it, we propose a new data-driven registration algorithm that investigates deep generative neural networks for point cloud registration. Given two point clouds, the motivation is to directly generate the aligned point cloud, which is useful in many applications such as 3D matching and search. To realize this motivation, we design an end-to-end generative neural network for aligned point cloud generation, containing three novel components. First, a point multi-layer perceptron (MLP) mixer network (PointMixer) is proposed to efficiently maintain both global and local structural information within a point cloud. Second, a feature interaction module is proposed to fuse information across the two point clouds. Third, a parallel and differentiable sample consensus method is proposed to compute the transformation matrix of the input point clouds from the generated registration result. The proposed generative neural network is trained in a GAN framework by maintaining data distribution and structural similarity. Experiments on the ModelNet40 and 7Scene datasets demonstrate that the proposed algorithm achieves state-of-the-art accuracy and efficiency. Notably, compared with the state-of-the-art correspondence-based algorithm, our method reduces the registration error (CD) by about $2\times$ and the running time by about $12\times$.
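In the spirit of the PointMixer component described above, a bare-bones MLP-mixer style block over point tokens is sketched below: one MLP mixes across tokens (global structure) and one across channels (per-point features). Layer sizes and structure are assumptions for illustration.

```python
# Minimal MLP-mixer style block over a fixed-length sequence of point tokens;
# an illustrative stand-in for the described PointMixer, not the actual model.
import torch
import torch.nn as nn

class PointMixerBlock(nn.Module):
    def __init__(self, num_tokens=1024, dim=128, hidden=256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(nn.Linear(num_tokens, hidden), nn.GELU(),
                                       nn.Linear(hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                         nn.Linear(hidden, dim))

    def forward(self, x):                       # x: (B, num_tokens, dim)
        # Token mixing captures global structure across the whole point set.
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        # Channel mixing refines each point token's local feature.
        return x + self.channel_mlp(self.norm2(x))

print(PointMixerBlock()(torch.randn(2, 1024, 128)).shape)  # (2, 1024, 128)
```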
Existing state-of-the-art point descriptors rely solely on structural information, omitting texture information. However, texture information is crucial for humans to distinguish parts of a scene. Moreover, learning-based point descriptors are black boxes, and it remains unclear how the raw points contribute to the final descriptor. In this paper, we propose a new multimodal fusion method that generates point cloud registration descriptors by considering both structural and texture information. Specifically, a novel attention-fusion module is designed to extract weighted texture information for descriptor extraction. In addition, we propose an interpretable module to explain which raw points contribute to the final descriptor: we use a descriptor element as the loss to back-propagate to the target layer and regard the gradient as each point's importance to the final descriptor. This paper thus moves a step further toward explaining deep learning in the registration task. Comprehensive experiments on 3DMatch, 3DLoMatch, and KITTI show that the multimodal fusion descriptor achieves state-of-the-art accuracy and improves descriptor distinctiveness. We also show that our interpretable module works well in explaining registration descriptor extraction.
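A small sketch of the gradient-as-importance idea behind the interpretable module: back-propagate a chosen descriptor element to the input points and read each point's gradient magnitude as its contribution. This is generic autograd usage with an assumed toy descriptor network, not the paper's exact procedure.

```python
# Gradient-based point importance: back-propagate one descriptor element to the
# input points; the per-point gradient norm is read as that point's contribution.
# `descriptor_net` is an assumed toy differentiable points -> descriptor model.
import torch
import torch.nn as nn

descriptor_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 32))

points = torch.randn(2048, 3, requires_grad=True)         # one local patch
descriptor = descriptor_net(points).max(dim=0).values     # (32,) pooled descriptor

element = descriptor[7]                                    # pick one descriptor element
element.backward()                                         # gradients w.r.t. raw points

importance = points.grad.norm(dim=1)                       # (2048,) per-point importance
print(importance.topk(10).indices)                         # points that matter most
```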
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
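As a hedged sketch of what an inverted-residual block with an attention sub-module can look like, the snippet below mixes a depthwise convolution (short-range dependency) with multi-head self-attention (long-range interaction) inside one expanded residual branch; the ordering and sizes are illustrative assumptions, not the exact iRMB.

```python
# Illustrative inverted-residual block combining depthwise convolution
# (short-range) with self-attention (long-range) inside one expanded branch.
import torch
import torch.nn as nn

class InvertedResidualAttnBlock(nn.Module):
    def __init__(self, dim=64, expand=4, heads=4):
        super().__init__()
        hid = dim * expand
        self.expand = nn.Conv2d(dim, hid, 1)                       # 1x1 expansion
        self.dw = nn.Conv2d(hid, hid, 3, padding=1, groups=hid)    # depthwise conv
        self.attn = nn.MultiheadAttention(hid, heads, batch_first=True)
        self.project = nn.Conv2d(hid, dim, 1)                      # 1x1 projection
        self.act = nn.GELU()

    def forward(self, x):                                          # x: (B, C, H, W)
        B, C, H, W = x.shape
        h = self.act(self.expand(x))
        h = self.act(self.dw(h))                                   # local mixing
        seq = h.flatten(2).transpose(1, 2)                         # (B, H*W, hid)
        h = h + self.attn(seq, seq, seq)[0].transpose(1, 2).reshape(B, -1, H, W)
        return x + self.project(h)                                 # inverted residual

print(InvertedResidualAttnBlock()(torch.randn(1, 64, 14, 14)).shape)  # (1, 64, 14, 14)
```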
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs as training data for a BERT-based state-of-the-art QA system. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed from subjects (or objects) and predicates while the objects (or subjects) are taken as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves performance on par with existing state-of-the-art QA systems while being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
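A toy illustration of turning <subject, predicate, object> triples into synthetic question-answer pairs as described; the question templates are assumptions and the OpenIE extraction step itself is omitted.

```python
# Toy conversion of OpenIE-style triples into synthetic QA pairs: the subject
# or the object becomes the answer, and the remaining elements form the question.
# Templates are illustrative assumptions; real OpenIE extraction is omitted.
def triples_to_qa(triples):
    qa_pairs = []
    for subj, pred, obj in triples:
        qa_pairs.append((f"Who or what {pred} {obj}?", subj))  # subject is the answer
        qa_pairs.append((f"{subj} {pred} what?", obj))          # object is the answer
    return qa_pairs

triples = [("Marie Curie", "discovered", "polonium")]
for question, answer in triples_to_qa(triples):
    print(question, "->", answer)
```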
Transformers have achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement provided by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach designed for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer given limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens to simultaneously achieve difficulty measurement and maintain the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
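A condensed sketch of the online/target structure described above, i.e., a BYOL-style online branch predicting a momentum target branch's representation of differently perturbed tokens; the auxiliary difficulty-ranking head is omitted and all names and sizes are assumptions.

```python
# Condensed BYOL-style online/target setup over perturbed token sequences,
# in the spirit of the described BOLT scheme; encoders and perturbations are
# simplified, and the difficulty-ranking task is omitted.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class OnlineTargetLearner(nn.Module):
    def __init__(self, encoder, dim=256, momentum=0.99):
        super().__init__()
        self.online = encoder
        self.target = copy.deepcopy(encoder)          # momentum copy, no gradients
        for p in self.target.parameters():
            p.requires_grad = False
        self.predictor = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.m = momentum

    @torch.no_grad()
    def update_target(self):
        # Exponential moving average update of the target branch.
        for po, pt in zip(self.online.parameters(), self.target.parameters()):
            pt.mul_(self.m).add_(po, alpha=1 - self.m)

    def forward(self, tokens_view1, tokens_view2):
        # Two perturbed versions of the same patch-embedding tokens.
        pred = self.predictor(self.online(tokens_view1))
        with torch.no_grad():
            tgt = self.target(tokens_view2)
        # Negative cosine similarity between prediction and target representation.
        return -F.cosine_similarity(pred, tgt, dim=-1).mean()

encoder = nn.Sequential(nn.Flatten(1), nn.Linear(16 * 256, 256))  # toy token encoder
learner = OnlineTargetLearner(encoder)
loss = learner(torch.randn(8, 16, 256), torch.randn(8, 16, 256))
loss.backward(); learner.update_target()
```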