The success of deep learning heavily relies on large-scale data with comprehensive labels, which are more expensive and time-consuming to acquire in 3D than for 2D images or natural language. This motivates the potential of using models pretrained on modalities other than 3D as teachers for cross-modal knowledge transfer. In this paper, we revisit masked modeling in a unified fashion of knowledge distillation, and we show that foundational Transformers pretrained on 2D images or natural language can help self-supervised 3D representation learning through training Autoencoders as Cross-Modal Teachers (ACT). The pretrained Transformers are transferred as cross-modal 3D teachers using discrete variational autoencoding self-supervision, during which the Transformers are frozen with prompt tuning for better knowledge inheritance. The latent features encoded by the 3D teachers are used as the target of masked point modeling, wherein the dark knowledge is distilled to the 3D Transformer students as foundational geometry understanding. Our ACT-pretrained 3D learner achieves state-of-the-art generalization capacity across various downstream benchmarks, e.g., 88.21% overall accuracy on ScanObjectNN. Code will be released at https://github.com/RunpeiDong/ACT.
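To make the distillation target concrete, here is a minimal sketch of masked point modeling against a frozen teacher's latent features, assuming the point cloud has already been grouped into patch tokens; the module sizes, the cosine objective, and the generic TransformerEncoder stand-in are illustrative assumptions rather than ACT's actual implementation.

```python
# Minimal sketch of masked point modeling with a frozen cross-modal teacher.
# Assumes point clouds are already grouped into patch tokens (e.g. FPS + kNN);
# dimensions and modules are placeholders, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedPointDistillation(nn.Module):
    def __init__(self, dim=384, num_layers=4, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=6, batch_first=True)
        self.student = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(dim, dim)  # predicts teacher latents at masked positions

    def forward(self, tokens, teacher_latents):
        # tokens, teacher_latents: (B, N, dim); teacher_latents come from the
        # frozen, prompt-tuned cross-modal teacher and serve as detached targets.
        B, N, _ = tokens.shape
        num_mask = int(N * self.mask_ratio)
        noise = torch.rand(B, N, device=tokens.device)
        mask = torch.zeros(B, N, dtype=torch.bool, device=tokens.device)
        mask.scatter_(1, noise.topk(num_mask, dim=1).indices, True)

        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, N, -1), tokens)
        pred = self.head(self.student(x))
        # Distill only at masked positions (negative cosine similarity objective).
        target = teacher_latents.detach()
        loss = 1 - F.cosine_similarity(pred[mask], target[mask], dim=-1).mean()
        return loss

# Usage with random stand-in features:
model = MaskedPointDistillation()
tokens = torch.randn(2, 64, 384)           # student point-patch embeddings
teacher_latents = torch.randn(2, 64, 384)  # latents from the frozen teacher
print(model(tokens, teacher_latents))
```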
Real-world image super-resolution (RISR) has received increasing attention for improving the quality of SR images under unknown, complex degradations. Existing methods rely on heavy SR models to enhance low-resolution (LR) images of different degradation levels, which significantly restricts their practical deployment on resource-limited devices. In this paper, we propose a novel Dynamic Channel Splitting scheme for efficient Real-world Image Super-Resolution, termed DCS-RISR. Specifically, we first introduce a light degradation prediction network to regress the degradation vector that simulates the real-world degradation, upon which a channel splitting vector is generated as the input to an efficient SR model. Then, a learnable octave convolution block is proposed to adaptively decide the channel splitting scale for low- and high-frequency features at each block, reducing computation overhead and memory cost by assigning a large scale to low-frequency features and a small scale to high-frequency ones. To further improve RISR performance, non-local regularization is employed to supplement the knowledge of patches from the LR and HR subspaces with computation-free inference. Extensive experiments demonstrate the effectiveness of DCS-RISR on different benchmark datasets. Our DCS-RISR not only achieves the best trade-off between computation/parameters and PSNR/SSIM metrics, but also effectively handles real-world images with different degradation levels.
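As an illustration of ratio-controlled channel splitting, the sketch below processes a fraction of the channels at half resolution (the low-frequency path) and the rest at full resolution, in the spirit of octave convolution. In DCS-RISR the splitting scale would be decided per block from the predicted degradation vector; here `alpha` is a fixed constructor argument and all module sizes are assumptions.

```python
# Rough sketch of ratio-controlled channel splitting: a fraction `alpha` of
# channels (assumed to carry low-frequency content) is processed at half
# resolution, the rest at full resolution. Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelSplitBlock(nn.Module):
    def __init__(self, channels=64, alpha=0.5):
        super().__init__()
        self.c_low = int(channels * alpha)    # low-frequency channels (cheap path)
        self.c_high = channels - self.c_low   # high-frequency channels (full path)
        self.conv_low = nn.Conv2d(self.c_low, self.c_low, 3, padding=1)
        self.conv_high = nn.Conv2d(self.c_high, self.c_high, 3, padding=1)

    def forward(self, x):
        x_high, x_low = torch.split(x, [self.c_high, self.c_low], dim=1)
        # High-frequency path runs at full resolution.
        y_high = self.conv_high(x_high)
        # Low-frequency path runs at half resolution to save computation.
        y_low = F.avg_pool2d(x_low, 2)
        y_low = self.conv_low(y_low)
        y_low = F.interpolate(y_low, size=x.shape[-2:], mode="nearest")
        return torch.cat([y_high, y_low], dim=1) + x  # residual fusion

x = torch.randn(1, 64, 48, 48)
print(ChannelSplitBlock(alpha=0.75)(x).shape)  # larger alpha -> more cheap channels
```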
Robust prediction of citywide traffic flows at different time periods plays a crucial role in intelligent transportation systems. While previous work has made great efforts to model spatio-temporal correlations, existing methods still suffer from two key limitations: i) Most models collectively predict all regions' flows without accounting for spatial heterogeneity, i.e., different regions may have skewed traffic flow distributions. ii) These models fail to capture the temporal heterogeneity induced by time-varying traffic patterns, as they typically model temporal correlations with a shared parameterized space for all time periods. To tackle these challenges, we propose a novel Spatio-Temporal Self-Supervised Learning (ST-SSL) traffic prediction framework which enhances the traffic pattern representations to be reflective of both spatial and temporal heterogeneity, with auxiliary self-supervised learning paradigms. Specifically, our ST-SSL is built over an integrated module with temporal and spatial convolutions for encoding the information across space and time. To achieve the adaptive spatio-temporal self-supervised learning, our ST-SSL first performs the adaptive augmentation over the traffic flow graph data at both attribute- and structure-levels. On top of the augmented traffic graph, two SSL auxiliary tasks are constructed to supplement the main traffic prediction task with spatial and temporal heterogeneity-aware augmentation. Experiments on four benchmark datasets demonstrate that ST-SSL consistently outperforms various state-of-the-art baselines. Since spatio-temporal heterogeneity widely exists in practical datasets, the proposed framework may also cast light on other spatial-temporal applications. Model implementation is available at https://github.com/Echo-Ji/ST-SSL.
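A hedged sketch of the attribute- and structure-level augmentation over a traffic flow graph is given below; the random masking and edge dropping stand in for ST-SSL's adaptive, heterogeneity-aware augmentation, and the tensor shapes are illustrative.

```python
# Sketch of graph augmentation at the attribute and structure levels, used to
# build SSL views over a traffic flow graph. Random masking is a simplification
# of the adaptive (heterogeneity-aware) augmentation in ST-SSL.
import torch

def augment_traffic_graph(x, adj, attr_mask_ratio=0.1, edge_drop_ratio=0.1):
    # x: (num_nodes, num_steps) traffic volumes; adj: (num_nodes, num_nodes) weights.
    # Attribute-level: zero out a fraction of node/time entries.
    attr_mask = torch.rand_like(x) < attr_mask_ratio
    x_aug = x.masked_fill(attr_mask, 0.0)
    # Structure-level: drop a fraction of the existing edges.
    edge_mask = (torch.rand_like(adj) < edge_drop_ratio) & (adj > 0)
    adj_aug = adj.masked_fill(edge_mask, 0.0)
    return x_aug, adj_aug

x = torch.rand(32, 12)                        # 32 regions, 12 time steps of inflow
adj = (torch.rand(32, 32) > 0.8).float()      # toy region adjacency
x_aug, adj_aug = augment_traffic_graph(x, adj)
print(x_aug.shape, adj_aug.shape)
```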
Air pollution is a crucial issue affecting human health and livelihoods, as well as one of the barriers to economic and social growth. Forecasting air quality has become an increasingly important endeavor with significant social impacts, especially in emerging countries like China. In this paper, we present a novel Transformer architecture termed AirFormer to collectively predict nationwide air quality in China, with an unprecedented fine spatial granularity covering thousands of locations. AirFormer decouples the learning process into two stages -- 1) a bottom-up deterministic stage that contains two new types of self-attention mechanisms to efficiently learn spatio-temporal representations; 2) a top-down stochastic stage with latent variables to capture the intrinsic uncertainty of air quality data. We evaluate AirFormer with 4-year data from 1,085 stations in the Chinese Mainland. Compared to the state-of-the-art model, AirFormer reduces prediction errors by 5%~8% on 72-hour future predictions. Our source code is available at https://github.com/yoshall/airformer.
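The two-stage idea can be pictured with a small deterministic encoder followed by a stochastic latent layer using the reparameterization trick; the dimensions, the generic TransformerEncoder, and the per-location readout below are placeholders, not AirFormer's actual architecture.

```python
# Minimal sketch of a deterministic stage followed by a stochastic latent layer
# (reparameterization trick) to model predictive uncertainty. All modules and
# sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DeterministicStochastic(nn.Module):
    def __init__(self, dim=64, latent_dim=16):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.deterministic = nn.TransformerEncoder(layer, num_layers=2)
        self.to_mu = nn.Linear(dim, latent_dim)
        self.to_logvar = nn.Linear(dim, latent_dim)
        self.readout = nn.Linear(latent_dim, 1)  # e.g. next-step pollutant level per location

    def forward(self, x):
        # x: (batch, num_locations, dim) spatio-temporal features.
        h = self.deterministic(x)                              # bottom-up deterministic stage
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # sample the latent variable
        return self.readout(z), mu, logvar                     # prediction + terms for a KL loss

model = DeterministicStochastic()
pred, mu, logvar = model(torch.randn(2, 100, 64))
print(pred.shape)  # (2, 100, 1)
```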
Learning descriptive 3D features is crucial for understanding 3D scenes with diverse objects and complex structures. However, it is usually unknown whether important geometric attributes and scene context receive enough emphasis in an end-to-end trained 3D scene understanding network. To guide 3D feature learning toward important geometric attributes and scene context, we explore the help of textual scene descriptions. Given some free-form descriptions paired with 3D scenes, we extract the knowledge regarding the object relationships and object attributes. We then inject the knowledge into 3D feature learning through three classification-based auxiliary tasks. This language-assisted training can be combined with modern object detection and instance segmentation methods to promote 3D semantic scene understanding, especially in a label-deficient regime. Moreover, the 3D feature learned with language assistance is better aligned with the language features, which can benefit various 3D-language multimodal tasks. Experiments on several benchmarks of 3D-only and 3D-language tasks demonstrate the effectiveness of our language-assisted 3D feature learning. Code is available at https://github.com/Asterisci/Language-Assisted-3D.
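One way to picture a classification-based auxiliary task is a small head that asks per-object 3D features to predict attribute labels mined from the descriptions; the attribute vocabulary, multi-label objective, and feature dimensions below are assumptions for illustration, not the paper's exact three tasks.

```python
# Sketch of a classification-based auxiliary task: per-object 3D features
# predict attribute labels parsed from free-form scene descriptions.
# Vocabulary, dimensions, and objective are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeAuxHead(nn.Module):
    def __init__(self, feat_dim=256, num_attributes=20):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_attributes)

    def forward(self, object_feats, attribute_targets):
        # object_feats: (num_objects, feat_dim) from the 3D backbone.
        # attribute_targets: (num_objects, num_attributes) multi-hot labels
        # mined from the textual descriptions (e.g. "wooden", "round").
        logits = self.classifier(object_feats)
        return F.binary_cross_entropy_with_logits(logits, attribute_targets)

head = AttributeAuxHead()
feats = torch.randn(8, 256)
targets = (torch.rand(8, 20) > 0.8).float()
aux_loss = head(feats, targets)   # added to the detection/segmentation loss
print(aux_loss.item())
```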
In autonomous driving scenarios, the dominant point-cloud-based 3D object detectors rely heavily on large amounts of accurately labeled samples; however, 3D annotation of point clouds is very tedious, expensive, and time-consuming. To reduce the dependence on extensive supervision, semi-supervised learning (SSL) based methods have been proposed. Pseudo-labeling is commonly used in SSL frameworks, but the low-quality predictions of the teacher model severely limit its performance. In this work, we propose a new pseudo-labeling framework for semi-supervised 3D object detection by enhancing the teacher model into a proficient one with several necessary designs. First, to improve the recall of pseudo-labels, a Spatio-Temporal Ensemble (STE) module is proposed to generate sufficient seed boxes. Second, to improve the precision of the recalled boxes, a Clustering-based Box Voting (CBV) module is designed to obtain aggregated votes from the clustered seed boxes, which also eliminates the need for delicate threshold selection of pseudo-labels. Furthermore, to reduce the negative influence of wrongly pseudo-labeled samples during training, a soft supervision signal is proposed by considering Box-wise Contrastive Learning (BCL). The effectiveness of our model is verified on the ONCE and Waymo datasets. For example, on ONCE, our approach significantly improves the baseline by 9.51 mAP. Moreover, with half the annotations, our model outperforms the oracle model trained with full annotations on Waymo.
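A rough sketch of the box-voting idea is given below: seed boxes are grouped by center distance and each group is aggregated by a confidence-weighted average. The distance threshold, box parameterization, and greedy grouping are simplifying assumptions rather than the actual CBV module.

```python
# Simplified clustering-based box voting: group seed boxes by center distance
# and aggregate each group with a confidence-weighted average.
import numpy as np

def cluster_box_voting(boxes, scores, dist_thresh=1.0):
    # boxes: (N, 7) [x, y, z, dx, dy, dz, yaw]; scores: (N,) confidences.
    order = np.argsort(-scores)               # visit seeds from most to least confident
    used = np.zeros(len(boxes), dtype=bool)
    voted = []
    for i in order:
        if used[i]:
            continue
        # Group all unused seed boxes whose centers lie near box i.
        d = np.linalg.norm(boxes[:, :3] - boxes[i, :3], axis=1)
        members = (~used) & (d < dist_thresh)
        used |= members
        w = scores[members] / scores[members].sum()
        voted.append((w[:, None] * boxes[members]).sum(axis=0))  # weighted vote
    return np.stack(voted)

boxes = np.random.rand(10, 7) * 5
scores = np.random.rand(10)
print(cluster_box_voting(boxes, scores).shape)
```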
Existing approaches for unsupervised point cloud pre-training are constrained to either scene-level or point/voxel-level instance discrimination. Scene-level methods tend to lose local details that are crucial for recognizing road objects, while point/voxel-level methods inherently suffer from limited receptive fields that are incapable of perceiving large objects or contextual environments. Considering that region-level representations are more suitable for 3D object detection, we devise a new unsupervised point cloud pre-training framework, called ProposalContrast, which learns robust 3D representations by contrasting region proposals. Specifically, with an exhaustive set of region proposals sampled from each point cloud, geometric point relations within each proposal are modeled to create expressive proposal representations. To better accommodate the properties of 3D detection, ProposalContrast is optimized with both inter-cluster and inter-proposal separation, i.e., improving the discriminativeness of proposal representations across semantic classes and object instances. The generalizability and transferability of ProposalContrast are verified on various 3D detectors (i.e., PV-RCNN, CenterPoint, PointPillars, and PointRCNN) and datasets (i.e., KITTI, Waymo, and ONCE).
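A minimal sketch of contrasting paired proposal embeddings with an InfoNCE objective follows; the proposal encoder (geometric relation modeling) and the inter-cluster separation term are omitted, and the temperature and dimensions are assumptions.

```python
# Minimal InfoNCE over paired region-proposal embeddings from two augmented
# views of the same point cloud: matched proposals are positives, all other
# proposals in the batch are negatives.
import torch
import torch.nn.functional as F

def proposal_info_nce(z1, z2, temperature=0.1):
    # z1, z2: (num_proposals, dim) embeddings of the same proposals in two views.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature       # (P, P) similarity matrix
    labels = torch.arange(z1.size(0))        # diagonal pairs are the positives
    return F.cross_entropy(logits, labels)

z1, z2 = torch.randn(128, 256), torch.randn(128, 256)
print(proposal_info_nce(z1, z2).item())
```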
The success of deep learning is usually accompanied by growth in neural network depth. However, the traditional training method only supervises the neural network at its last layer and propagates the supervision layer by layer, which leads to difficulty in optimizing the intermediate layers. Recently, deep supervision has been proposed to add auxiliary classifiers to the intermediate layers of deep neural networks. By optimizing these auxiliary classifiers with the supervised task loss, supervision can be applied to the shallow layers directly. However, deep supervision conflicts with the well-known observation that shallow layers learn low-level features instead of task-biased high-level semantic features. To address this issue, this paper proposes a novel training framework named Contrastive Deep Supervision, which supervises the intermediate layers with augmentation-based contrastive learning. Experimental results on nine popular datasets with eleven models demonstrate its effects on general image classification, fine-grained image classification, and object detection in supervised learning, semi-supervised learning, and knowledge distillation. The code has been released on GitHub.
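The sketch below attaches projection heads to the intermediate feature maps of a toy backbone and adds a contrastive term between two augmented views on top of the ordinary task loss; the backbone, head sizes, and NT-Xent-style objective are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of contrastive supervision on intermediate layers: projection heads
# on intermediate features, contrastive loss between two augmented views,
# added to the usual classification loss. Architecture is a toy assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, 10)  # ordinary task classifier on the last stage
        self.proj = nn.ModuleList([nn.Linear(32, 64), nn.Linear(64, 64)])  # per-stage heads

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        logits = self.fc(self.pool(f2).flatten(1))
        embeds = [p(self.pool(f).flatten(1)) for p, f in zip(self.proj, (f1, f2))]
        return logits, embeds

def nt_xent(a, b, t=0.5):
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    return F.cross_entropy(a @ b.t() / t, torch.arange(a.size(0)))

model = TinyBackbone()
view1, view2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)  # two augmentations
logits1, e1 = model(view1)
_, e2 = model(view2)
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(logits1, labels) + sum(nt_xent(x, y) for x, y in zip(e1, e2))
print(loss.item())
```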
This paper presents a system integration approach for a 6-DoF (degrees of freedom) collaborative robot to operate a manual pipette for liquid dispensing. Its technical development is threefold. First, we design an end-effector for holding and triggering manual pipettes. Second, we take advantage of the collaborative robot to recognize labware poses and plan robot motions based on the recognized poses. Third, we develop a vision-based classifier to predict and correct positioning errors, thereby precisely attaching disposable tips. Through experiments and analysis, we confirm that the developed system, especially the planning and visual recognition methods, helps ensure high-precision and flexible liquid dispensing. The developed system is suitable for low-frequency, high-variety biochemical liquid dispensing tasks. We expect it to promote the deployment of collaborative robots for laboratory automation, thereby improving experimental efficiency without significantly customizing laboratory environments.
The data augmentation module used in contrastive learning transforms a given data example into two views, which is considered essential and irreplaceable. However, the predetermined composition of multiple data augmentations brings two drawbacks. First, the artificial choice of augmentation types brings specific representational invariances to the model, which have different degrees of positive and negative effects on different downstream tasks. Treating every augmentation type equally during training makes the model learn non-optimal representations for various downstream tasks and limits the flexibility to choose augmentation types beforehand. Second, the strong data augmentations used in classical contrastive learning methods may bring too much invariance in some cases, and fine-grained information that is crucial for some downstream tasks may be lost. This paper proposes a general method to alleviate these two problems by considering where and what to contrast within a general contrastive learning framework. We first propose to learn different augmentation invariances at different depths of the model according to the importance of each data augmentation, instead of learning representational invariances uniformly in the backbone. Then, we propose to expand the contrastive content with augmentation embeddings to reduce the misleading effect of strong data augmentations. Experiments based on several baseline methods demonstrate that we learn better representations for various benchmarks on classification, detection, and segmentation downstream tasks.
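As a sketch of the second idea, the snippet below embeds the parameters of the applied augmentation and concatenates them to the image feature before projection, so the representation is not forced to discard that information; the feature extractor, parameter encoding, and dimensions are illustrative assumptions.

```python
# Sketch of expanding the contrastive content with an augmentation embedding:
# the applied augmentation's parameters (e.g. crop box, jitter strength) are
# embedded and concatenated to the feature before projection.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugAwareProjector(nn.Module):
    def __init__(self, feat_dim=128, aug_dim=6, out_dim=64):
        super().__init__()
        self.aug_embed = nn.Sequential(nn.Linear(aug_dim, 32), nn.ReLU())
        self.proj = nn.Linear(feat_dim + 32, out_dim)

    def forward(self, feats, aug_params):
        # feats: (B, feat_dim) backbone features of one augmented view.
        # aug_params: (B, aug_dim) normalized parameters of that view's augmentation.
        return self.proj(torch.cat([feats, self.aug_embed(aug_params)], dim=1))

def info_nce(a, b, t=0.2):
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    return F.cross_entropy(a @ b.t() / t, torch.arange(a.size(0)))

projector = AugAwareProjector()
f1, f2 = torch.randn(16, 128), torch.randn(16, 128)   # features of two views
p1, p2 = torch.rand(16, 6), torch.rand(16, 6)          # their augmentation parameters
loss = info_nce(projector(f1, p1), projector(f2, p2))
print(loss.item())
```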