Point clouds and RGB images are two prevalent perception sources in autonomous driving. The former provides accurate localization of objects, while the latter is denser and richer in semantic information. Recently, AutoAlign presented a learnable paradigm for combining these two modalities for 3D object detection. However, it suffers from the high computational cost introduced by global-wise attention. To solve this problem, we propose a cross-domain deformable module in this work. It attends to sparse learnable sampling points for cross-modal relational modeling, which enhances the tolerance to calibration errors and greatly speeds up feature aggregation across modalities. To overcome the complex GT-AUG under multi-modal settings, we design a simple yet effective cross-modal augmentation strategy based on the convex combination of image patches given their depth information. Moreover, by carrying out a novel image-level dropout training scheme, our model is able to infer in a dynamic manner. To this end, we present AutoAlignV2, a faster and stronger multi-modal 3D detection framework built on top of AutoAlign. Extensive experiments on the nuScenes benchmark demonstrate the effectiveness and efficiency of AutoAlignV2. Notably, our best model reaches 72.4 NDS on the nuScenes test leaderboard, achieving new state-of-the-art results among all published multi-modal 3D object detectors. The code will be available at https://github.com/zehuichen123/autoalignv2.
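The depth-guided convex combination of image patches can be illustrated with a toy sketch. The function name and the specific mixing rule below (the pixel closer to the camera receives the larger weight, loosely mimicking occlusion) are illustrative assumptions, not the AutoAlignV2 implementation:

```python
def depth_aware_mix(patch_a, depth_a, patch_b, depth_b, lam=0.8):
    """Convex combination of two image patches guided by per-pixel depth.

    Per pixel, the patch that is closer to the camera (smaller depth)
    receives the larger mixing weight `lam`.  Patches and depth maps are
    2D lists of equal shape.  Hypothetical sketch.
    """
    out = []
    for ra, da, rb, db in zip(patch_a, depth_a, patch_b, depth_b):
        row = []
        for pa, za, pb, zb in zip(ra, da, rb, db):
            w = lam if za <= zb else 1.0 - lam  # nearer pixel dominates
            row.append(w * pa + (1.0 - w) * pb)
        out.append(row)
    return out
```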
Driven by emerging large-scale autonomous driving datasets and the rapid development of deep learning techniques, monocular 3D object detection (Mono3D) has achieved tremendous improvements. However, Mono3D detectors generalize poorly due to severe domain gaps (e.g., in field of view (FOV), pixel size, and object size across datasets), leading to a drastic performance drop on unseen domains. To solve these issues, we combine a position-invariant transform and multi-scale training with a pixel-size-aware depth strategy to construct an effective unified camera-generalized paradigm (CGP). It fully accounts for the differences in FOV and pixel size of images captured by different cameras. Moreover, through an exhaustive systematic study, we further investigate the obstacles to the quantitative metrics under cross-dataset inference. We find that the size bias of predictions leads to a colossal failure. Hence, we propose a 2D-3D geometry-consistent object scaling strategy (GCOS) to bridge the gap via instance-level augmentation. Our method, called DGMono3D, achieves remarkable performance on all evaluated datasets and surpasses unsupervised domain adaptation schemes even without using data from the target domain.
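The geometry-consistent scaling idea follows directly from the pinhole model: apparent size is proportional to physical size divided by depth, so enlarging an object's 3D size while keeping its 2D projection fixed requires enlarging its depth by the same factor. A minimal sketch under that assumption (function and argument names are illustrative):

```python
def gcos_rescale(size_3d, depth, scale):
    """2D-3D geometry-consistent object scaling sketch.

    Under a pinhole camera, apparent size ~ physical size / depth, so
    scaling an object's 3D dimensions by `scale` while keeping its 2D
    box fixed requires scaling its depth by the same factor.
    """
    return [s * scale for s in size_3d], depth * scale
```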
3D object detection from multiple image views is a fundamental and challenging task for visual scene understanding. Owing to its low cost and high efficiency, multi-view 3D object detection has demonstrated promising application prospects. However, accurately detecting objects through perspective views in 3D space is extremely difficult due to the lack of depth information. Recently, DETR3D introduced a novel 3D-2D query paradigm for aggregating multi-view images for 3D object detection and achieved state-of-the-art performance. In this paper, through intensive pilot experiments, we quantify the detection of objects located in different regions and find that "truncated instances" (i.e., those at the border regions of each image) are the main bottleneck hindering the performance of DETR3D. Although it merges multiple features from two adjacent views in the overlapping regions, DETR3D still suffers from insufficient feature aggregation and thus misses the chance to fully boost detection performance. To solve this problem, we propose Graph-DETR3D to automatically aggregate multi-view image information through graph structure learning (GSL). It constructs a dynamic 3D graph between each object query and the 2D feature maps to enhance object representations, especially in the border regions. Besides, Graph-DETR3D benefits from a novel depth-invariant multi-scale training strategy, which maintains visual depth consistency by simultaneously scaling the image size and the object depth. Extensive experiments on the nuScenes dataset demonstrate the effectiveness and efficiency of our Graph-DETR3D. Notably, our best model achieves 49.5 NDS on the nuScenes test leaderboard, setting a new state of the art in comparison with various published image-view 3D object detectors.
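The depth-invariant multi-scale training strategy can be sketched as follows: when the image (and its 2D boxes) is resized by a factor, every object appears that much larger, i.e. that much closer, so the depth labels are divided by the same factor to keep the apparent-size/depth relationship consistent. This is an illustrative sketch under that reading, not the authors' code:

```python
def rescale_sample(image_hw, boxes_2d, depths, scale):
    """Depth-invariant rescaling sketch (names illustrative).

    Resizing the image by `scale` makes every object appear `scale`
    times larger, i.e. `scale` times closer, so depth labels are
    divided by the same factor to keep visual depth cues consistent.
    """
    h, w = image_hw
    new_hw = (round(h * scale), round(w * scale))
    new_boxes = [[c * scale for c in box] for box in boxes_2d]
    new_depths = [d / scale for d in depths]
    return new_hw, new_boxes, new_depths
```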
Pre-training has become a standard paradigm in many computer vision tasks. However, most methods are designed on the RGB image domain. Due to the discrepancy between the two-dimensional image plane and three-dimensional space, such pre-trained models fail to perceive spatial information and serve as sub-optimal solutions for 3D-related tasks. To bridge this gap, we aim to learn a spatial-aware visual representation that can describe three-dimensional space and is more suitable and effective for these tasks. To leverage point clouds, which are far superior to images in providing spatial information, we propose a simple yet effective 2D image and 3D point cloud unsupervised pre-training strategy called SimIPU. Specifically, we develop a multi-modal contrastive learning framework consisting of an intra-modal spatial perception module, which learns a spatial-aware representation from point clouds, and an inter-modal feature interaction module, which transfers the capability of perceiving spatial information from the point cloud encoder to the image encoder. Positive pairs for the contrastive loss are established by a matching algorithm and the projection matrix. The whole framework is trained in an unsupervised, end-to-end fashion. To the best of our knowledge, this is the first study to explore contrastive learning pre-training strategies for outdoor multi-modal datasets containing paired camera images and LiDAR point clouds. Code and models are available at https://github.com/zhever/simipu.
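The projection-matrix step that establishes positive pairs can be sketched with a plain pinhole projection: each LiDAR point is mapped through a 3x4 matrix to a pixel, and the point feature is paired with the image feature at that pixel. The function below is an assumed illustration of that correspondence step, not SimIPU's code:

```python
def project_points(points, proj):
    """Project 3D points to the image plane with a 3x4 projection matrix.

    The resulting (u, v) pixels index the image features that form
    positive pairs with the corresponding point features.  Points behind
    the camera are dropped.
    """
    pixels = []
    for x, y, z in points:
        uh = [proj[r][0] * x + proj[r][1] * y + proj[r][2] * z + proj[r][3]
              for r in range(3)]
        if uh[2] > 1e-6:  # keep only points in front of the camera
            pixels.append((uh[0] / uh[2], uh[1] / uh[2]))
    return pixels
```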
Benefiting from the intrinsic supervision information exploitation capability, contrastive learning has achieved promising performance in the field of deep graph clustering recently. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms from further improvement. 1) The quality of positive samples heavily depends on the carefully designed data augmentations, while inappropriate data augmentations would easily lead to the semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable for ignoring important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) by mining the intrinsic supervision information in the high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in two views. Moreover, to construct semantic meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function to pull close the samples from the same cluster while pushing away those from other clusters by maximizing and minimizing the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with the existing state-of-the-art algorithms.
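A simplified surrogate of the cluster-guided objective can be written down directly: pull the two view embeddings of each node together, and push each embedding away from the centers of the *other* high-confidence clusters. The loss form below is an illustrative simplification, not the exact CCGC objective:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return num / den

def cluster_contrastive_loss(z1, z2, labels, centers):
    """Simplified cluster-guided contrastive objective (illustrative).

    For node i, (z1[i], z2[i]) is the cross-view positive pair, and the
    centers of the other high-confidence clusters act as negatives:
    maximize the positive similarity, minimize the negative ones.
    """
    loss = 0.0
    for i, c in enumerate(labels):
        pos = cosine(z1[i], z2[i])
        negs = [cosine(z1[i], centers[k])
                for k in range(len(centers)) if k != c]
        loss += -pos + sum(negs) / len(negs)
    return loss / len(labels)
```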
As one of the prevalent methods for building automation systems, Imitation Learning (IL) shows promising performance in a wide range of domains. However, despite considerable improvements in policy performance, research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model from randomly masked demonstrations and uses the conventional evaluation outcome, the environment return, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions concerning frames' importance equality, the effectiveness of the importance map, and the connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
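The masking-and-retraining loop described above can be sketched as follows, where `train_and_eval` is an assumed user-supplied callable that retrains the black-box IL model on the unmasked demonstration frames and reports the resulting environment return:

```python
import random

def importance_map(num_frames, train_and_eval, n_iters=200, p_keep=0.5, seed=0):
    """R2RISE-style frame attribution sketch.

    `train_and_eval(mask)` is an assumed callable that retrains the
    black-box IL policy on the demonstration frames where mask == 1 and
    returns the resulting environment return.  Each frame accumulates
    its mask value weighted by that return, so frames whose inclusion
    correlates with high returns end up with high importance.
    """
    rng = random.Random(seed)
    scores = [0.0] * num_frames
    for _ in range(n_iters):
        mask = [1 if rng.random() < p_keep else 0 for _ in range(num_frames)]
        ret = train_and_eval(mask)
        for i, kept in enumerate(mask):
            scores[i] += kept * ret
    total = sum(scores) or 1.0          # normalize to an importance map
    return [s / total for s in scores]
```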
Increasing research interest focuses on sequential recommender systems, which aim to model dynamic sequence representations precisely. However, the loss functions most commonly used in state-of-the-art sequential recommendation models have essential limitations. To name a few: Bayesian Personalized Ranking (BPR) loss suffers from the vanishing gradient problem caused by numerous negative samples and prediction biases; Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples, and is therefore likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only focuses on the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose to calculate the Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, enjoying the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, yields 125.63%, 69.90%, and 33.24% average improvements in full-ranking NDCG@5, respectively. Using CCE, the performance curve of the models on the test data rises rapidly with wall-clock time, and is superior to that of other loss functions throughout almost the whole training process.
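The CCE loss itself is easy to state: compute the full-softmax cross-entropy at every timestamp of the training sequence and average, instead of using only the last step. A minimal sketch over raw logits (function and argument names assumed):

```python
import math

def cce_loss(logits_seq, targets):
    """Cumulative Cross-Entropy sketch: average the full-softmax
    cross-entropy over every timestamp instead of only the last one.

    logits_seq[t] holds the scores over all items at step t; targets[t]
    is the index of the ground-truth next item.
    """
    total = 0.0
    for logits, y in zip(logits_seq, targets):
        m = max(logits)                   # numerically stable log-sum-exp
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[y] - log_z)     # -log softmax(logits)[y]
    return total / len(targets)
```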
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scene, low image resolution and noise interference are new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by combining the super-resolution network. (2) Using generated sample pairs to simulate quality variance distributions to help contrastive learning strategies obtain robust feature representation under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, a large number of experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
Unsupervised domain adaptation (UDA) via deep learning has attracted appealing attention for tackling domain-shift problems caused by distribution discrepancy across different domains. Existing UDA approaches highly depend on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission cost, and computation burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which perform knowledge transfer from a pre-trained source model to unlabeled target domain with source data inaccessible. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on different learning strategies they use. We also investigate the challenges of methods in each subcategory, discuss the advantages/disadvantages of white-box and black-box SFUDA methods, conclude the commonly used benchmark datasets, and summarize the popular techniques for improved generalizability of models learned without using source data. We finally discuss several promising future directions in this field.
Most existing text-video retrieval methods focus on cross-modal matching between the visual content of offline videos and textual query sentences. However, in real scenarios, online videos are frequently accompanied by relevant text information such as titles, tags, and even subtitles, which can be utilized to match textual queries. This inspires us to generate associated captions from offline videos to help with existing text-video retrieval methods. To do so, we propose to use the zero-shot video captioner with knowledge of pre-trained web-scale models (e.g., CLIP and GPT-2) to generate captions for offline videos without any training. Given the captions, one question naturally arises: what can auxiliary captions do for text-video retrieval? In this paper, we present a novel framework Cap4Video, which makes use of captions from three aspects: i) Input data: The video and captions can form new video-caption pairs as data augmentation for training. ii) Feature interaction: We perform feature interaction between video and caption to yield enhanced video representations. iii) Output score: The Query-Caption matching branch can be complementary to the original Query-Video matching branch for text-video retrieval. We conduct thorough ablation studies to demonstrate the effectiveness of our method. Without any post-processing, our Cap4Video achieves state-of-the-art performance on MSR-VTT (51.4%), VATEX (66.6%), MSVD (51.8%), and DiDeMo (52.0%).
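The output-score aspect (iii) reduces to a convex combination of the two matching branches per candidate video; `alpha` below is an assumed fusion weight, not a value from the paper:

```python
def rank_videos(query_video_sims, query_caption_sims, alpha=0.8):
    """Rank candidates by the fused Query-Video / Query-Caption score.

    `alpha` is an assumed fusion weight (not a value from the paper);
    alpha = 1.0 recovers the original Query-Video ranking.
    """
    fused = [alpha * v + (1.0 - alpha) * c
             for v, c in zip(query_video_sims, query_caption_sims)]
    return sorted(range(len(fused)), key=lambda i: fused[i], reverse=True)
```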