Fusing LiDAR and camera information is essential for accurate and reliable 3D object detection in autonomous driving systems. This is challenging, however, due to the difficulty of combining multi-granularity geometric and semantic features from two drastically different modalities. Recent approaches aim to exploit the semantic density of camera features by lifting points in 2D camera images (referred to as seeds) into 3D space, and they can be roughly divided into 1) early fusion of raw points, which aims to augment the 3D point cloud at the early input stage, and 2) late fusion in BEV (bird's-eye view), which merges the LiDAR and camera BEV features before the detection head. Although both have their merits in enhancing the representation ability of the joint features, such single-level fusion strategies are suboptimal for the challenge above: their main drawback is the inability to sufficiently interact the multi-granularity semantic features from the two distinct modalities. To this end, we propose a novel framework that focuses on multi-scale progressive interaction of multi-granularity LiDAR and camera features. Our proposed method, abbreviated as MDMSFusion, achieves state-of-the-art results in 3D object detection, with 69.1 mAP and 71.8 NDS on the nuScenes validation set and 70.8 mAP and 73.2 NDS on the nuScenes test set, which rank first and second in terms of mAP and NDS among single-model non-ensemble approaches at the time of submission.
In this technical report, we present our submission to the Waymo 3D detection leaderboard. Our network is based on the CenterPoint architecture, but with significant improvements. We design a 2D backbone that exploits multi-scale features for better detection of objects of various sizes, together with an optimal-transport-based target assignment strategy that dynamically allocates richer supervision signals to the candidates. We also adopt test-time augmentation and model ensembling for further improvement. Our submission currently ranks fourth on the Waymo 3D detection leaderboard with 78.45 mAPH.
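The optimal-transport view of target assignment can be sketched with a generic entropic-regularized Sinkhorn solver: rows are ground-truth objects (plus background), columns are candidate predictions, and the transport plan distributes supervision. The cost matrix, marginals, and hyperparameters below are illustrative placeholders, not the report's actual design:

```python
import numpy as np

def sinkhorn(cost, row_marg, col_marg, eps=0.1, iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    cost:     (M, N) assignment cost (e.g. cls + loc loss per gt/candidate pair)
    row_marg: (M,) supervision mass each ground-truth object supplies
    col_marg: (N,) mass each candidate receives
    """
    k = np.exp(-cost / eps)
    u = np.ones(len(row_marg))
    for _ in range(iters):
        v = col_marg / (k.T @ u)   # scale columns to their marginal
        u = row_marg / (k @ v)     # scale rows to their marginal
    return u[:, None] * k * v[None, :]  # transport plan

# Toy example: two objects, two candidates, each candidate cheap for one object.
cost = np.array([[0.0, 10.0], [10.0, 0.0]])
plan = sinkhorn(cost, np.ones(2), np.ones(2))
```

The plan concentrates mass on low-cost pairs, so each candidate is dynamically supervised by the object it matches best.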
3D dense captioning is a recently proposed novel task, in which point clouds contain more geometric information than their 2D counterparts. However, it is also more challenging, due to the higher complexity and wider variety of object relations contained in point clouds. Existing methods only treat such relations as by-products of object feature learning in graphs, without specifically encoding them, which leads to sub-optimal results. In this paper, aiming to improve 3D dense captioning by capturing and utilizing the complex relations in 3D scenes, we propose MORE, a Multi-Order RElation mining model, to support generating more descriptive and comprehensive captions. Technically, MORE encodes object relations in a progressive manner, since complex relations can be deduced from a limited number of basic ones. We first devise a novel Spatial Layout Graph Convolution (SLGC), which encodes several first-order relations as edges of a graph constructed over 3D object proposals. Next, from the resulting graph, we further extract multiple triplets that encapsulate basic first-order relations as the basic units, and construct several Object-centric Triplet Attention Graphs (OTAG) to infer multi-order relations for every target object. The updated node features from OTAG are aggregated and fed into the caption decoder to provide rich relational cues, so that captions including diverse relations with context objects can be generated. Extensive experiments on the Scan2Cap dataset demonstrate the effectiveness of our proposed MORE and its components, and we also outperform the current state-of-the-art method. Our code is available at https://github.com/sxjyjay/more.
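A minimal sketch, in the spirit of SLGC: each directed edge between two proposals receives a discrete first-order spatial label derived from their centers, and messages are transformed by a per-relation weight matrix before aggregation. The relation labeling and the residual-mean aggregation here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def spatial_relation(ci, cj):
    # Toy first-order relation label: the dominant displacement axis
    # and its sign, giving 6 discrete relation types in 3D.
    d = cj - ci
    axis = int(np.argmax(np.abs(d)))              # 0:x, 1:y, 2:z
    return axis * 2 + (0 if d[axis] >= 0 else 1)

def slgc_layer(feats, centers, weights):
    """One relation-aware graph convolution over object proposals.

    feats:   (N, D) proposal features
    centers: (N, 3) proposal centers
    weights: (R, D, D) one weight matrix per relation type
    """
    n, _ = feats.shape
    out = np.zeros_like(feats)
    for i in range(n):
        msgs = [feats[j] @ weights[spatial_relation(centers[i], centers[j])]
                for j in range(n) if j != i]
        out[i] = feats[i] + np.mean(msgs, axis=0)  # residual aggregation
    return np.maximum(out, 0.0)                    # ReLU

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 16))
centers = rng.normal(size=(5, 3))
weights = rng.normal(scale=0.1, size=(6, 16, 16))
updated = slgc_layer(feats, centers, weights)
```

The updated node features would then feed the triplet-extraction stage described above.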
Given a text description, Temporal Language Grounding (TLG) aims to localize the temporal boundaries of the segments containing the specified semantics in an untrimmed video. TLG is inherently a challenging task, as it requires a comprehensive understanding of both sentence semantics and video content. Previous works either tackle this task in a fully supervised setting that requires a large number of temporal annotations, or in a weakly supervised setting that usually cannot achieve satisfactory performance. Since manual annotation is costly, to cope with limited annotations we tackle TLG in a semi-supervised manner by incorporating self-supervised learning, and propose Self-Supervised Semi-Supervised Temporal Language Grounding (S^4TLG). S^4TLG consists of two parts: (1) a pseudo-label generation module that adaptively produces instant pseudo labels for unlabeled samples based on the predictions of a teacher model; (2) a self-supervised feature learning module with inter-modal and intra-modal contrastive losses, which learns video feature representations under the constraints of video content consistency and video-text alignment. We conduct extensive experiments on the ActivityNet-CD-OOD and Charades-CD-OOD datasets. The results demonstrate that our proposed S^4TLG can achieve competitive performance compared to fully supervised state-of-the-art methods while requiring only a small fraction of the temporal annotations.
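The pseudo-label generation module can be sketched as confidence-based filtering of a teacher model's proposal scores. The adaptive threshold below (a median-based cutoff) is a stand-in assumption, since the abstract does not specify the actual adaptation rule:

```python
import numpy as np

def generate_pseudo_labels(teacher_scores, base_tau=0.5):
    """Adaptively keep teacher predictions as instant pseudo labels.

    teacher_scores: (N, T) confidence of the teacher model over T temporal
    proposals for each of N unlabeled videos. Returns, per video, either
    (proposal_index, confidence) or None when no prediction is trusted.
    """
    # Adaptive cutoff: never below base_tau, raised when the teacher is
    # broadly confident (hypothetical scheme for illustration).
    tau = max(base_tau, float(np.median(teacher_scores)))
    labels = []
    for scores in teacher_scores:
        best = int(np.argmax(scores))
        if scores[best] >= tau:
            labels.append((best, float(scores[best])))  # accept as pseudo label
        else:
            labels.append(None)                          # leave unlabeled
    return labels
```

Accepted pseudo labels would then supervise the student alongside the contrastive objectives.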
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data collection in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotations and diversified relations. In MGTAB, we extract the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we perform a thorough evaluation of MGTAB and other public datasets. Our experiments find that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
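The feature-selection step, ranking user property features by information gain with respect to the account label, can be sketched as follows (assuming discretized features; `top_k_features` is a hypothetical helper name):

```python
import numpy as np

def entropy(y):
    # Shannon entropy in bits of a vector of integer labels.
    p = np.bincount(y) / len(y)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(feature, y):
    """IG(y; feature) = H(y) - H(y | feature) for one discrete feature."""
    gain = entropy(y)
    for v in np.unique(feature):
        mask = feature == v
        gain -= mask.mean() * entropy(y[mask])
    return gain

def top_k_features(X, y, k):
    # Rank the columns of X (users x features) by information gain.
    gains = [information_gain(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(gains)[::-1][:k]
```

A feature that perfectly separates bots from humans attains the full label entropy as its gain, so it is ranked first.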
As one of the prevalent methods to achieve automation systems, Imitation Learning (IL) presents promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose R2RISE, a model-agnostic explanation framework for IL models. R2RISE aims to explain the overall policy performance with respect to the frames in the demonstrations. It iteratively retrains the black-box IL model from randomly masked demonstrations and uses the conventional evaluation outcome, i.e., environment returns, as coefficients to build an importance map. We also conduct experiments to investigate three major questions concerning the equality of frames' importance, the effectiveness of the importance map, and the connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
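The importance-map construction is RISE-style: sample random binary masks over demonstration frames, retrain and evaluate the policy per mask, and accumulate the returns onto the kept frames. A minimal sketch, with the expensive retrain-and-evaluate loop abstracted into a callback (names and the normalization scheme are illustrative assumptions):

```python
import numpy as np

def r2rise_importance(num_frames, num_masks, train_and_eval,
                      p_keep=0.5, seed=0):
    """RISE-style frame importance for a black-box IL policy (a sketch).

    train_and_eval: callback taking a binary frame mask, retraining the
    IL model on the masked demonstrations, and returning the environment
    return. This stands in for the costly retraining loop.
    """
    rng = np.random.default_rng(seed)
    importance = np.zeros(num_frames)
    coverage = np.zeros(num_frames)
    for _ in range(num_masks):
        mask = (rng.random(num_frames) < p_keep).astype(float)
        ret = train_and_eval(mask)       # environment return as coefficient
        importance += ret * mask         # credit the frames that were kept
        coverage += mask
    return importance / np.maximum(coverage, 1.0)  # normalize by exposure
```

With a toy callback that rewards keeping frame 0, the map singles that frame out, which is exactly the behavior the framework relies on.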
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. In terms of temporal artifacts, self-attention based TimeSFormer is improved to detect temporal artifacts. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive the tractable reformulation for our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
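One plausible reading of the weighted mean-percentile objective under Wasserstein reward ambiguity, with weight $\lambda \in [0,1]$, risk level $\alpha$, and radius $\theta$ as assumed notation (the paper's exact formulation may differ):

```latex
\max_{\pi}\;\inf_{P \in \mathcal{P}}
\Big\{ \lambda\,\mathbb{E}_{P}\big[R^{\pi}\big]
  + (1-\lambda)\,\operatorname{VaR}_{\alpha}^{P}\big[R^{\pi}\big] \Big\},
\qquad
\mathcal{P} = \big\{ P : W\big(P,\widehat{P}\big) \le \theta \big\}
```

where $R^{\pi}$ is the random return under policy $\pi$, $\widehat{P}$ the nominal reward distribution, and $W$ the Wasserstein distance; setting $\lambda = 1$ recovers a distributionally robust MDP, while $\lambda = 0$ recovers the percentile (chance-constrained) case.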
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and varying nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
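The photometric error used in the second stage is, in the self-supervised depth/pose literature, typically an SSIM+L1 mix between the synthesized (warped) frame and the observed frame. A simplified channel-wise sketch in the Monodepth style; PPGeo's exact loss may differ, and a full implementation would use a spatially windowed SSIM:

```python
import numpy as np

def photometric_error(pred, target, alpha=0.85):
    """Per-pixel photometric error: alpha * (1 - SSIM)/2 + (1-alpha) * L1.

    pred, target: (H, W, C) images in [0, 1]. SSIM statistics are taken
    over the channel dimension only, a simplification of the usual
    windowed formulation, to keep the sketch short.
    """
    l1 = np.abs(pred - target).mean(axis=-1)
    mu_p, mu_t = pred.mean(axis=-1), target.mean(axis=-1)
    var_p, var_t = pred.var(axis=-1), target.var(axis=-1)
    cov = ((pred - mu_p[..., None]) * (target - mu_t[..., None])).mean(axis=-1)
    c1, c2 = 0.01 ** 2, 0.03 ** 2  # standard SSIM stabilizers
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return alpha * (1 - ssim) / 2 + (1 - alpha) * l1
```

Minimizing this error over predicted ego-motion (and depth) is what drives the encoder toward policy-relevant representations without any labels.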