Depth can provide useful geometry cues for salient object detection (SOD), and has been proven helpful in recent RGB-D SOD methods. However, existing video salient object detection (VSOD) methods only utilize spatiotemporal information and seldom exploit depth information for detection. In this paper, we propose a depth-cooperated trimodal network, called DCTNet, for VSOD, a pioneering work to incorporate depth information to assist VSOD. To this end, we first generate depth from RGB frames, and then propose an approach to treat the three modalities unequally. Specifically, a multi-modal attention module (MAM) is designed to model multi-modal long-range dependencies between the main modality (RGB) and the two auxiliary modalities (depth, optical flow). We also introduce a refinement fusion module (RFM) to suppress noise in each modality and dynamically select useful information for further refinement. Finally, a progressive fusion strategy is adopted after the refined features to achieve final cross-modal fusion. Experiments on five benchmark datasets demonstrate the superiority of our depth-cooperated model over 12 state-of-the-art methods, and also validate the necessity of depth.
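To make the cross-modal attention idea concrete, here is a minimal PyTorch sketch in which the main modality (RGB) supplies queries and an auxiliary modality (depth or optical flow) supplies keys/values. The module name, dimensions, and head count are illustrative assumptions, not the authors' exact MAM design.

```python
# Hypothetical sketch of cross-modal long-range attention: RGB features act
# as queries, an auxiliary modality (depth or flow) supplies keys/values.
# Names and dimensions are illustrative, not the paper's exact MAM design.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, main: torch.Tensor, aux: torch.Tensor) -> torch.Tensor:
        # main, aux: (B, C, H, W) feature maps from the RGB and auxiliary streams
        b, c, h, w = main.shape
        q = main.flatten(2).transpose(1, 2)   # (B, H*W, C) queries from RGB
        kv = aux.flatten(2).transpose(1, 2)   # (B, H*W, C) keys/values from depth/flow
        out, _ = self.attn(q, kv, kv)         # long-range cross-modal dependencies
        out = self.norm(out + q)              # residual keeps the main modality dominant
        return out.transpose(1, 2).reshape(b, c, h, w)

rgb, depth = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16)
fused = CrossModalAttention(64)(rgb, depth)   # -> (2, 64, 16, 16)
```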
3D scenes are dominated by a large number of background points, which is redundant for detection tasks that mainly need to focus on foreground objects. In this paper, we analyze the major components of existing sparse 3D CNNs and find that 3D CNNs ignore the redundancy of the data and further amplify it during down-sampling, which brings a huge amount of extra and unnecessary computational overhead. Inspired by this, we propose a new convolution operator named spatially pruned sparse convolution (SPS-Conv), which includes two variants, spatially pruned submanifold sparse convolution (SPSS-Conv) and spatially pruned regular sparse convolution (SPRS-Conv), both based on the idea of dynamically determining crucial areas for reducing redundancy. We validate that feature magnitude can serve as an important cue for determining crucial areas, which avoids the extra computation of learning-based methods. The proposed modules can easily be incorporated into existing sparse 3D CNNs without extra architectural modifications. Extensive experiments on the KITTI, Waymo, and nuScenes datasets demonstrate that our method can achieve a GFLOPs reduction of more than 50% without compromising performance.
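The magnitude-as-importance idea can be pictured with a toy dense-2D version (the actual operator works on sparse 3D voxel features, e.g., via a sparse-convolution library). The class name and keep ratio below are assumptions for illustration only.

```python
# Toy dense-2D illustration of magnitude-based spatial pruning; the actual
# SPS-Conv operates on sparse 3D voxel features. Names/ratios are assumptions.
import torch
import torch.nn as nn

class MagnitudePrunedConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, keep_ratio: float = 0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-position feature magnitude is the cue for "crucial areas".
        mag = x.abs().mean(dim=1, keepdim=True)              # (B, 1, H, W)
        n = mag[0].numel()
        k = max(1, int(self.keep_ratio * n))
        thresh = mag.flatten(1).kthvalue(n - k + 1, dim=1).values
        mask = (mag >= thresh.view(-1, 1, 1, 1)).float()     # top-k positions survive
        # Here we convolve everywhere and zero pruned outputs; a real sparse
        # kernel would skip those positions entirely, saving the GFLOPs.
        return self.conv(x) * mask

y = MagnitudePrunedConv(32, 64)(torch.randn(2, 32, 56, 56))  # -> (2, 64, 56, 56)
```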
Recent advances in 2D CNNs and vision transformers (ViTs) have shown that large kernels are essential for sufficient receptive fields and high performance. Inspired by this literature, we examine the feasibility and challenges of 3D large-kernel designs. We demonstrate that applying large convolutional kernels in 3D CNNs faces greater difficulties in both performance and efficiency. Existing techniques that work well in 2D CNNs are ineffective in 3D networks, including the popular depth-wise convolutions. To overcome these obstacles, we present the spatial-wise group convolution and its large-kernel module (SW-LK block), which avoids the optimization and efficiency problems of naive 3D large kernels. Our large-kernel 3D CNN network, LargeKernel3D, yields non-trivial improvements on various 3D tasks, including semantic segmentation and object detection. Notably, it achieves 73.9% mIoU on ScanNetv2 semantic segmentation and 72.8% NDS on the nuScenes object detection benchmark, ranking first on the nuScenes LiDAR leaderboard. With simple multi-modal fusion, this is further boosted to 74.2% NDS. LargeKernel3D attains comparable or superior results to its CNN and transformer counterparts. For the first time, we show that large kernels are feasible and essential for 3D networks.
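As a rough illustration of why naive 3D large kernels are costly, the sketch below contrasts a plain 7x7x7 conv with a cheaper stand-in that reuses one shared 3x3x3 weight at several dilations to enlarge the receptive field. This stand-in is our own assumption for illustration; it is not the SW-LK block's actual spatial-wise grouping.

```python
# Illustrative only: the SW-LK block's exact design is not reproduced here.
# We show the naive large-kernel 3D conv the paper argues against, and a
# cheaper weight-sharing stand-in that spreads out the receptive field.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NaiveLargeKernel3D(nn.Module):
    def __init__(self, ch: int, k: int = 7):
        super().__init__()
        # k^3 * ch^2 parameters: expensive and hard to optimize in 3D
        self.conv = nn.Conv3d(ch, ch, k, padding=k // 2)

    def forward(self, x):
        return self.conv(x)

class SharedSmallKernel3D(nn.Module):
    """One 3x3x3 weight applied at multiple dilations, outputs summed."""
    def __init__(self, ch: int, dilations=(1, 2, 3)):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(ch, ch, 3, 3, 3) * 0.01)
        self.dilations = dilations

    def forward(self, x):
        # padding == dilation keeps the spatial size unchanged for a 3x3x3 kernel
        return sum(F.conv3d(x, self.weight, padding=d, dilation=d)
                   for d in self.dilations)

x = torch.randn(1, 16, 24, 24, 24)
y = SharedSmallKernel3D(16)(x)   # same spatial size, ~7x7x7 effective field
```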
In this work, we propose a new 3D object detector with trustworthy depth estimation, dubbed BEVDepth, for camera-based bird's-eye-view (BEV) 3D object detection. Through a thorough analysis of recent approaches, we find that depth estimation is implicitly learned without camera information, making it the de facto fake depth used to create the subsequent pseudo point cloud. BEVDepth obtains explicit depth supervision utilizing encoded intrinsic and extrinsic parameters. A depth correction sub-network is further introduced to counteract projection-induced disturbances in the depth ground truth. To reduce the speed bottleneck while projecting features from image view into BEV using the estimated depth, a fast view-transform operation is also proposed. Besides, our BEVDepth can easily be extended with multi-frame input. Without any bells and whistles, BEVDepth achieves a new state-of-the-art 60.0% NDS on the challenging nuScenes test set while maintaining high efficiency. For the first time, the performance gap between camera and LiDAR is largely reduced to within 10% NDS.
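The explicit depth supervision can be pictured as projecting LiDAR points into the camera with the encoded intrinsics/extrinsics to form a sparse depth ground truth for the depth head. The sketch below shows one plausible way to build such a target; the function name and conventions are assumptions, not BEVDepth's actual implementation.

```python
# Hedged sketch: building sparse per-pixel depth ground truth by projecting
# LiDAR points through extrinsics/intrinsics. Names/conventions are assumed.
import torch

def project_lidar_to_depth(points, extrinsic, intrinsic, h, w):
    # points: (N, 3) LiDAR xyz; extrinsic: (4, 4) lidar->camera; intrinsic: (3, 3)
    pts = torch.cat([points, torch.ones(len(points), 1)], dim=1)   # homogeneous
    cam = (extrinsic @ pts.T).T[:, :3]                             # camera frame
    cam = cam[cam[:, 2] > 0]                                       # keep points in front
    uv = (intrinsic @ cam.T).T
    u = (uv[:, 0] / uv[:, 2]).long()                               # pixel column
    v = (uv[:, 1] / uv[:, 2]).long()                               # pixel row
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth = torch.zeros(h, w)
    depth[v[valid], u[valid]] = cam[valid, 2]   # camera-frame z = metric depth
    return depth  # sparse map; zeros mean "no ground truth at this pixel"

pts = torch.rand(1000, 3) * 20
K = torch.tensor([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
gt = project_lidar_to_depth(pts, torch.eye(4), K, h=480, w=640)
```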
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, which cannot be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve the state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
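One plausible reading of the CLIP-driven design is that each class name's frozen text embedding parameterizes a lightweight per-class segmentation head. The sketch below shows that idea with a 1x1x1 kernel generator; the head design and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of the CLIP-driven idea: a text embedding per organ/tumor
# name generates per-class conv parameters for the segmentation head.
# The prompt template and head design are illustrative assumptions.
import torch
import torch.nn as nn

class CLIPDrivenHead(nn.Module):
    def __init__(self, text_dim: int = 512, feat_dim: int = 48):
        super().__init__()
        # Maps a frozen CLIP text embedding to a 1x1x1 conv kernel + bias
        self.param_gen = nn.Linear(text_dim, feat_dim + 1)

    def forward(self, feats, class_embeds):
        # feats: (B, C, D, H, W) decoder features; class_embeds: (K, text_dim)
        params = self.param_gen(class_embeds)       # (K, C+1)
        w, b = params[:, :-1], params[:, -1]        # per-class kernel and bias
        logits = torch.einsum("bcdhw,kc->bkdhw", feats, w) + b.view(1, -1, 1, 1, 1)
        return logits  # one binary mask logit per class prompt

feats = torch.randn(1, 48, 8, 8, 8)
embeds = torch.randn(31, 512)                 # e.g., 25 organs + 6 tumor types
masks = CLIPDrivenHead()(feats, embeds)       # -> (1, 31, 8, 8, 8)
```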
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, whose goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool for aiding image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
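A minimal single-scale sketch of combining the two objectives, a siamese feature comparison plus a pixel restoration loss, is given below. PCRLv2 applies both across a feature pyramid with a non-skip U-Net and sub-crops; the toy networks and loss weighting here are simplified assumptions.

```python
# Simplified single-scale sketch of PCRLv2's two objectives: siamese feature
# comparison plus pixel restoration. The real framework works on a pyramid
# (non-skip U-Net, sub-crop); these toy nets are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def multi_task_ssl_loss(encoder, decoder, view1, view2, corrupted):
    # Comparative branch: align latent features of two augmented views.
    p1 = F.normalize(encoder(view1).flatten(1), dim=1)
    p2 = F.normalize(encoder(view2).flatten(1), dim=1)
    sim_loss = -(p1 * p2).sum(dim=1).mean()          # negative cosine similarity
    # Restoration branch: recover clean pixels from a corrupted view,
    # forcing pixel-level information into the representation.
    rec_loss = F.mse_loss(decoder(encoder(corrupted)), view1)
    return sim_loss + rec_loss

enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
dec = nn.Conv2d(16, 1, 3, padding=1)
v1, v2 = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
loss = multi_task_ssl_loss(enc, dec, v1, v2, v1 + 0.3 * torch.randn_like(v1))
```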
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
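The parallel decoding that makes Muse faster than autoregressive models can be sketched as confidence-based iterative unmasking under a cosine schedule. The toy model, vocabulary size, and schedule below are placeholders for illustration, not Muse's actual components.

```python
# Hedged sketch of parallel iterative decoding over discrete image tokens,
# the mechanism Muse uses instead of token-by-token generation. The toy
# model, vocabulary, and schedule are placeholders, not Muse's actual ones.
import math
import torch
import torch.nn as nn

class ToyTokenModel(nn.Module):
    """Stand-in for the masked transformer; any token predictor fits here."""
    def __init__(self, vocab: int = 1024, dim: int = 64):
        super().__init__()
        self.emb = nn.Embedding(vocab + 1, dim)   # +1 slot for the [MASK] id
        self.out = nn.Linear(dim, vocab)          # the mask id is never predicted

    def forward(self, tokens, text_emb):
        return self.out(self.emb(tokens) + text_emb)

@torch.no_grad()
def parallel_decode(model, text_emb, seq_len=256, vocab=1024, steps=8):
    mask_id = vocab
    tokens = torch.full((1, seq_len), mask_id)    # start fully masked
    for step in range(1, steps + 1):
        conf, pred = model(tokens, text_emb).softmax(-1).max(-1)
        masked = tokens == mask_id
        conf = torch.where(masked, conf, torch.full_like(conf, -1.0))
        # Cosine schedule: fewer tokens remain masked after each iteration.
        n_masked_next = int(seq_len * math.cos(math.pi / 2 * step / steps))
        n_reveal = int(masked.sum()) - n_masked_next
        if n_reveal > 0:
            idx = conf.topk(n_reveal, dim=1).indices   # most confident slots
            tokens.scatter_(1, idx, pred.gather(1, idx))
    return tokens

out = parallel_decode(ToyTokenModel(), text_emb=torch.randn(1, 1, 64))
```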
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
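The greedy policy can be sketched as a loop in which an amortized value network, trained elsewhere, scores each unqueried feature's estimated conditional mutual information with the label given what has been observed so far, and the best feature is queried next. The network shape and names below are illustrative assumptions.

```python
# Hedged sketch of the greedy dynamic feature selection loop: a value network
# (trained by amortized optimization, not shown) estimates per-feature CMI
# from the observed values and selection mask. Names are illustrative.
import torch
import torch.nn as nn

def select_features(value_net, x, budget):
    # x: (d,) full feature vector; only queried entries are revealed to the net
    d = x.numel()
    mask = torch.zeros(d)                                # 1 = already queried
    for _ in range(budget):
        scores = value_net(torch.cat([x * mask, mask]))  # (d,) estimated CMI
        scores = scores.masked_fill(mask.bool(), float("-inf"))
        mask[scores.argmax()] = 1.0                      # greedily query the best
    return mask

d = 20
value_net = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, d))
chosen = select_features(value_net, torch.randn(d), budget=5)  # 5 features picked
```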
Human parsing aims to partition humans in image or video into multiple pixel-level semantic parts. In the last decade, it has gained significantly increased interest in the computer vision community and has been utilized in a broad range of practical applications, from security monitoring, to social media, to visual special effects, just to name a few. Although deep learning-based human parsing solutions have made remarkable achievements, many important concepts, existing challenges, and potential research directions are still confusing. In this survey, we comprehensively review three core sub-tasks: single human parsing, multiple human parsing, and video human parsing, by introducing their respective task settings, background concepts, relevant problems and applications, representative literature, and datasets. We also present quantitative performance comparisons of the reviewed methods on benchmark datasets. Additionally, to promote sustainable development of the community, we put forward a transformer-based human parsing framework, providing a high-performance baseline for follow-up research through universal, concise, and extensible solutions. Finally, we point out a set of under-investigated open issues in this field and suggest new directions for future study. We also provide a regularly updated project page, to continuously track recent developments in this fast-advancing field: https://github.com/soeaver/awesome-human-parsing.