Recently, a variety of vision Transformers have been developed for their capability of modeling long-range dependencies. In current Transformer-based backbones for medical image segmentation, convolutional layers are replaced with pure Transformers, or Transformers are added to the deepest encoder to learn the global context. However, there are mainly two challenges from a scale-wise perspective: (1) intra-scale problem: existing methods lack the ability to extract local-global cues at each scale, which may affect the signal propagation of small objects; (2) inter-scale problem: existing methods fail to explore distinctive information from multiple scales, which may hinder representation learning for objects with a wide range of sizes, shapes, and locations. To address these limitations, we propose a novel backbone, namely ScaleFormer, with two appealing designs: (1) a scale-wise intra-scale Transformer is designed to couple CNN-based local features with Transformer-based global cues at each scale, where row-wise and column-wise global dependencies can be extracted by a lightweight dual-axis MSA; (2) a simple yet effective spatial-aware inter-scale Transformer is designed to interact with consensus regions across multiple scales, which can highlight cross-scale dependencies and resolve complex scale variations. Experimental results on different benchmarks demonstrate that our ScaleFormer outperforms the current state-of-the-art methods. The code is publicly available at: https://github.com/zjugivelab/scaleformer.
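For concreteness, the dual-axis attention idea can be sketched as follows. This is a minimal PyTorch illustration under our own assumptions (tensor layout, head count, use of nn.MultiheadAttention), not the ScaleFormer implementation: row-wise attention runs over the W tokens of each row and column-wise attention over the H tokens of each column, so attention cost scales with H+W per position rather than H*W.

```python
import torch
import torch.nn as nn

class DualAxisMSA(nn.Module):
    """Sketch of lightweight dual-axis attention: attend along rows, then columns."""
    def __init__(self, dim, num_heads=4):           # dim must be divisible by num_heads
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                           # x: (B, H, W, C) feature map
        b, h, w, c = x.shape
        # Row-wise attention: each of the B*H rows is a sequence of W tokens.
        rows = x.reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c)
        # Column-wise attention: each of the B*W columns is a sequence of H tokens.
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 2, 1, 3)
```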
We present a novel mesh-based learning approach (N-Cloth) for plausible 3D cloth deformation prediction. Our approach is general and can handle cloth or obstacles represented by triangle meshes with arbitrary topologies. We use graph convolution to transform the cloth and object meshes into a latent space to reduce the nonlinearity in the mesh space. Our network can predict the target 3D cloth mesh deformation based on the state of the initial cloth mesh template and the target obstacle mesh. Our approach can handle complex cloth meshes with up to $100K$ triangles and scenes with various objects corresponding to SMPL humans, non-SMPL humans, or rigid bodies. In practice, our approach demonstrates good temporal coherence between successive input frames and can be used to generate plausible cloth simulation at $30{-}45$ fps on an NVIDIA GeForce RTX 3090 GPU. We highlight its benefits over prior learning-based methods and physics-based cloth simulators.
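As a rough sketch of the graph-convolution step described above (encoding mesh vertices by aggregating neighbor features), the layer below is a generic mean-aggregation graph convolution in PyTorch. The layer widths, activation, and aggregation rule are assumptions for illustration, not the N-Cloth architecture.

```python
import torch
import torch.nn as nn

class MeshGraphConv(nn.Module):
    """One graph-convolution layer over mesh vertices (mean aggregation over edges)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, verts, edges):
        # verts: (V, in_dim) per-vertex features.
        # edges: (E, 2) vertex index pairs; each undirected mesh edge listed in both directions.
        src, dst = edges[:, 0], edges[:, 1]
        agg = torch.zeros_like(verts)
        agg.index_add_(0, dst, verts[src])                    # sum neighbor features
        deg = torch.zeros(verts.size(0), 1, device=verts.device)
        deg.index_add_(0, dst, torch.ones(edges.size(0), 1, device=verts.device))
        agg = agg / deg.clamp(min=1)                          # mean over neighbors
        return torch.relu(self.lin_self(verts) + self.lin_neigh(agg))
```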
3D Morphable Models (3DMMs) are generative models of face shape and appearance. However, the shape parameters of traditional 3DMMs satisfy a multivariate Gaussian distribution, while the identity embeddings satisfy a hypersphere distribution, and this conflict makes it challenging for face reconstruction models to preserve faithfulness and shape consistency at the same time. To address this issue, we propose the Sphere Face Model (SFM), a novel 3DMM for monocular face reconstruction that can maintain both shape fidelity and identity consistency. The core of our SFM is a basis matrix which can be used to reconstruct 3D face shapes, and this basis matrix is learned through a two-stage training approach in which 3D and 2D training data are used in the first and second stages, respectively. To resolve the distribution mismatch, we design a novel loss that gives the shape parameters a hyperspherical latent space. Extensive experiments show that SFM has high representation ability and strong clustering performance in its shape parameter space. Moreover, it produces high-fidelity face shapes, and the shapes remain consistent under challenging conditions in monocular face reconstruction.
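The hyperspherical parameterization can be illustrated with the toy PyTorch snippet below: shape parameters are L2-normalized onto the unit hypersphere before being combined with a basis matrix, and a simple cosine-based term keeps same-identity parameters close. Function names, shapes, and the exact loss form are hypothetical; the paper's actual loss design may differ.

```python
import torch
import torch.nn.functional as F

def reconstruct_shape(alpha, basis, mean_shape):
    """Reconstruct 3D face shapes from hyperspherical shape parameters.

    alpha:      (B, K)  raw shape parameters predicted by an encoder
    basis:      (3N, K) learned basis matrix
    mean_shape: (3N,)   mean face shape
    """
    alpha = F.normalize(alpha, dim=-1)        # project parameters onto the unit hypersphere
    return mean_shape + alpha @ basis.t()     # (B, 3N) flattened vertex coordinates

def identity_consistency_loss(alpha_a, alpha_b, same_identity):
    """Cosine-based toy loss: pull same-identity parameters together, push others apart."""
    cos = F.cosine_similarity(F.normalize(alpha_a, dim=-1),
                              F.normalize(alpha_b, dim=-1), dim=-1)
    return torch.where(same_identity, 1.0 - cos, torch.clamp(cos, min=0.0)).mean()
```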
Although U-Net has achieved tremendous success in medical image segmentation tasks, it lacks the ability to explicitly model long-range dependencies. Therefore, Vision Transformers have recently emerged as alternative segmentation structures, owing to their innate ability to capture long-range correlations through self-attention (SA). However, Transformers usually rely on large-scale pre-training and have high computational complexity. Furthermore, SA can only model self-affinities within a single sample, ignoring the potential correlations across the entire dataset. To address these problems, we propose a novel Transformer module named Mixed Transformer Module (MTM) for simultaneous inter- and intra-affinity learning. MTM first calculates self-affinities efficiently through our well-designed Local-Global Gaussian-Weighted Self-Attention (LGG-SA). Then, it mines inter-connections between data samples through External Attention (EA). Using MTM, we construct a U-shaped model named Mixed Transformer U-Net (MT-UNet) for accurate medical image segmentation. We evaluate our method on two different public datasets, and the experimental results show that the proposed method achieves better performance than other state-of-the-art methods. The code is available at: https://github.com/dootmaan/mt-unet.
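External Attention (EA) replaces token-to-token attention with attention against a small learnable memory shared across the dataset, which is what allows inter-sample connections to be mined. The snippet below is a generic sketch of that mechanism (memory size and normalization details are assumptions), not the MT-UNet code.

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """Sketch of external attention: tokens attend to a small learnable memory
    shared across the whole dataset instead of attending to each other."""
    def __init__(self, dim, mem_size=64):
        super().__init__()
        self.mk = nn.Linear(dim, mem_size, bias=False)   # memory "keys"
        self.mv = nn.Linear(mem_size, dim, bias=False)   # memory "values"

    def forward(self, x):                   # x: (B, N, dim) token features
        attn = self.mk(x)                   # (B, N, mem_size)
        attn = torch.softmax(attn, dim=1)   # normalize over tokens
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-6)  # double normalization
        return self.mv(attn)                # (B, N, dim)
```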
Many recent works reconstruct distinctive 3D face shapes by clustering the shape parameters of the same identity together and separating those of different people, based on parametric models (e.g., 3D Morphable Models (3DMMs)). However, despite the high accuracy of face recognition using these shape parameters, the visual discrimination of the face shapes reconstructed from those parameters is unsatisfactory. The following research question has not been answered: do discriminative shape parameters guarantee the visual discrimination of the represented 3D face shapes? This paper analyzes the relationship between shape parameters and reconstructed shape geometry, and proposes a novel shape identity-aware regularization (SIR) loss on shape parameters, aiming at increasing discriminability in both the shape parameter and shape geometry domains. Moreover, to cope with the lack of training data containing both landmark and identity annotations, we propose a network structure and an associated training strategy to leverage mixed data containing either identity or landmark labels. We compare our method with existing methods in terms of reconstruction error, visual discriminability, and face recognition accuracy of the shape parameters. Experimental results show that our method outperforms state-of-the-art methods.
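To make "discriminability in both the parameter and geometry domains" concrete, the snippet below shows one way such an identity-aware regularizer could look in PyTorch: the same pull-together/push-apart term is applied to the shape parameters and to the reconstructed vertices. This is purely illustrative and is not the SIR loss defined in the paper.

```python
import torch
import torch.nn.functional as F

def sir_style_loss(params, verts, labels, margin=0.2):
    """Illustrative identity-aware regularization in two domains:
    shape parameters (params: B x K) and reconstructed geometry (verts: B x 3N)."""
    def pairwise_term(feat):
        feat = F.normalize(feat, dim=-1)
        sim = feat @ feat.t()                              # cosine similarity matrix
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        eye = torch.eye(len(labels), dtype=torch.bool, device=feat.device)
        pos = (1.0 - sim)[same & ~eye]                     # same identity: raise similarity
        neg = torch.clamp(sim - margin, min=0.0)[~same]    # different identity: bound similarity
        pos_loss = pos.mean() if pos.numel() else feat.new_zeros(())
        neg_loss = neg.mean() if neg.numel() else feat.new_zeros(())
        return pos_loss + neg_loss
    return pairwise_term(params) + pairwise_term(verts)
```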
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains tiny. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators throughout three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments to objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
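One common remedy for the class-imbalance problem mentioned above is class-balanced sampling; the PyTorch sketch below uses WeightedRandomSampler for that purpose. It is a generic baseline for illustration, not necessarily the solution evaluated on DFME.

```python
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader

def balanced_loader(dataset, labels, batch_size=32):
    """Class-balanced sampling for imbalanced micro-expression categories."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels)
    sample_weights = 1.0 / class_counts[labels].float()   # rarer classes are drawn more often
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```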
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased towards PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples part features and things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ and design a new part-whole cross-attention scheme to further boost part segmentation quality. We design a new part-whole interaction method using masked cross attention. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
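The masked cross-attention idea (part queries attending only inside regions predicted by the whole-level branch) can be sketched as follows. Shapes, the 0.5 threshold, and the head count are assumptions; this is not the Panoptic-PartFormer++ implementation.

```python
import torch
import torch.nn as nn

class MaskedCrossAttention(nn.Module):
    """Sketch: part queries attend to pixel features, restricted by whole-level masks."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, part_queries, pixel_feats, whole_masks):
        # part_queries: (B, Q, dim), pixel_feats: (B, HW, dim)
        # whole_masks:  (B, Q, HW) soft masks from the whole-level (thing/stuff) branch
        attn_mask = (whole_masks.sigmoid() < 0.5)                               # True = blocked
        attn_mask = attn_mask.repeat_interleave(self.attn.num_heads, dim=0)     # (B*heads, Q, HW)
        attn_mask[attn_mask.all(dim=-1)] = False                                # avoid fully masked queries
        out, _ = self.attn(part_queries, pixel_feats, pixel_feats, attn_mask=attn_mask)
        return out
```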
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
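To illustrate what an explicit multi-modal alignment between object queries and caption nouns might look like, here is a toy contrastive alignment term in PyTorch. It only conveys the general idea; the actual CGG grounding loss is defined differently in the paper.

```python
import torch
import torch.nn.functional as F

def grounding_loss(query_embeds, noun_embeds, temperature=0.07):
    """Toy grounding term: align object-query embeddings with the embeddings
    of object nouns extracted from the image caption.

    query_embeds: (Q, D) mask/query embeddings from the segmenter
    noun_embeds:  (T, D) text embeddings of caption object nouns
    """
    q = F.normalize(query_embeds, dim=-1)
    t = F.normalize(noun_embeds, dim=-1)
    sim = q @ t.t() / temperature                       # (Q, T) similarity matrix
    # Each noun should be claimed by at least one query: encourage its
    # best-matching query to dominate the noun's distribution over queries.
    noun_to_query = sim.softmax(dim=0).max(dim=0).values
    return -noun_to_query.clamp(min=1e-6).log().mean()
```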
New architecture GPUs like A100 are now equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but efficiently utilizing it can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released on https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
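The kind of micro-benchmark such a study runs on each MIG instance can be as simple as the throughput probe below (plain PyTorch, not the MIGPerf tool itself); a specific MIG instance is usually selected by pointing CUDA_VISIBLE_DEVICES at its UUID before launching the script.

```python
import time
import torch
import torchvision

def inference_throughput(batch_size=32, iters=100, warmup=10):
    """Measure ResNet-50 inference throughput on whichever (MIG) device is visible."""
    model = torchvision.models.resnet50().eval().cuda()
    x = torch.randn(batch_size, 3, 224, 224, device="cuda")
    with torch.no_grad():
        for _ in range(warmup):          # warm up kernels and caches
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return batch_size * iters / (time.time() - start)   # images per second

if __name__ == "__main__":
    print(f"{inference_throughput():.1f} images/s")
```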
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural rendering. Motivated by the fact that informative point cloud features should be able to encode rich geometry and appearance cues and render realistic images, we train a point-cloud encoder within a devised point-based neural renderer by comparing the rendered images with real images on massive RGB-D data. The learned point-cloud encoder can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but low-level tasks like 3D reconstruction and image synthesis. Extensive experiments on various tasks demonstrate the superiority of our approach compared to existing pre-training methods.
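A heavily simplified version of the render-and-compare objective is sketched below: per-point colors predicted from the learned features are splatted into an image and compared with the paired real RGB frame. The crude splatting renderer, the encoder/color_head modules, and the MSE objective are all assumptions for illustration; the paper's point-based neural renderer is more sophisticated.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointSplatRenderer(nn.Module):
    """Very rough point-based renderer: project points through a pinhole camera and
    splat per-point colors into an image grid (differentiable w.r.t. the colors)."""
    def __init__(self, height=128, width=128, focal=100.0):
        super().__init__()
        self.h, self.w, self.f = height, width, focal

    def forward(self, points, colors):
        # points: (N, 3) in camera coordinates with z > 0, colors: (N, 3)
        u = (self.f * points[:, 0] / points[:, 2] + self.w / 2).long().clamp(0, self.w - 1)
        v = (self.f * points[:, 1] / points[:, 2] + self.h / 2).long().clamp(0, self.h - 1)
        img = torch.zeros(self.h, self.w, 3, device=points.device)
        cnt = torch.zeros(self.h, self.w, 1, device=points.device)
        img = img.index_put((v, u), colors, accumulate=True)                       # sum colors per pixel
        cnt = cnt.index_put((v, u), torch.ones_like(points[:, :1]), accumulate=True)
        return img / cnt.clamp(min=1)                                              # average overlapping splats

def render_and_compare(encoder, color_head, points, real_image):
    """Pre-training objective sketch: encode points, decode colors, render, compare."""
    feats = encoder(points)            # (N, C) per-point features from the point-cloud encoder
    pred_colors = color_head(feats)    # (N, 3) predicted RGB per point
    rendered = PointSplatRenderer()(points, pred_colors)
    return F.mse_loss(rendered, real_image)
```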