We propose DiffuStereo, a novel system for high-quality 3D human reconstruction using only sparse cameras (8 in this work). At its core is a novel diffusion-based stereo module, which introduces diffusion models, a type of powerful generative model, into the iterative stereo matching network. To this end, we design a new diffusion kernel and additional stereo constraints to facilitate stereo matching and depth estimation within the network. We further present a multi-level stereo network architecture to handle high-resolution (up to 4K) inputs without an unaffordable memory footprint. Given a set of sparse-view color images of a human, the proposed multi-level diffusion-based stereo network produces highly accurate depth maps, which are then converted into a high-quality 3D human model through an efficient multi-view fusion strategy. Overall, our method automatically reconstructs human models whose quality is on par with those from high-end dense-view camera rigs, achieved with a much more lightweight hardware setup. Experiments show that our method outperforms state-of-the-art methods both qualitatively and quantitatively.
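The diffusion-based stereo module refines depth iteratively with a denoising process. As a rough illustration of the underlying mechanics, the sketch below implements only the standard DDPM forward (noising) step applied to a depth/disparity residual map; DiffuStereo's actual diffusion kernel, conditioning, and reverse process differ, and all names here are assumptions.

```python
import numpy as np

def diffuse(x0, t, alphas_cumprod, rng):
    """Standard DDPM forward step: x_t = sqrt(a_t)*x0 + sqrt(1-a_t)*eps.

    Here x0 stands in for a residual disparity/depth map that the stereo
    module would refine by learning to reverse this noising process.
    (Generic diffusion sketch, not DiffuStereo's exact kernel.)
    """
    a = alphas_cumprod[t]
    eps = rng.normal(size=x0.shape)  # Gaussian noise sample
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps, eps
```

With `alphas_cumprod[t] = 1` no noise is injected and the input is returned unchanged, which is the boundary condition the reverse (denoising) pass relies on.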
We propose FITE, a First-Implicit-Then-Explicit framework for modeling human avatars in clothing. Our framework first learns implicit surface templates representing the coarse clothing topology, and then employs the templates to guide the generation of point sets that further capture pose-dependent clothing deformations such as wrinkles. Our pipeline combines the merits of implicit and explicit representations, namely the ability to handle varying topology and the ability to efficiently capture fine details. We also propose diffused skinning to facilitate template training, especially for loose clothing, and projection-based pose encoding to extract pose information from mesh templates without a predefined UV map or connectivity. Our code is publicly available at https://github.com/jsnln/fite.
To address the ill-posed problem caused by partial observations in monocular human volumetric capture, we present AvatarCap, a novel framework that introduces animatable avatars into the capture pipeline for high-fidelity reconstruction in both visible and invisible regions. Our method first creates an animatable avatar for the subject from a small number (~20) of 3D scans as a prior. Then, given a monocular RGB video of the subject, our method integrates information from both the image observation and the avatar prior, reconstructing high-fidelity 3D textured models with dynamic details regardless of visibility. To learn an effective avatar for volumetric capture from only a few samples, we propose GeoTexAvatar, which leverages both geometry and texture supervision to constrain pose-dependent dynamics in a decomposed implicit manner. An avatar-conditioned volumetric capture method, involving canonical normal fusion and a reconstruction network, is further proposed to integrate image observations and avatar dynamics in both the observed and invisible regions. Overall, our method enables monocular human volumetric capture with detailed, pose-dependent dynamics, and experiments show that it outperforms the state of the art. Code is available at https://github.com/lizhe00/avatarcap.
We propose a novel neural rendering pipeline, Hybrid Volumetric-Textural Rendering (HVTR), which synthesizes virtual human avatars under arbitrary poses efficiently and at high quality. First, we learn to encode articulated human motions on a dense UV manifold of the human body surface. To handle complicated motions (e.g., self-occlusions), we then leverage the encoded information on the UV manifold to construct a 3D volumetric representation based on a dynamic pose-conditioned neural radiance field. While this allows us to represent 3D geometry with changing topology, volumetric rendering is computationally heavy. Hence, we employ only a rough volumetric representation using a pose-conditioned downsampled neural radiance field (PD-NeRF), which we can render efficiently at low resolution. In addition, we learn 2D textural features that are fused with the rendered volumetric features in image space. The key advantage of our approach is that the fused features can then be converted into a high-resolution, high-quality avatar by a fast GAN-based textural renderer. We demonstrate that hybrid rendering enables HVTR to handle complicated motions, render high-quality avatars under user-controlled poses/shapes and even loose clothing, and, most importantly, be fast at inference time. Our experimental results also demonstrate state-of-the-art quantitative performance.
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, and sequential distillation, revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, namely, exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
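Finding 1 above, that distilling token *relations* beats distilling absolute features, can be sketched as a soft cross-entropy between softmax-normalized token-token similarity maps of student and teacher. This is an illustrative numpy sketch of the general idea, not TinyMIM's implementation; the function names and the plain dot-product similarity are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_relation_loss(student_tokens, teacher_tokens, tau=1.0):
    """Soft cross-entropy between token-token relation maps.

    Each relation map is a softmax over pairwise token similarities; matching
    the maps transfers *how tokens relate to each other* rather than their
    absolute feature values. (Sketch only, not TinyMIM's exact loss.)
    """
    rel_s = softmax(student_tokens @ student_tokens.T / tau)
    rel_t = softmax(teacher_tokens @ teacher_tokens.T / tau)
    return float(-(rel_t * np.log(rel_s + 1e-12)).sum(axis=-1).mean())
```

The loss is minimized exactly when the student's relation map equals the teacher's, even if the raw token features differ, which is the property that makes relation distillation robust to dimension and depth mismatches.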
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
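The simpler of the two attacks, NAIVEATTACK, stamps a trigger onto raw data before distillation begins. The sketch below illustrates that idea on numpy image arrays; the function name, defaults, and trigger placement are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

def naive_attack(images, labels, target_label,
                 poison_frac=0.1, trigger_size=3, seed=0):
    """Stamp a white square trigger onto a fraction of raw images and relabel
    them to the attacker's target class, *before* the distillation step
    consumes the data. (Illustrative sketch of the NAIVEATTACK idea.)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a bright patch in the bottom-right corner of each poisoned image
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    labels[idx] = target_label
    return images, labels, idx
```

DOORPING differs in that the trigger itself is optimized inside the distillation loop at every iteration, rather than fixed up front, which is why it reaches near-perfect attack success rates.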
Benefiting from its capability to exploit intrinsic supervision information, contrastive learning has recently achieved promising performance in deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit existing algorithms from further improvement. 1) The quality of positive samples heavily depends on carefully designed data augmentations, while inappropriate augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of performing complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster across the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples, respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
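The cluster-guided objective described above can be sketched as follows: each node's two view embeddings form the positive pair, while the centers of the *other* high-confidence clusters serve as negatives under cross-view cosine similarity. This is a minimal numpy sketch under those assumptions, not CCGC's actual code; all names are illustrative.

```python
import numpy as np

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def cluster_contrastive_loss(z1, z2, assignments, centers):
    """Pull together the two view embeddings of each node (positive pair);
    push the view-1 embedding away from the centers of *other* clusters
    (negatives). Lower is better. (Sketch of the objective, not CCGC's code.)
    """
    loss = 0.0
    for i in range(len(z1)):
        pos = cosine(z1[i], z2[i])  # cross-view similarity, to maximize
        negs = [cosine(z1[i], centers[k])
                for k in range(len(centers)) if k != assignments[i]]
        loss += -pos + np.mean(negs)  # minimize similarity to other centers
    return loss / len(z1)
```

Using cluster centers as negatives avoids the false-negative problem of sampling random nodes, since two random nodes may well belong to the same ground-truth cluster.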
As one of the prevalent methods for building automation systems, Imitation Learning (IL) delivers promising performance across a wide range of domains. However, despite the considerable improvement in policy performance, research on the explainability of IL models remains limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model on randomly masked demonstrations and uses the conventional evaluation outcome, environment returns, as the coefficients to build an importance map. We also conducted experiments to investigate three major questions: whether frames are equally important, how effective the importance map is, and how importance maps from different IL models relate to one another. The results show that R2RISE successfully distinguishes important frames within the demonstrations.
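The RISE-style procedure described above (random masks over frames, weighted by the resulting returns) can be sketched compactly. In this illustrative sketch, the expensive retrain-and-rollout step is abstracted into a caller-supplied `evaluate_policy(mask) -> return` function; that abstraction and all names are assumptions, not the paper's interface.

```python
import numpy as np

def r2rise_importance(n_frames, evaluate_policy,
                      n_masks=500, keep_prob=0.5, seed=0):
    """RISE-style frame importance for an IL model.

    Sample random binary masks over demonstration frames, retrain/evaluate
    the black-box IL model on each masked demonstration set (abstracted here
    as evaluate_policy), and average the masks weighted by the achieved
    environment return. Frames whose inclusion correlates with high returns
    receive high importance. (Minimal sketch of the idea.)
    """
    rng = np.random.default_rng(seed)
    importance = np.zeros(n_frames)
    total = 0.0
    for _ in range(n_masks):
        mask = (rng.random(n_frames) < keep_prob).astype(float)
        ret = evaluate_policy(mask)  # stands in for retrain + rollout
        importance += ret * mask     # credit kept frames with the return
        total += ret
    return importance / max(total, 1e-12)
```

Because each mask requires retraining the IL model, the real procedure is far more expensive than this sketch suggests; the abstraction only isolates the importance-accumulation logic.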
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade visual quality. Subjective and objective measures capable of identifying and quantifying the various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (blurring, blocking, bleeding, and ringing) and two temporal PEAs (flickering and floating) on video quality. For the spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For the temporal artifacts, we improve the self-attention-based TimeSformer to detect them. Based on these six types of PEAs, we propose a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM). Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe SSTAM will be beneficial for optimizing video coding techniques.
Transformers have achieved impressive success on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such datasets are usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement granted by ImageNet pretrained weights significantly degrades when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach designed specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally exploit the Transformer on limited medical data, we propose an auxiliary difficulty-ranking task: the Transformer must identify which branch (online or target) is processing the more difficult perturbed tokens. Overall, the Transformer strives to distill transformation-invariant features from the perturbed tokens, simultaneously achieving difficulty measurement and maintaining the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks: skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of BOLT for medical image classification compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
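The online/target two-branch setup above follows the general BYOL ("bootstrap your own latent") pattern: the target branch is an exponential moving average of the online branch, and the online branch minimizes a cosine-similarity loss against the target's representation. The sketch below shows those two generic ingredients in numpy; BOLT's exact update schedule, projector heads, and difficulty-ranking head are not reproduced, and the function names are assumptions.

```python
import numpy as np

def ema_update(target_params, online_params, momentum=0.99):
    """BYOL-style target update: the target branch tracks an exponential
    moving average of the online branch, giving a slowly moving prediction
    target. (Generic sketch; BOLT's schedule may differ.)"""
    return {k: momentum * target_params[k] + (1.0 - momentum) * online_params[k]
            for k in target_params}

def byol_loss(pred_online, proj_target):
    """Scaled negative cosine similarity between the online prediction and
    the target projection: 0 when aligned, 2 when orthogonal."""
    p = pred_online / (np.linalg.norm(pred_online, axis=-1, keepdims=True) + 1e-12)
    t = proj_target / (np.linalg.norm(proj_target, axis=-1, keepdims=True) + 1e-12)
    return float((2.0 - 2.0 * (p * t).sum(axis=-1)).mean())
```

Because only the online branch receives gradients while the target is an EMA copy, the scheme avoids representational collapse without needing negative pairs.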