This paper presents a 3D generative model that uses diffusion models to automatically generate 3D digital avatars represented as neural radiance fields. A significant challenge in generating such avatars is that the memory and processing costs in 3D are prohibitive for producing the rich details required for high-quality avatars. To tackle this problem, we propose the roll-out diffusion network (Rodin), which represents a neural radiance field as multiple 2D feature maps and rolls out these maps into a single 2D feature plane within which we perform 3D-aware diffusion. The Rodin model brings much-needed computational efficiency while preserving the integrity of diffusion in 3D by using 3D-aware convolution that attends to projected features in the 2D feature plane according to their original relationship in 3D. We also use latent conditioning to orchestrate the feature generation for global coherence, leading to high-fidelity avatars and enabling their semantic editing based on text prompts. Finally, we use hierarchical synthesis to further enhance details. The 3D avatars generated by our model compare favorably with those produced by existing generative techniques. We can generate highly detailed avatars with realistic hairstyles and facial hair such as beards. We also demonstrate 3D avatar generation from image or text, as well as text-guided editability.
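To make the roll-out idea concrete, here is a minimal sketch (assumed tensor shapes and module names, not the authors' released code) of concatenating tri-plane feature maps into one 2D feature plane so that a single 2D convolution processes all three planes jointly; the 3D-aware attention to corresponding projected features is omitted.

```python
# Minimal sketch (assumptions, not the authors' implementation): roll out
# tri-plane feature maps into a single 2D canvas, apply shared 2D convolution,
# then split back into the three axis-aligned planes.
import torch
import torch.nn as nn

class RollOutTriplane(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, plane_xy, plane_xz, plane_yz):
        # Each plane: (B, C, H, W). Concatenate side by side into one feature plane.
        rolled = torch.cat([plane_xy, plane_xz, plane_yz], dim=-1)  # (B, C, H, 3W)
        rolled = self.conv(rolled)                                  # shared 2D processing
        # Split the rolled-out plane back into three planes.
        return torch.chunk(rolled, 3, dim=-1)
```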
Online media data, in the forms of images and videos, are becoming mainstream communication channels. However, recent advances in deep learning, particularly deep generative models, have opened the door to producing perceptually convincing images and videos at a low cost, which not only poses a serious threat to the trustworthiness of digital information but also has severe societal implications. This has motivated a growing research interest in media tampering detection, i.e., using deep learning techniques to examine whether media data have been maliciously manipulated. Depending on the content of the targeted images, media forgery can be divided into image tampering and Deepfake techniques. The former typically moves or erases visual elements in ordinary images, while the latter manipulates the expressions and even the identity of human faces. Accordingly, the means of defense include image tampering detection and Deepfake detection, which share a wide variety of properties. In this paper, we provide a comprehensive review of current media tampering detection approaches and discuss the challenges and trends in this field for future research.
Deep transfer learning has been widely used for knowledge transfer in recent years. The standard approach of pre-training followed by fine-tuning, or linear probing, has proven effective in many downstream tasks. This raises a challenging and ongoing question: how can we quantify cross-task transferability in a way that is consistent with actual transfer results while remaining self-consistent? Existing transferability metrics are estimated on a particular model for a given pair of source and target tasks; they must be recalculated over all existing source tasks whenever a novel, unknown target task is encountered, which is extremely computationally expensive. In this work, we highlight the properties such a metric should satisfy and evaluate existing metrics in light of these characteristics. Building upon this, we propose Principal Gradient Expectation (PGE), a simple yet effective method for assessing transferability across tasks. Specifically, we use a restart scheme to calculate every batch gradient over each weight unit more than once, and then take the average of all the gradients to obtain the expectation. The transferability between the source and target task is then estimated by computing the distance between the normalized principal gradients. Extensive experiments show that the proposed transferability metric is more stable, reliable, and efficient than state-of-the-art methods.
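The following is an illustrative sketch of the idea as described in the abstract (function and argument names are assumptions, not the authors' code): per-batch gradients are accumulated over several restarts and averaged into an expectation, and transferability is read off the distance between the normalized source and target expectations.

```python
# Hedged sketch of Principal Gradient Expectation: average batch gradients over
# multiple restarts, then compare normalized source/target gradient expectations.
import torch

def principal_gradient_expectation(model, loss_fn, loader, restarts: int = 3):
    grads = []
    for _ in range(restarts):                 # restart scheme: revisit every batch
        for inputs, targets in loader:
            model.zero_grad()
            loss_fn(model(inputs), targets).backward()
            g = torch.cat([p.grad.flatten() for p in model.parameters()
                           if p.grad is not None])
            grads.append(g.detach().clone())
    return torch.stack(grads).mean(dim=0)     # expectation over all batch gradients

def transferability(pge_source, pge_target):
    # Smaller distance between normalized principal gradients -> easier transfer.
    s = pge_source / (pge_source.norm() + 1e-12)
    t = pge_target / (pge_target.norm() + 1e-12)
    return -torch.norm(s - t)
```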
Visual anomaly detection plays a crucial role not only in manufacturing inspection, which finds product defects during manufacturing processes, but also in maintenance inspection, which keeps equipment in optimal working condition, particularly outdoors. Due to the scarcity of defective samples, unsupervised anomaly detection has attracted great attention in recent years. However, existing datasets for unsupervised anomaly detection are biased towards manufacturing inspection and do not consider maintenance inspection, which is usually conducted in uncontrolled outdoor environments with varying camera viewpoints, cluttered backgrounds, and degradation of object surfaces after long-term operation. We focus on outdoor maintenance inspection and contribute a comprehensive Maintenance Inspection Anomaly Detection (MIAD) dataset, which contains more than 100K high-resolution color images covering various outdoor industrial scenarios. The dataset is generated by 3D graphics software and covers both surface and logical anomalies with pixel-precise ground truth. We conduct extensive evaluations of representative unsupervised anomaly detection algorithms, and we expect MIAD and the corresponding experimental results to inspire the research community working on outdoor unsupervised anomaly detection. Worthwhile related future work can be spawned from our new dataset.
Breast cancer is one of the most common cancers endangering women's health globally. Accurate segmentation of target lesions is essential for early clinical intervention and postoperative follow-up. Recently, many convolutional neural networks (CNNs) have been proposed to segment breast tumors from ultrasound images. However, complex ultrasound patterns and the variable shape and size of tumors make accurate segmentation of breast lesions challenging. Motivated by selective kernel convolution, we introduce an enhanced selective kernel convolution for breast tumor segmentation, which integrates multiple feature-map region representations and adaptively recalibrates the weights of these regions along the channel and spatial dimensions. This region recalibration strategy enables the network to focus more on high-contributing region features and to mitigate the perturbation of less useful regions. Finally, the enhanced selective kernel convolution is integrated into a U-Net with deep supervision constraints to adaptively capture robust representations of breast tumors. Extensive experiments against twelve state-of-the-art deep learning segmentation methods on three public breast ultrasound datasets demonstrate that our method achieves more competitive segmentation performance on breast ultrasound images.
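A rough sketch of the selective-kernel idea with channel- and spatial-wise recalibration follows (the exact branch configuration and gating design of the paper's enhanced variant are not specified in the abstract, so this structure is an assumption):

```python
# Sketch of a selective kernel block: two branches with different receptive
# fields, fused by channel x spatial attention so high-contributing regions
# dominate the combined feature map. Not the paper's exact design.
import torch
import torch.nn as nn

class SelectiveKernelBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        u = self.branch3(x) + self.branch5(x)              # aggregate branch responses
        w = self.channel_gate(u) * self.spatial_gate(u)    # channel x spatial weights
        return w * self.branch3(x) + (1 - w) * self.branch5(x)
```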
Given sufficient training data on the source domain, cross-domain few-shot learning (CD-FSL) aims at recognizing new classes with a small number of labeled examples on the target domain. The key to addressing CD-FSL is to narrow the domain gap and transfer the knowledge of a network trained on the source domain to the target domain. To help knowledge transfer, this paper introduces an intermediate domain generated by mixing images from the source and target domains. Specifically, to generate the optimal intermediate domain for different target data, we propose a novel target-guided dynamic mixup (TGDM) framework that leverages the target data to guide the generation of mixed images via dynamic mixup. The proposed TGDM framework contains a Mixup-3T network for learning classifiers and a dynamic ratio generation network (DRGN) for learning the optimal mix ratio. To better transfer knowledge, the Mixup-3T network contains three branches with shared parameters for classifying classes in the source, target, and intermediate domains. To generate the optimal intermediate domain, the DRGN learns to generate an optimal mix ratio according to the performance on auxiliary target data. The whole TGDM framework is then trained via bi-level meta-learning so that TGDM can rectify itself to achieve optimal performance on target data. Extensive experimental results on several benchmark datasets verify the effectiveness of our method.
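A minimal sketch of the dynamic mixup step is shown below (the interface of the ratio network is hypothetical; the actual DRGN and its inputs are described in the paper, not here):

```python
# Sketch: a ratio predicted from the current batch blends source and target
# images into an intermediate domain. Interface names are assumptions.
import torch

def dynamic_mixup(source_imgs, target_imgs, ratio_net):
    # ratio_net stands in for the dynamic ratio generation network (DRGN),
    # mapping the concatenated batch to a per-sample mix ratio lam in (0, 1).
    lam = torch.sigmoid(ratio_net(torch.cat([source_imgs, target_imgs], dim=1)))
    lam = lam.view(-1, 1, 1, 1)
    mixed = lam * source_imgs + (1.0 - lam) * target_imgs  # intermediate-domain images
    return mixed, lam
```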
Although exploiting the transferability of adversarial examples can achieve high attack success rates for untargeted attacks, it does not work well for targeted attacks, because the gradient directions from a source image to a target class usually differ across DNNs. To improve the transferability of targeted attacks, recent studies align the features of generated adversarial examples with the target-class feature distributions learned from an auxiliary network or a generative adversarial network. However, these works assume that the training dataset is available and require a large amount of time to train such networks, making them hard to apply in the real world. In this paper, we revisit adversarial examples with targeted transferability from the perspective of universality and find that highly universal adversarial perturbations tend to be more transferable. Based on this observation, we propose the Locality of Images (LI) attack to improve targeted transferability. Specifically, instead of using only the classification loss, LI introduces a feature similarity loss between the adversarially perturbed original image and its randomly cropped counterpart, which makes the features of the adversarial perturbation more dominant than those of the benign image and thus improves targeted transferability. By incorporating the locality of images into the perturbation optimization, the LI attack emphasizes that targeted perturbations should be relevant to diverse input patterns, even local image patches. Extensive experiments show that LI achieves high success rates for transfer-based targeted attacks. When attacking the ImageNet-compatible dataset, LI yields an improvement of 12% over existing state-of-the-art methods.
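A sketch of the LI objective under this reading of the abstract is given below (names and the crop configuration are illustrative assumptions, not the authors' released implementation): a targeted classification loss plus a feature-similarity term between the adversarial image and a randomly cropped-and-resized copy of it.

```python
# Hedged sketch of the LI loss: classification loss toward the target class,
# plus cosine feature similarity between the adversarial image and a random crop.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

random_crop = T.RandomResizedCrop(224, scale=(0.5, 1.0))

def li_loss(model, feature_extractor, adv_img, target_label, alpha: float = 1.0):
    cls_loss = F.cross_entropy(model(adv_img), target_label)   # push toward target class
    feat_full = feature_extractor(adv_img)
    feat_crop = feature_extractor(random_crop(adv_img))
    sim_loss = 1.0 - F.cosine_similarity(feat_full.flatten(1),
                                         feat_crop.flatten(1)).mean()
    return cls_loss + alpha * sim_loss  # encourage perturbation features to dominate
```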
Fusing LiDAR and camera information is essential for accurate and reliable 3D object detection in autonomous driving systems. However, this is challenging because of the difficulty of combining multi-granularity geometric and semantic features from two drastically different modalities. Recent approaches aim to exploit the semantic density of camera features by lifting points in 2D camera images (referred to as seeds) into 3D space, and they can be roughly divided into 1) early fusion of raw points, which aims to augment the 3D point cloud at the early input stage, and 2) late fusion in BEV (bird's-eye view), which merges LiDAR and camera BEV features before the detection head. Although both have merits in enhancing the representation power of the joint features, such single-level fusion strategies are suboptimal with respect to the above challenge. Their main drawback is the inability to sufficiently interact the multi-granularity semantic features from the two distinct modalities. To this end, we propose a novel framework that focuses on multi-scale progressive interaction of multi-granularity LiDAR and camera features. Our proposed method, abbreviated as MDMSFusion, achieves state-of-the-art results in 3D object detection, with 69.1 mAP and 71.8 NDS on the nuScenes validation set and 70.8 mAP and 73.2 NDS on the nuScenes test set, ranking first and second among single-model, non-ensemble approaches at the time of submission.
Zero-shot learning (ZSL) is a learning regime that recognizes unseen classes by generalizing the visual-semantic relationships learned from seen classes. To obtain an effective ZSL model, one may resort to training samples from multiple sources, which inevitably raises privacy concerns about data sharing across different organizations. In this paper, we propose a novel federated zero-shot learning (FedZSL) framework, which learns a central model from decentralized data residing on edge devices. To better generalize to previously unseen classes, FedZSL allows the training data on each device to be sampled from non-overlapping classes, which are far from the i.i.d. assumption commonly made in conventional federated learning. We identify two key challenges in the FedZSL protocol: 1) the trained models are prone to being biased toward the locally observed classes, thus failing to generalize to unseen classes and/or seen classes that appear on other devices; 2) as each class in the training data comes from a single source, the central model is highly vulnerable to model replacement (backdoor) attacks. To address these issues, we propose three local objectives for visual-semantic alignment and cross-device alignment through relation distillation, which leverages the normalized class covariance to regularize the consistency of prediction logits across devices. To defend against backdoor attacks, a feature-level defense technique is proposed: since malicious samples are less correlated with the given semantic attributes, visual features of low magnitude are discarded to stabilize model updates. The effectiveness and robustness of FedZSL are demonstrated by extensive experiments on three zero-shot benchmark datasets.
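One way to read the feature-level defense is sketched below (the granularity of the filtering and the keep ratio are assumptions; the abstract only states that low-magnitude visual features are discarded):

```python
# Hedged sketch of a magnitude-based feature filter: drop the visual features
# with the smallest magnitude before they contribute to the model update.
import torch

def magnitude_defense(visual_feats, keep_ratio: float = 0.8):
    # visual_feats: (B, D) batch of visual feature vectors from an edge device.
    magnitudes = visual_feats.norm(dim=1)               # per-sample feature magnitude
    k = max(1, int(keep_ratio * visual_feats.size(0)))
    kept = magnitudes.topk(k).indices                   # keep the largest-magnitude ones
    return visual_feats[kept]                           # low-magnitude features discarded
```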
Real-world data typically follow a long-tailed distribution, where a few majority classes occupy most of the data while most minority classes contain only a limited number of samples. Classification models that minimize cross-entropy struggle to represent and classify the tail classes. Although the problem of learning unbiased classifiers has been well studied, methods for learning representations of imbalanced data remain under-explored. In this paper, we focus on representation learning for imbalanced data. Recently, supervised contrastive learning (SCL) has shown promising performance on balanced data. However, through theoretical analysis, we find that for long-tailed data it fails to form a regular simplex, which is an ideal geometric configuration for representation learning. To correct the optimization behavior of SCL and further improve long-tailed visual recognition, we propose a novel loss for balanced contrastive learning (BCL). Compared with SCL, BCL has two improvements: class-averaging, which balances the gradient contribution of negative classes, and class-complement, which allows all classes to appear in every mini-batch. The proposed BCL method satisfies the condition of forming a regular simplex and assists the optimization of cross-entropy. Equipped with BCL, the proposed two-branch framework obtains stronger feature representations and achieves competitive performance on long-tailed benchmark datasets such as CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist2018. Our code is available at https://github.com/flamiezhu/bcl.
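A simplified sketch of the class-averaging idea follows (this is an interpretation of the abstract, not the official BCL implementation; see the repository linked above): each negative class contributes the average, rather than the sum, of its instances' exponentiated similarities to the contrastive denominator, so head classes no longer dominate the gradient.

```python
# Hedged sketch of class-averaged supervised contrastive loss for long-tailed data.
import torch

def class_averaged_contrastive(features, labels, temperature: float = 0.1):
    # features: (B, D) L2-normalized embeddings; labels: (B,)
    sim = features @ features.t() / temperature          # pairwise similarities
    loss = 0.0
    for i in range(features.size(0)):
        denom = 0.0
        for c in labels.unique():
            mask = (labels == c)
            mask[i] = False                               # exclude the anchor itself
            if mask.any():
                denom = denom + torch.exp(sim[i][mask]).mean()  # per-class average
        pos = (labels == labels[i])
        pos[i] = False
        if pos.any() and denom > 0:
            loss = loss - (sim[i][pos] - torch.log(denom)).mean()
    return loss / features.size(0)
```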