Image-based virtual try-on provides the ability to transfer a clothing item onto a photo of a given person, which is usually accomplished by warping the item to a given human pose and fitting the warped item to the person. However, the results of previous methods on realistic synthetic images (e.g., selfies) are not convincing, because the neck is misrepresented and the style of the garment changes substantially. To address these challenges, we propose a novel method, called VITON-CROP, that solves this unique problem. Compared with existing state-of-the-art virtual try-on models, VITON-CROP synthesizes images more robustly when integrated with random crop augmentation. In experiments, we demonstrate that VITON-CROP outperforms VITON-HD both qualitatively and quantitatively.
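The abstract does not specify how the random crop augmentation is wired into training; the sketch below (in PyTorch, with hypothetical tensor inputs) only illustrates the general idea of applying one shared random crop across the spatially aligned inputs of a try-on model.

```python
import torch

def paired_random_crop(person, garment_warped, seg_map, crop_h, crop_w):
    """Apply the SAME random crop to every spatially aligned input.

    A minimal sketch: the actual VITON-CROP augmentation policy
    (crop sizes, probabilities) is not specified in the abstract.
    All tensors are (C, H, W) and spatially aligned with `person`.
    """
    _, h, w = person.shape
    top = torch.randint(0, h - crop_h + 1, (1,)).item()
    left = torch.randint(0, w - crop_w + 1, (1,)).item()

    def crop(t):
        return t[:, top:top + crop_h, left:left + crop_w]

    # Cropping all aligned inputs identically keeps supervision consistent.
    return crop(person), crop(garment_warped), crop(seg_map)
```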
Image-based virtual try-on aims to synthesize an image of a person wearing a given clothing item. To solve the task, existing methods warp the clothing item to fit the person's body and generate a segmentation map of the person wearing the item before fusing the item with the person. However, when the warping and segmentation-generation stages operate individually without information exchange, misalignment between the warped clothes and the segmentation map occurs, leading to artifacts in the final image. The information disconnection also causes excessive warping near the clothing regions occluded by body parts, so-called pixel-squeezing artifacts. To settle these issues, we propose a novel try-on condition generator as a unified module of the two stages (i.e., warping and segmentation generation). A newly proposed feature fusion block in the condition generator implements the information exchange, and the condition generator does not create any misalignment or pixel-squeezing artifacts. We also introduce discriminator rejection, which filters out incorrect segmentation map predictions and assures the performance of the virtual try-on framework. Experiments on a high-resolution dataset show that our model successfully handles misalignment and occlusion and significantly outperforms the baselines. Code is available at https://github.com/sangyun884/hr-viton.
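The exact architecture of the feature fusion block is given in the paper and repository, not in this abstract; the following is a minimal, hypothetical sketch of the underlying idea: letting a warping branch and a segmentation branch exchange intermediate features so neither stage runs without information from the other.

```python
import torch
import torch.nn as nn

class FeatureFusionBlock(nn.Module):
    """Minimal sketch of a fusion block that exchanges information
    between a flow (warping) branch and a segmentation branch.
    The real HR-VITON block differs; see the official repository.
    """
    def __init__(self, channels):
        super().__init__()
        self.to_seg = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.to_flow = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, flow_feat, seg_feat):
        joint = torch.cat([flow_feat, seg_feat], dim=1)
        # Each branch is updated with information from the other, so the
        # warped clothes and segmentation stay mutually consistent.
        return (self.act(self.to_flow(joint)) + flow_feat,
                self.act(self.to_seg(joint)) + seg_feat)
```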
Image-based virtual try-on aims at replacing the cloth in a person image with a garment image (in-shop clothes), and has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps to an unreasonable body part. Based on this in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts sharpened semantic parsing on the try-on person. Aided by semantics guidance and a pose prior, textures of various complexity are selectively blended with human parts in a copy-and-paste manner. Then, a Generative Module (GM) takes charge of synthesizing the final try-on image and learning de-occlusion jointly. In comparison to state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
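As a rough illustration of the semantically-guided mixup idea — blending external texture into semantically selected regions in a copy-and-paste manner — the sketch below uses a hypothetical parsing label map and blending weight; the actual DOC-VTON module is more elaborate.

```python
import torch

def semantic_occlusion_mixup(tryon_img, texture_src, parsing, part_ids, alpha=0.7):
    """Sketch of semantics-guided occlusion simulation: blend an external
    texture into selected body-part regions of a try-on image in a
    copy-and-paste manner. A hypothetical simplification of DOC-VTON's
    mixup module; `parsing` is an (H, W) label map, images are (C, H, W).
    """
    mask = torch.zeros_like(parsing, dtype=torch.bool)
    for pid in part_ids:                      # e.g. arm/torso labels
        mask |= parsing == pid
    mask = mask.unsqueeze(0)                  # (1, H, W), broadcasts over C
    occluded = torch.where(mask,
                           alpha * texture_src + (1 - alpha) * tryon_img,
                           tryon_img)
    return occluded, mask
```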
Online clothing catalogs lack diversity in body shape and garment size. Brands commonly display their garments on models of one or two sizes, rarely including plus-size models. In this work, we propose a new method, SizeGAN, for generating images of garments on different-sized models. To change the garment and model size while maintaining a photorealistic image, we incorporate image alignment ideas from the medical imaging literature into the StyleGAN2-ADA architecture. Our method learns deformation fields at multiple resolutions and uses a spatial transformer to modify the garment and model size. We evaluate our approach along three dimensions: realism, garment faithfulness, and size. To our knowledge, SizeGAN is the first method to focus on this size under-representation problem for modeling clothing. We provide an analysis comparing SizeGAN to other plausible approaches and additionally provide the first clothing dataset with size labels. In a user study comparing SizeGAN and two recent virtual try-on methods, we show that our method ranked first in each dimension and was vastly preferred for realism and garment faithfulness. In comparison to most previous work, which has focused on generating photorealistic images of garments, our work shows that it is possible to generate images that are both photorealistic and cover diverse garment sizes.
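The resampling step at the heart of a spatial transformer with learned deformation fields can be written in a few lines; the sketch below assumes normalized offset fields and is only a generic illustration, not SizeGAN's multi-resolution implementation.

```python
import torch
import torch.nn.functional as F

def apply_deformation(image, deformation):
    """Resample an image through a learned deformation field, the spatial
    transformer step used to change garment/model size. A minimal sketch:
    `image` is (N, C, H, W); `deformation` is an (N, H, W, 2) offset field
    in normalized [-1, 1] coordinates.
    """
    n, _, h, w = image.shape
    # Identity sampling grid in normalized coordinates.
    ys = torch.linspace(-1, 1, h)
    xs = torch.linspace(-1, 1, w)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    identity = torch.stack([gx, gy], dim=-1).expand(n, h, w, 2)
    # Offsetting the grid by the deformation field warps the image.
    return F.grid_sample(image, identity + deformation, align_corners=True)
```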
Image-based virtual try-on strives to transfer the appearance of a clothing item onto the image of a target person. Prior work has mainly focused on upper-body clothes (e.g., t-shirts, shirts, and tops) and neglected full-body or lower-body items. This shortcoming arises from one main factor: the publicly available datasets currently used for image-based virtual try-on do not account for this variety, thus limiting progress in the field. To address this deficiency, we introduce Dress Code, which contains images of multi-category clothes. Dress Code is more than 3x larger than publicly available datasets for image-based virtual try-on and features high-resolution paired images (1024x768) with front-view, full-body reference models. To generate HD try-on images with high visual quality and rich details, we propose learning fine-grained discriminating features. Specifically, we leverage a semantic-aware discriminator that makes predictions at the pixel level rather than the image or patch level. Extensive experimental evaluation demonstrates that the proposed approach surpasses the baselines and state-of-the-art competitors in terms of visual quality and quantitative results. The Dress Code dataset is publicly available at https://github.com/aimagelab/dress-code.
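A semantic-aware discriminator that predicts at the pixel level can be sketched as a fully convolutional classifier with one output channel per semantic class plus a "fake" channel; the layer sizes and class count below are assumptions for illustration.

```python
import torch.nn as nn

class PixelSemanticDiscriminator(nn.Module):
    """Sketch of a semantic-aware discriminator that classifies every
    pixel as one of `num_classes` semantic labels (real) or as fake,
    instead of emitting a single image-level score. A simplified
    stand-in for the discriminator described for Dress Code.
    """
    def __init__(self, in_ch=3, base=64, num_classes=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, base * 2, 3, padding=1), nn.LeakyReLU(0.2),
            # num_classes semantic labels + 1 extra "fake" channel.
            nn.Conv2d(base * 2, num_classes + 1, 1),
        )

    def forward(self, img):
        return self.net(img)  # (N, num_classes + 1, H, W) per-pixel logits
```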
Image-based virtual try-on is one of the most promising applications of human-centric image generation because of its tremendous real-world potential. However, since most existing approaches fit in-shop garments onto a target person, they require the laborious and restrictive construction of paired training datasets, which severely limits their scalability. Although a few recent works attempt to transfer garments directly from one person to another, alleviating the need to collect paired datasets, their performance suffers from the lack of paired (supervised) information. In particular, disentangling the style and spatial information of the garment becomes a challenge, which existing methods address by requiring auxiliary data or extensive online optimization procedures, thereby still inhibiting their scalability. To achieve a scalable virtual try-on system that can transfer arbitrary garments between a source and a target person in an unsupervised manner, we propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on. Specifically, to disentangle the style and spatial information of each garment, PASTA-GAN includes an innovative patch-routed disentanglement module that successfully retains garment texture and shape characteristics. Guided by the source person's keypoints, the patch-routed disentanglement module first decouples the garment into normalized patches, thereby eliminating the garment's inherent spatial information, and then reconstructs the normalized patches into a warped garment complying with the target person's pose. Given the warped garment, PASTA-GAN further introduces a novel spatially-adaptive residual block that guides the generator to synthesize more realistic garment details.
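A toy version of the "decouple into normalized patches" step might look like the following, where keypoint-derived regions are cropped and resized to a canonical size; the real module routes perspective-warped patches, so the axis-aligned boxes here are a deliberate simplification.

```python
import torch
import torch.nn.functional as F

def normalize_patches(garment, boxes, patch_size=32):
    """Sketch of decoupling a garment into normalized patches: crop
    keypoint-derived regions and resize each to a canonical size,
    discarding its original spatial placement. `garment` is (C, H, W);
    `boxes` is a list of (top, left, h, w) tuples derived from keypoints.
    """
    patches = []
    for top, left, h, w in boxes:
        crop = garment[:, top:top + h, left:left + w].unsqueeze(0)
        patches.append(F.interpolate(crop, size=(patch_size, patch_size),
                                     mode="bilinear", align_corners=False))
    # (P, C, patch_size, patch_size): style preserved, layout removed.
    return torch.cat(patches, dim=0)
```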
Image-based virtual try-on is one of the most promising applications of human-centric image generation because of its tremendous real-world potential. In this work, we take a step forward and explore versatile virtual try-on solutions, which we argue should possess three main properties: they should support unsupervised training, arbitrary garment categories, and controllable garment editing. To this end, we propose a characteristic-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN++ (PASTA-GAN++), to achieve a versatile system for high-resolution unpaired virtual try-on. Specifically, our PASTA-GAN++ consists of an innovative patch-routed disentanglement module that decouples an intact garment into normalized patches, which preserves the garment's style information while eliminating its spatial information, thereby alleviating the overfitting issue during unsupervised training. Furthermore, PASTA-GAN++ introduces a patch-based garment representation and a patch-guided parsing synthesis block, allowing it to handle arbitrary garment categories and support local garment editing. Finally, to obtain try-on results with realistic texture details, PASTA-GAN++ incorporates a novel spatially-adaptive residual module to inject coarse warped garment features into the generator. Extensive experiments on our newly collected UnPaired virtual Try-on (UPT) dataset demonstrate the superiority of PASTA-GAN++ over existing SOTAs and its ability to perform controllable garment editing.
Image-based virtual try-on techniques have shown great promise for enhancing the user-experience and improving customer satisfaction on fashion-oriented e-commerce platforms. However, existing techniques are currently still limited in the quality of the try-on results they are able to produce from input images of diverse characteristics. In this work, we propose a Context-Driven Virtual Try-On Network (C-VTON) that addresses these limitations and convincingly transfers selected clothing items to the target subjects even under challenging pose configurations and in the presence of self-occlusions. At the core of the C-VTON pipeline are: (i) a geometric matching procedure that efficiently aligns the target clothing with the pose of the person in the input images, and (ii) a powerful image generator that utilizes various types of contextual information when synthesizing the final try-on result. C-VTON is evaluated in rigorous experiments on the VITON and MPV datasets and in comparison to state-of-the-art techniques from the literature. Experimental results show that the proposed approach is able to produce photo-realistic and visually convincing results and significantly improves on the existing state-of-the-art.
Virtual try-on aims to generate photo-realistic fitting results given in-shop clothes and a reference person image. Existing methods usually build multi-stage frameworks that handle clothes warping and body blending separately, or rely heavily on intermediate parser-based labels, which may be noisy or even inaccurate. To address these challenges, we propose a single-stage try-on framework built on a novel Deformable Attention Flow (DAFlow), which applies a deformable attention scheme to multi-flow estimation. With only pose keypoints as guidance, self- and cross-deformable attention flows are estimated for the reference person and the garment image, respectively. By sampling multiple flow fields, feature-level and pixel-level information from different semantic areas is simultaneously extracted and merged through the attention mechanism. This enables clothes warping and body synthesis at the same time, leading to photo-realistic results in an end-to-end manner. Extensive experiments on two try-on datasets demonstrate that our proposed method achieves state-of-the-art performance both qualitatively and quantitatively. Furthermore, additional experiments on two other image-editing tasks illustrate the versatility of our method for multi-view synthesis and image animation.
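The core mechanism — sampling source features through several candidate flow fields and merging them with attention weights — can be sketched as follows; the shapes and softmax merge are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multi_flow_merge(feat, flows, logits):
    """Sketch of merging multiple flow-field samplings with attention,
    the idea behind deformable attention flow. `feat` is (N, C, H, W);
    `flows` is a list of K normalized sampling grids, each (N, H, W, 2);
    `logits` is (N, K, H, W) per-pixel attention scores.
    """
    weights = torch.softmax(logits, dim=1)           # attention over K flows
    warped = [F.grid_sample(feat, g, align_corners=True) for g in flows]
    warped = torch.stack(warped, dim=1)              # (N, K, C, H, W)
    # Weighted sum over the K candidate warpings per pixel.
    return (weights.unsqueeze(2) * warped).sum(dim=1)
```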
Previous virtual try-on methods usually focus on aligning a clothing item with the person, limiting their ability to exploit the complex pose, shape and skin color of the person, as well as the overall structure of the clothing, all of which are vital to photo-realistic virtual try-on. To address this weakness, we propose the Fill in Fabrics (FIFA) model, a self-supervised conditional generative adversarial network based framework comprising a Fabricator and a unified virtual try-on pipeline with a Segmenter, Warper and Fuser. The Fabricator reconstructs the clothing image when provided with masked clothing as input, and learns the overall structure of the clothing by filling in fabrics. A virtual try-on pipeline is then trained by transferring the learned representations from the Fabricator to the Warper in an effort to warp and refine the target clothing. We also propose a multi-scale structural constraint to enforce global context at multiple scales while warping the target clothing to better fit the pose and shape of the person. Extensive experiments demonstrate that our FIFA model achieves state-of-the-art results on the standard VITON dataset for virtual try-on of clothing items, and is effective at handling complex poses and retaining the texture and embroidery of the clothing.
The development of online economies has created demand for generating images of models wearing product clothes, to display new clothes and promote sales. However, expensive proprietary model images challenge existing image-based virtual try-on methods in this scenario, since most of them require training on a considerable number of model images accompanied by paired clothes images. In this paper, we propose a cheap yet scalable weakly-supervised method called Deep Generative Projection (DGP) to address this particular scenario. At the core of the proposed method lies the imitation of how humans predict the wearing effect: an unsupervised imagination based on life experience rather than computation rules learned from supervision. Here, a pretrained StyleGAN is used to capture that practical wearing experience. Experiments show that projecting a rough alignment of clothes and body onto the StyleGAN space can yield photo-realistic wearing results. Experiments on real-scene proprietary model images demonstrate the superiority of DGP over state-of-the-art supervised methods for generating clothing model images.
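A minimal sketch of generative projection under these assumptions: optimize a latent code so a pretrained generator reproduces the coarse clothes-and-body composite. The `generator` callable and the plain pixel L2 objective below are placeholders; DGP's actual objective is richer than this illustration.

```python
import torch

def project_to_stylegan(generator, coarse_composite, steps=500, lr=0.05):
    """Sketch of deep generative projection: optimize a latent code so a
    pretrained generator reproduces a coarse clothes+body composite,
    letting the GAN prior 'imagine' a photo-realistic wearing result.
    `generator(w)` is a hypothetical pretrained StyleGAN mapping a
    (1, 512) latent to an image matching `coarse_composite`'s shape.
    """
    w = torch.zeros(1, 512, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pixel L2 stands in for the richer feature-space losses one
        # would use in practice.
        loss = torch.mean((generator(w) - coarse_composite) ** 2)
        loss.backward()
        opt.step()
    return w.detach()
```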
Deep learning based virtual try-on systems have made encouraging progress recently, but several important challenges remain, such as trying on arbitrary clothes of all types, trying on clothes from one category to another, and generating results with few artifacts. To handle these issues, in this paper we first collect a new dataset with various types of clothes, i.e., tops, bottoms, and whole-body garments, each with multiple categories and rich information on clothing characteristics such as patterns, logos, and other details. Based on this dataset, we propose the Arbitrary Virtual Try-On Network (AVTON) for all types of clothes, which can synthesize realistic try-on images by preserving and trading off the characteristics of the target clothes and the reference person. Our approach includes three modules: 1) a Limbs Prediction Module, which predicts human body parts while preserving the characteristics of the reference person. This is especially suitable for cross-category try-on tasks (e.g., long sleeves ↔ short sleeves or long pants ↔ skirts, etc.), where the skin color and details of the exposed arms or legs can be reasonably predicted; 2) an Improved Geometric Matching Module, which is designed to warp clothes according to the geometry of the target person. We improve the TPS-based warping method with a compactly supported radial basis function (Wendland's Ψ-function); 3) a Tradeoff Fusion Module, which trades off the characteristics of the warped clothes and the reference person. This module makes the generated try-on images look more natural and realistic, based on a fine-tuned symmetric network structure. Extensive simulations are conducted, and our approach achieves better performance compared with state-of-the-art virtual try-on methods.
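Wendland's compactly supported functions are a standard RBF family; assuming the commonly used ψ_{3,1} member, the kernel could be implemented as below. Which member of the family AVTON actually uses is not stated in this abstract.

```python
import torch

def wendland_psi(r):
    """Wendland's compactly supported C^2 radial basis function
    psi_{3,1}(r) = (1 - r)^4_+ (4 r + 1). Because the support is compact,
    far-away control points have zero influence on the warp, unlike the
    global thin-plate spline (TPS) kernel. The choice of psi_{3,1} here
    is an assumption for illustration.
    """
    r = torch.clamp(r, min=0.0)
    return torch.clamp(1.0 - r, min=0.0) ** 4 * (4.0 * r + 1.0)
```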
Image and video synthesis has become a blooming topic in computer vision and machine learning communities along with the developments of deep generative models, due to its great academic and application value. Many researchers have been devoted to synthesizing high-fidelity human images as one of the most commonly seen object categories in daily lives, where a large number of studies are performed based on various deep generative models, task settings and applications. Thus, it is necessary to give a comprehensive overview on these variant methods on human image generation. In this paper, we divide human image generation techniques into three paradigms, i.e., data-driven methods, knowledge-guided methods and hybrid methods. For each route, the most representative models and the corresponding variants are presented, where the advantages and characteristics of different methods are summarized in terms of model architectures and input/output requirements. Besides, the main public human image datasets and evaluation metrics in the literature are also summarized. Furthermore, due to the wide application potentials, two typical downstream usages of synthesized human images are covered, i.e., data augmentation for person recognition tasks and virtual try-on for fashion customers. Finally, we discuss the challenges and potential directions of human image generation to shed light on future research.
Virtual try-on systems have attracted significant attention for their potential to provide customers with realistic, personalized product demonstrations in virtualized settings. In this paper, we present PT-VTON, a novel pose-transfer framework for clothing that enables virtual try-on with arbitrary poses. PT-VTON can be applied to the fashion industry with minimal modification of existing systems while satisfying overall visual fashionability and detailed fabric-appearance requirements. It enables efficient clothes transfer between model and user images with arbitrary poses and body shapes. We implement a prototype of PT-VTON and demonstrate that our system can match or surpass many other approaches in the face of drastic pose variation by preserving detailed human and fabric characteristics. PT-VTON outperforms alternative approaches on both machine-based quantitative metrics and qualitative results.
An outfit visualization method generates an image of a person wearing real garments from images of those garments. Current methods can produce images that look realistic and preserve garment identity, captured in details such as collar, cuffs, texture, hem, and sleeve length. However, no current method can both control how the garment is worn -- including tuck or untuck, opened or closed, high or low on the waist, etc. -- and generate realistic images that accurately preserve the properties of the original garment. We describe an outfit visualization method that controls drape while preserving garment identity. Our system allows instance independent editing of garment drape, which means a user can construct an edit (e.g. tucking a shirt in a specific way) that can be applied to all shirts in a garment collection. Garment detail is preserved by relying on a warping procedure to place the garment on the body and a generator then supplies fine shading detail. To achieve instance independent control, we use control points with garment category-level semantics to guide the warp. The method produces state-of-the-art quality images, while allowing creative ways to style garments, including allowing tops to be tucked or untucked; jackets to be worn open or closed; skirts to be worn higher or lower on the waist; and so on. The method allows interactive control to correct errors in individual renderings too. Because the edits are instance independent, they can be applied to large pools of garments automatically and can be conditioned on garment metadata (e.g. all cropped jackets are worn closed or all bomber jackets are worn closed).
The vision community has explored numerous pose-guided human editing methods due to their extensive practical applications. Most of these methods still use an image-to-image formulation in which a single image is given as input to produce an edited image as output. However, the problem is ill-defined in cases where the target pose differs significantly from the input pose. Existing methods then resort to in-painting or style transfer to handle occlusions and preserve content. In this paper, we explore the use of multiple views to minimize the issue of missing information and generate an accurate representation of the underlying human model. To fuse the knowledge from multiple viewpoints, we design a selector network that takes the pose keypoints and texture from images and generates an interpretable per-pixel selection map. After that, the encodings from a separate network (trained on a single-image human reposing task) are merged in the latent space. This enables us to generate accurate, precise, and visually coherent images for different editing tasks. We show the application of our network on two newly proposed tasks: multi-view human reposing and mix-and-match human image generation. Additionally, we study the limitations of single-view editing and scenarios in which multi-view provides a much better alternative.
Hairstyle transfer is the task of modifying a source hairstyle to a target one. Although recent hairstyle transfer models can reflect the delicate features of hairstyles, they still have two major limitations. First, existing methods fail to transfer hairstyles when the source and target images have different poses (e.g., viewing direction or face size), which is prevalent in the real world. Also, previous models generate unrealistic images when there is a non-trivial region in the source image occluded by its original hair; when long hair is modified to short hair, the shoulders or background occluded by the long hair must be inpainted. To address these issues, we propose HairFIT, a novel framework for pose-invariant hairstyle transfer. Our model consists of two stages: 1) flow-based hair alignment and 2) hair synthesis. In the hair alignment stage, we leverage a keypoint-based optical flow estimator to align the target hairstyle with the source pose. Then, we generate the final hairstyle-transferred image in the hair synthesis stage based on a Semantic-region-aware Inpainting Mask (SIM) estimator. Our SIM estimator divides the occluded regions in the source image into different semantic regions to reflect their distinct features during inpainting. To demonstrate the effectiveness of our model, we conduct quantitative and qualitative evaluations using multi-view datasets, K-hairstyle and VoxCeleb. The results indicate that HairFIT achieves state-of-the-art performance by successfully transferring hairstyles between images of different poses, which has never been achieved before.
Databases of virtual cardiac MR images generated by MR-physics-based simulation are of great interest for developing deep learning analysis networks. However, the use of such databases is limited, or shows suboptimal performance, due to the realism gap, missing textures, and the simplified appearance of simulated images. In this work we 1) provide simulations of different anatomies on virtual XCAT subjects, and 2) propose a sim2real translation network to improve image realism. Our usability experiments show that sim2real data has good potential to augment training data and boost the performance of segmentation algorithms.
Person image generation aims to perform non-rigid deformation on source images, which typically requires unaligned data pairs for training. Recently, self-supervised methods have shown great promise for this task by merging disentangled representations for self-reconstruction. However, those methods fail to exploit the spatial correlation between the disentangled features. In this paper, we propose a Self-supervised Correlation Mining Network (SCM-Net) to rearrange source images in the feature space, in which two collaborative modules are integrated: a Decomposed Style Encoder (DSE) and a Correlation Mining Module (CMM). Specifically, the DSE first creates unaligned pairs at the feature level. Then, the CMM establishes a spatial correlation field for feature rearrangement. Finally, a translation module converts the rearranged features into realistic results. Meanwhile, to improve the fidelity of cross-scale pose transformation, we propose a graph-based Body Structure Retaining loss (BSR loss) to preserve a reasonable body structure from half body to full body. Extensive experiments on the DeepFashion dataset demonstrate the superiority of our method compared with other supervised and unsupervised approaches. Furthermore, satisfactory results on face generation show the versatility of our method in other deformation tasks.
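A generic illustration of a correlation field used for feature rearrangement — each target location attending to all source locations by feature similarity — is sketched below; it is attention-style warping in the spirit of the CMM, not its exact design.

```python
import torch
import torch.nn.functional as F

def correlate_and_rearrange(src_feat, tgt_feat):
    """Sketch of a spatial correlation field for feature rearrangement:
    every target location attends to all source locations by feature
    similarity, then gathers source features accordingly. Both inputs
    are (N, C, H, W) with matching spatial size.
    """
    n, c, h, w = src_feat.shape
    src = F.normalize(src_feat.flatten(2), dim=1)      # (N, C, HW)
    tgt = F.normalize(tgt_feat.flatten(2), dim=1)      # (N, C, HW)
    corr = torch.bmm(tgt.transpose(1, 2), src)         # (N, HW_t, HW_s)
    attn = torch.softmax(corr / (c ** 0.5), dim=-1)
    # Rearrange (unnormalized) source features into the target layout.
    out = torch.bmm(src_feat.flatten(2), attn.transpose(1, 2))
    return out.view(n, c, h, w)
```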
The controllable person image synthesis task enables a wide range of applications through explicit control over body pose and appearance. In this paper, we propose a cross-attention based style distribution module that computes attention between the source semantic styles and the target pose for pose transfer. The module deliberately selects the style represented by each semantic and distributes the styles according to the target pose. The attention matrix of the cross-attention expresses the dynamic similarity between the target pose and the source styles of all semantics. It can therefore be used to route color and texture from the source image, and is further constrained by the target parsing map toward a clearer objective. At the same time, self-attention among the different semantic styles is added to encode the source appearance accurately. The effectiveness of our model is validated quantitatively and qualitatively on pose-transfer and virtual try-on tasks.
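Under the assumption that pose features act as queries and per-semantic style vectors as keys and values, a minimal cross-attention style distribution module could look like this sketch; the dimensions and projections are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class StyleDistributionAttention(nn.Module):
    """Sketch of distributing per-semantic source styles over a target
    pose with cross-attention: pose features are queries, the K semantic
    style vectors are keys/values, so each target pixel picks the styles
    it needs. A simplified stand-in for the paper's module.
    """
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, pose_feat, styles):
        # pose_feat: (N, HW, dim) target-pose features (flattened pixels)
        # styles:    (N, K, dim) one style vector per source semantic
        attn = torch.softmax(
            self.q(pose_feat) @ self.k(styles).transpose(1, 2) * self.scale,
            dim=-1)                       # (N, HW, K) similarity to styles
        return attn @ self.v(styles)      # styles routed to target pixels
```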