Arbitrary style transfer (AST) transfers arbitrary artistic styles onto content images. Despite the recent rapid progress, existing AST methods are either incapable or too slow to run at ultra-resolutions (e.g., 4K) with limited resources, which heavily hinders their further applications. In this paper, we tackle this dilemma by learning a straightforward and lightweight model, dubbed MicroAST. The key insight is to completely abandon the use of cumbersome pre-trained Deep Convolutional Neural Networks (e.g., VGG) at inference. Instead, we design two micro encoders (content and style encoders) and one micro decoder for style transfer. The content encoder aims at extracting the main structure of the content image. The style encoder, coupled with a modulator, encodes the style image into learnable dual-modulation signals that modulate both intermediate features and convolutional filters of the decoder, thus injecting more sophisticated and flexible style signals to guide the stylizations. In addition, to boost the ability of the style encoder to extract more distinct and representative style signals, we also introduce a new style signal contrastive loss in our model. Compared to the state of the art, our MicroAST not only produces visually superior results but also is 5-73 times smaller and 6-18 times faster, for the first time enabling super-fast (about 0.5 seconds) AST at 4K ultra-resolutions. Code is available at https://github.com/EndyWon/MicroAST.
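The dual-modulation idea above can be illustrated with a minimal PyTorch sketch in which a style code rescales both the decoder's convolutional filters and the resulting intermediate features. The layer sizes and the exact modulation form are illustrative assumptions, not the MicroAST architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualModConv(nn.Module):
    """Toy decoder block: a style code modulates both the convolutional
    filters and the output feature map (sizes are illustrative)."""
    def __init__(self, in_ch, out_ch, style_dim):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.filter_mod = nn.Linear(style_dim, out_ch)  # per-filter scale
        self.feat_scale = nn.Linear(style_dim, out_ch)  # feature-wise scale
        self.feat_shift = nn.Linear(style_dim, out_ch)  # feature-wise shift

    def forward(self, x, style_code):
        # (a) modulate the convolutional filters with the style signal
        w = self.conv.weight * self.filter_mod(style_code).view(-1, 1, 1, 1)
        out = F.conv2d(x, w, self.conv.bias, padding=1)
        # (b) modulate the intermediate features (scale and shift)
        scale = self.feat_scale(style_code).view(1, -1, 1, 1)
        shift = self.feat_shift(style_code).view(1, -1, 1, 1)
        return out * (1 + scale) + shift

block = DualModConv(64, 64, style_dim=128)
out = block(torch.randn(1, 64, 32, 32), torch.randn(128))
```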
We propose an extremely simple ultra-resolution style transfer framework, termed URST, to flexibly handle arbitrary high-resolution images (e.g., 10000x10000 pixels) for the first time. Most existing state-of-the-art methods would fall short when processing ultra-high resolution images, due to the massive memory cost and small stroke size. URST completely avoids the memory problem caused by ultra-high resolution images by (1) dividing the image into small patches and (2) performing patch-wise style transfer with a novel Thumbnail Instance Normalization (TIN). Specifically, TIN extracts the normalization statistics from the thumbnail features and applies them to the small patches, ensuring the style consistency across different patches. Overall, the URST framework has three merits compared with prior art. (1) We divide the input image into small patches and adopt TIN, successfully transferring image styles at arbitrarily high resolutions. (2) Experiments show that our URST surpasses existing SOTA methods on ultra-high resolution images, benefiting from the proposed stroke perceptual loss in enlarging the stroke size. (3) Our URST can be easily plugged into most existing style transfer methods and directly improve their performance, even without training. Code is available at https://git.io/urst.
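A minimal sketch of the thumbnail-statistics idea behind TIN, assuming the shared statistics are simply the per-channel mean and standard deviation of the thumbnail features (function and variable names are ours, not the paper's):

```python
import torch
import torch.nn.functional as F

def thumbnail_instance_norm(patch_feat, thumb_feat, eps=1e-5):
    """Normalize patch features with per-channel statistics computed on the
    thumbnail features, so all patches of one image share the same
    statistics (a simplified reading of TIN)."""
    mu = thumb_feat.mean(dim=(2, 3), keepdim=True)
    std = thumb_feat.std(dim=(2, 3), keepdim=True) + eps
    return (patch_feat - mu) / std

# Usage: the statistics come from a downsampled (thumbnail) view of the image.
full_feat = torch.randn(1, 64, 512, 512)                 # features of a large image
thumb_feat = F.interpolate(full_feat, size=(128, 128))   # thumbnail view
patch_feat = full_feat[..., :256, :256]                  # one patch
normalized = thumbnail_instance_norm(patch_feat, thumb_feat)
```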
In this paper, we present texture reformer, a fast and universal neural-based framework for interactive texture transfer with user-specified guidance. The challenges lie in three aspects: 1) the diversity of tasks, 2) the simplicity of the guidance maps, and 3) the execution efficiency. To address these challenges, our key idea is to use a novel feed-forward multi-view and multi-stage synthesis procedure consisting of i) a global view structure alignment stage, ii) a local view texture refinement stage, and iii) an effect enhancement stage, which synthesizes high-quality results with coherent structures and fine texture details in a coarse-to-fine fashion. In addition, we also introduce a novel learning-free view-specific texture reformation (VSTR) operation with a new semantic map guidance strategy to achieve more accurate semantic-guided and structure-preserved texture transfer. Experimental results on a variety of application scenarios demonstrate the effectiveness and superiority of our framework. Compared with the state-of-the-art interactive texture transfer algorithms, it not only achieves higher-quality results but, even more remarkably, is also 2-5 orders of magnitude faster. Code is available at https://github.com/endywon/texture-reformer.
Recent studies have shown great success in universal style transfer, which transfers arbitrary visual styles onto content images. However, existing methods suffer from an aesthetic-unrealistic problem that introduces disharmonious patterns and evident artifacts, making the results easy to distinguish from real paintings. To address this limitation, we propose a novel aesthetic-enhanced style transfer method that can generate aesthetically more realistic and pleasing results for arbitrary styles. Specifically, our method introduces an aesthetic discriminator to learn universal human aesthetic features from a large corpus of artist-created paintings. The aesthetic features are then incorporated to enhance the style transfer process via a novel aesthetic-aware style-attention (AesSA) module. Such an AesSA module enables our AesUST to effectively and flexibly integrate style patterns according to the global aesthetic channel distribution of the style image and the local semantic spatial distribution of the content image. Moreover, we also develop a new two-stage transfer training strategy with two aesthetic regularizations to train our model more effectively, further improving the stylization performance. Extensive experiments and user studies demonstrate that our approach synthesizes aesthetically more harmonious and realistic results than the state of the art, greatly narrowing the gap to paintings created by real artists. Our code is available at https://github.com/endywon/aesust.
Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.
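For reference, the AdaIN operation described above reduces to re-scaling and re-shifting the channel-wise normalized content features with the style statistics; a compact PyTorch version:

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalization: align the per-channel mean and
    standard deviation of the content features with those of the style
    features (a minimal sketch of the AdaIN layer described above)."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

stylized = adain(torch.randn(1, 512, 32, 32), torch.randn(1, 512, 32, 32))
```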
Photo-realistic style transfer aims at migrating the artistic style from an exemplar style image to a content image, producing a result image without spatial distortions or unrealistic artifacts. Impressive results have been achieved by recent deep models. However, deep neural network based methods are too expensive to run in real-time. Meanwhile, bilateral grid based methods are much faster but still contain artifacts like overexposure. In this work, we propose the \textbf{Adaptive ColorMLP (AdaCM)}, an effective and efficient framework for universal photo-realistic style transfer. First, we find that the complex non-linear color mapping between the input and target domains can be efficiently modeled by a small multi-layer perceptron (ColorMLP) model. Then, in \textbf{AdaCM}, we adopt a CNN encoder to adaptively predict all parameters for the ColorMLP conditioned on each input content and style image pair. Experimental results demonstrate that AdaCM can generate vivid and high-quality stylization results. Meanwhile, our AdaCM is ultrafast and can process a 4K resolution image in 6ms on one V100 GPU.
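A rough sketch of the per-pixel ColorMLP idea: a tiny MLP operating on RGB values whose weights are packed into a single vector that a CNN encoder could predict. The layer sizes below are assumptions, not AdaCM's actual configuration.

```python
import torch

def apply_color_mlp(image, params, hidden=8):
    """Apply a tiny per-pixel MLP (3 -> hidden -> 3) to an RGB image, with
    all weights packed in one parameter vector per image (illustrative)."""
    b, _, h, w = image.shape
    px = image.permute(0, 2, 3, 1).reshape(b, -1, 3)         # (B, H*W, 3)
    i = 0
    w1 = params[:, i:i + 3 * hidden].view(b, 3, hidden); i += 3 * hidden
    b1 = params[:, i:i + hidden].view(b, 1, hidden);      i += hidden
    w2 = params[:, i:i + hidden * 3].view(b, hidden, 3);  i += hidden * 3
    b2 = params[:, i:i + 3].view(b, 1, 3)
    out = torch.relu(px @ w1 + b1) @ w2 + b2
    return out.reshape(b, h, w, 3).permute(0, 3, 1, 2)

n_params = 3 * 8 + 8 + 8 * 3 + 3                 # 59 parameters per image
image = torch.rand(1, 3, 256, 256)
params = torch.randn(1, n_params)                # stand-in for an encoder output
recolored = apply_color_mlp(image, params)
```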
Style transfer has attracted a great deal of attention because it can change a given image into an impressive artistic style while preserving the image structure. However, conventional methods easily lose image details and tend to produce unpleasant artifacts during style transfer. In this paper, to address these problems, a novel artistic stylization method with a target feature palette is proposed, which can transfer key features accurately. Specifically, our method contains two modules, namely the feature palette composition (FPC) and attention coloring (AC) modules. The FPC module captures representative features based on K-means clustering and generates a target feature palette. The subsequent AC module computes attention maps between the content and style images, and transfers colors and patterns according to the attention maps and the target palette. These modules enable the proposed stylization to focus on key features and generate plausibly transferred images. The contributions of this work are thus to propose a new deep-learning-based style transfer method together with the target feature palette and attention coloring modules, and to provide an in-depth analysis of and insight into the proposed method through exhaustive ablation studies. Qualitative and quantitative results show that our stylized images achieve state-of-the-art performance while preserving the core structure and details of the content image.
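The feature-palette composition step can be approximated by running k-means over style feature vectors; the sketch below is a generic version with an assumed cluster count and random initialization, not the paper's exact FPC module.

```python
import torch

def feature_palette(style_feat, k=8, iters=10):
    """Cluster style feature vectors with k-means to obtain k representative
    'palette' vectors (a simplified reading of the FPC idea)."""
    c, h, w = style_feat.shape
    pts = style_feat.reshape(c, h * w).t()             # (N, C) feature vectors
    centers = pts[torch.randperm(pts.size(0))[:k]]     # random initialization
    for _ in range(iters):
        assign = torch.cdist(pts, centers).argmin(dim=1)
        for j in range(k):
            members = pts[assign == j]
            if len(members) > 0:
                centers[j] = members.mean(dim=0)
    return centers                                     # (k, C) palette

palette = feature_palette(torch.randn(256, 32, 32), k=8)
```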
Photorealistic style transfer aims to transfer the artistic style of an image onto an input image or video while keeping photorealism. In this paper, we argue that it is the summary-statistics matching scheme in existing algorithms that leads to unrealistic stylization. To avoid employing the popular Gram loss, we propose a self-supervised style transfer framework, which contains a style removal part and a style restoration part. The style removal network removes the original image styles, and the style restoration network recovers image styles in a supervised manner. Meanwhile, to address the problems in current feature transformation methods, we propose decoupled instance normalization to decompose feature transformation into style whitening and restylization. It works quite well in ColoristaNet and can transfer image styles efficiently while keeping photorealism. To ensure temporal coherency, we also incorporate optical flow methods and ConvLSTM to embed contextual information. Experiments demonstrate that ColoristaNet can achieve better stylization effects when compared with state-of-the-art algorithms.
In this paper, we aim to devise a universally versatile style transfer method capable of performing artistic, photo-realistic, and video style transfer jointly, without seeing videos during training. Previous single-frame methods impose a strong constraint on the entire image to maintain temporal consistency, which can be violated in many cases. Instead, we make a mild and reasonable assumption that global inconsistency is dominated by local inconsistencies, and devise a generic Contrastive Coherence Preserving Loss (CCPL) applied to local patches. CCPL can preserve the coherence of the content source during style transfer without degrading stylization. Moreover, it owns a neighbor-regulating mechanism, resulting in a large reduction of local distortions and a considerable improvement in visual quality. Besides its superior performance on versatile style transfer, it can easily be extended to other tasks, such as image-to-image translation. Furthermore, to better fuse content and style features, we propose Simple Covariance Transformation (SCT) to effectively align the second-order statistics of the content features with the style features. Experiments demonstrate the effectiveness of the resulting model for versatile style transfer when armed with CCPL.
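A generic second-order alignment in the spirit of SCT can be written as whitening the content feature covariance and re-coloring it with the style covariance. The sketch below uses a standard eigendecomposition-based transform and is not the paper's exact formulation.

```python
import torch

def covariance_align(content_feat, style_feat, eps=1e-5):
    """Align the channel covariance (and mean) of content features with the
    style features via whitening and re-coloring (illustrative only)."""
    b, c, h, w = content_feat.shape
    out = []
    for cf, sf in zip(content_feat, style_feat):
        cf = cf.reshape(c, -1); sf = sf.reshape(c, -1)
        cf = cf - cf.mean(dim=1, keepdim=True)
        sf_mean = sf.mean(dim=1, keepdim=True)
        sf = sf - sf_mean
        cov_c = cf @ cf.t() / (cf.size(1) - 1) + eps * torch.eye(c)
        cov_s = sf @ sf.t() / (sf.size(1) - 1) + eps * torch.eye(c)
        # matrix square roots via eigendecomposition
        ec, vc = torch.linalg.eigh(cov_c)
        es, vs = torch.linalg.eigh(cov_s)
        whiten = vc @ torch.diag(ec.clamp_min(eps).rsqrt()) @ vc.t()
        color = vs @ torch.diag(es.clamp_min(eps).sqrt()) @ vs.t()
        out.append((color @ whiten @ cf + sf_mean).reshape(c, h, w))
    return torch.stack(out)

aligned = covariance_align(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```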
Recent techniques for photorealistic style transfer that build on deep convolutional neural networks (CNNs) typically require intensive training on large-scale datasets, and thus have limited applicability and poor generalization ability to unseen images or styles. To overcome this, we propose a novel framework, dubbed Deep Translation Prior (DTP), that accomplishes photorealistic style transfer through test-time training on a given input image pair with an untrained network, which learns an image-pair-specific translation and thus yields better performance and generalization. Tailored to such test-time training for style transfer, we present a novel network architecture with two sub-modules, a correspondence module and a generation module, and a loss function composed of contrastive content, style, and cycle consistency losses. Our framework does not require an offline training phase for style transfer, which has been one of the main challenges of existing methods; the networks are instead learned solely at test time. Experimental results demonstrate that our framework has better generalization ability to unseen image pairs and even outperforms state-of-the-art methods.
Arbitrary style transfer generates an artistic image that combines the structure of a content image with an artistic style, using only a single trained network. The image representation used in such methods contains a content structure representation and a style pattern representation, which are usually the high-level feature representations of a pre-trained classification network. However, traditional classification networks were designed for classification, and therefore tend to focus on high-level features while ignoring other features. As a result, the stylized image distributes style elements evenly across the whole image and makes the overall image structure unrecognizable. To address this problem, we introduce a novel arbitrary style transfer method with structure enhancement, combining global and local losses. The local structure details are represented by LapStyle, and the global structure is controlled by image depth. Experimental results show that, compared with other state-of-the-art methods, our method can generate higher-quality images with impressive visual effects on several common datasets.
Existing neural style transfer methods require a reference style image in order to transfer the texture information of the style image to a content image. In many practical situations, however, users may not have a reference style image at hand but still be interested in transferring a style by imagining it. To handle such applications, we propose a new framework that enables style transfer 'without' a style image, using only a text description of the desired style. Using CLIP, a pre-trained text-image embedding model, we demonstrate the modulation of the style of content images with only a single text condition. Specifically, we propose a patch-wise text-image matching loss with multi-view augmentations for realistic texture transfer. Extensive experimental results confirm successful image style transfer with realistic textures that reflect the semantic query texts.
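A simplified sketch of a patch-wise text-image matching loss using the public OpenAI CLIP package: random augmented patches of the stylized image are pulled toward the text embedding. The crop sizes, augmentations, and plain cosine objective are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import clip  # OpenAI CLIP package (pip install git+https://github.com/openai/CLIP.git)
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.1, 0.5)),   # random patches
    transforms.RandomPerspective(distortion_scale=0.5, p=1.0),
])

def patch_text_matching_loss(stylized, text, n_patches=16):
    """Embed random augmented patches of the stylized image with CLIP and
    pull them toward the text embedding (CLIP input normalization omitted
    for brevity)."""
    text_emb = model.encode_text(clip.tokenize([text]).to(device))
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    patches = torch.stack([augment(stylized[0]) for _ in range(n_patches)])
    img_emb = model.encode_image(patches)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    return (1 - img_emb @ text_emb.t()).mean()

loss = patch_text_matching_loss(torch.rand(1, 3, 512, 512, device=device),
                                "an oil painting of a landscape")
```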
This paper aims to conduct a comprehensive study of the facial sketch synthesis (FSS) problem. However, owing to the high cost of obtaining hand-drawn sketch datasets, there is a lack of a complete benchmark for assessing the development of FSS algorithms over the last decade. We therefore first introduce a high-quality dataset for FSS, named FS2K, which consists of 2,104 image-sketch pairs spanning three types of sketch styles, image backgrounds, lighting conditions, skin colors, and facial attributes. FS2K differs from previous FSS datasets in difficulty, diversity, and scalability, and should therefore facilitate progress in FSS research. Second, we survey 139 classical methods, including 34 handcrafted-feature-based facial sketch synthesis methods, 37 general neural style transfer methods, 43 deep image-to-image translation methods, and 35 image-to-sketch methods, and additionally carry out comprehensive experiments on 19 existing cutting-edge models. Third, we present a simple baseline for FSS, named FSGAN. With only two straightforward components, i.e., facial-aware masking and style-vector expansion, FSGAN surpasses the performance of all previous state-of-the-art models on the proposed FS2K dataset by a large margin. Finally, we conclude with the lessons learned over the past years and point out several remaining challenges. Our open-source code is available at https://github.com/dengpingfan/fsgan.
As a powerful representation of 3D scenes, the neural radiance field (NeRF) enables high-quality novel view synthesis from multi-view images. Stylizing NeRF, however, remains challenging, especially on simulating a text-guided style with both the appearance and the geometry altered simultaneously. In this paper, we present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt. Unlike previous approaches that either lack sufficient geometry deformations and texture details or require meshes to guide the stylization, our method can shift a 3D scene to the target style characterized by desired geometry and appearance variations without any mesh guidance. This is achieved by introducing a novel global-local contrastive learning strategy, combined with the directional constraint to simultaneously control both the trajectory and the strength of the target style. Moreover, we adopt a weight regularization method to effectively suppress cloudy artifacts and geometry noises which arise easily when the density field is transformed during geometry stylization. Through extensive experiments on various styles, we demonstrate that our method is effective and robust regarding both single-view stylization quality and cross-view consistency. The code and more results can be found in our project page: https://cassiepython.github.io/nerfart/.
Image harmonization aims to produce visually harmonious composite images by adjusting the foreground appearance to be compatible with the background. When the composite image has a photographic foreground and a painterly background, the task is called painterly image harmonization. There are only a few works on this task, which are either time-consuming or weak in generating well-harmonized results. In this work, we propose a novel painterly harmonization network consisting of a dual-domain generator and a dual-domain discriminator, which harmonizes the composite image in both the spatial domain and the frequency domain. The dual-domain generator performs harmonization by using AdaIN modules in the spatial domain and our proposed ResFFT modules in the frequency domain. The dual-domain discriminator attempts to distinguish the inharmonious patches based on the spatial and frequency features of each patch, which can enhance the ability of the generator in an adversarial manner. Extensive experiments on the benchmark dataset show the effectiveness of our method. Our code and model are available at https://github.com/bcmi/PHDNet-Painterly-Image-Harmonization.
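A toy frequency-domain residual block in the spirit of the ResFFT idea: features are moved to the Fourier domain, mixed with 1x1 convolutions on the real and imaginary parts, and transformed back. This generic construction is our assumption, as the abstract does not specify the module's internals.

```python
import torch
import torch.nn as nn

class FFTResidualBlock(nn.Module):
    """Toy frequency-domain residual block (illustrative, not PHDNet's
    actual ResFFT module)."""
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, 1),
        )

    def forward(self, x):
        _, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")        # complex spectrum
        z = torch.cat([spec.real, spec.imag], dim=1)   # to real channels
        z = self.mix(z)
        spec = torch.complex(z[:, :c], z[:, c:])
        return x + torch.fft.irfft2(spec, s=(h, w), norm="ortho")

out = FFTResidualBlock(64)(torch.randn(1, 64, 32, 32))
```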
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to simultaneously maintain a good trade-off between the content details and the style features. When the image is stylized with sufficient style patterns, the content details may be damaged and sometimes the objects in the image can no longer be distinguished clearly. For this reason, we present STT, a new transformer-based method for image style transfer, together with an edge loss that noticeably enhances the content details and avoids blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves comparable performance to state-of-the-art image style transfer methods while alleviating the content leak problem.
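One plausible form of such an edge loss compares edge maps of the stylized and content images; the Sobel operator used below is an assumption, since the abstract does not name the edge detector.

```python
import torch
import torch.nn.functional as F

def edge_loss(stylized, content):
    """Penalize differences between the edge maps of the stylized and
    content images (a minimal sketch using Sobel filters)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    kernel = torch.stack([kx, ky]).unsqueeze(1)          # (2, 1, 3, 3)

    def edges(img):
        gray = img.mean(dim=1, keepdim=True)             # simple grayscale
        g = F.conv2d(gray, kernel, padding=1)            # gradients gx, gy
        return torch.sqrt((g ** 2).sum(dim=1) + 1e-6)    # edge magnitude

    return F.l1_loss(edges(stylized), edges(content))

loss = edge_loss(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))
```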
Image-based artistic rendering can synthesize a variety of expressive styles using algorithmic image filtering. In contrast to deep-learning-based methods, these heuristics-based filtering techniques can operate on high-resolution images, are interpretable, and can be parameterized according to individual design aspects. However, adapting or extending these techniques to produce new styles is often a tedious and error-prone task that requires expert knowledge. We propose a new paradigm to alleviate this problem: implementing algorithmic image filtering techniques as differentiable operations that can learn parameterizations aligned with certain reference styles. To this end, we present WISE, an example-based image-processing system that can handle a multitude of stylization techniques, such as watercolor, oil, or cartoon stylization, within a common framework. By training parameter prediction networks for global and local filter parameterizations, we can simultaneously adapt effects to a reference style and to the image content, for example to enhance facial features. Our method can be optimized in a style transfer framework or learned in a generative adversarial setting for image-to-image translation. We demonstrate that jointly training an XDoG filter and a CNN for post-processing can achieve results comparable to state-of-the-art GAN-based methods.
Classification models trained on biased datasets usually perform poorly on out-of-distribution samples, since the biased representations are embedded into the model. Recently, various debiasing methods have been proposed to disentangle biased representations, but it is challenging to discard only the biased features without altering other relevant information. In this paper, we propose a novel augmentation method that explicitly generates additional images using the texture representations of differently labeled images, in order to enlarge the training dataset and mitigate the bias effect when training a classifier. Each newly generated image contains similar content information from a source image while transferring the texture from a target image with a different label. Our model includes a texture co-occurrence loss, which determines whether the texture of a generated image is similar to that of the target, and a spatial self-similarity loss, which determines whether the content details between the generated and source images are preserved. Both the generated and the original training images are further used to train a classifier that is more robust against biased representations. We use five artificially designed datasets with known biases to demonstrate the ability of our method to mitigate bias information. For all cases, our method outperforms existing state-of-the-art methods. Code is available at: https://github.com/myeongkyunkang/i2i4debias
Neural style transfer is a deep learning technique that produces an unprecedentedly rich style transfer from a style image to a content image and is particularly impressive when it comes to transferring style from a painting to an image. It was originally achieved by solving an optimization problem to match the global style statistics of the style image while preserving the local geometric features of the content image. The two main drawbacks of this original approach are that it is computationally expensive and that the resolution of the output images is limited by high GPU memory requirements. Many solutions have been proposed to both accelerate neural style transfer and increase its resolution, but they all compromise the quality of the produced images. Indeed, transferring the style of a painting is a complex task involving features at different scales, from the color palette and compositional style to the fine brushstrokes and texture of the canvas. This paper provides a solution to solve the original global optimization for ultra-high resolution images, enabling multiscale style transfer at unprecedented image sizes. This is achieved by spatially localizing the computation of each forward and backward pass through the VGG network. Extensive qualitative and quantitative comparisons show that our method produces a style transfer of unmatched quality for such high resolution painting styles.
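The spatial-localization idea can be illustrated by accumulating global style statistics (e.g., a Gram matrix) tile by tile, so the full high-resolution feature map never resides in memory at once. The sketch below ignores the receptive-field overlap at tile borders that the full method would have to handle, and uses randomly initialized VGG weights purely for illustration.

```python
import torch
import torchvision.models as models

vgg = models.vgg19(weights=None).features[:12].eval()  # truncated VGG (random
                                                       # weights; real use would
                                                       # load the pretrained model)

def tiled_gram(image, tile=512):
    """Accumulate a global Gram matrix over spatial tiles of a large image."""
    gram, count = None, 0
    _, _, H, W = image.shape
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            with torch.no_grad():
                f = vgg(image[:, :, y:y + tile, x:x + tile])
            c = f.shape[1]
            f = f.reshape(c, -1)
            g = f @ f.t()
            gram = g if gram is None else gram + g
            count += f.shape[1]
    return gram / count

g = tiled_gram(torch.rand(1, 3, 1024, 1024))
```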
Arbitrary style transfer is a technique used to produce a new image from two images: a content image and a style image. The newly produced image is unseen and is generated by the algorithm itself. Balancing the structure and style components has been the major challenge that other state-of-the-art algorithms have tried to solve. Despite all these efforts, it remains a major challenge to apply an artistic style on top of the structure of the content image while maintaining consistency. In this work, we address these problems with a deep learning approach based on convolutional neural networks. Our implementation first extracts the foreground from the background of the content image using a pre-trained Detectron2 model, and then applies the arbitrary style transfer technique used in SANet. Once we have the two styled image parts, we stitch them back together after style transfer to obtain the complete final piece.