Despite breakthrough advances in image super-resolution (SR) with convolutional neural networks (CNNs), SR has yet to enjoy ubiquitous applications due to the high computational complexity of SR networks. Quantization is one of the promising approaches to solve this problem. However, existing methods fail to quantize SR models with a bit-width lower than 8 bits, suffering from severe accuracy loss due to a fixed bit-width applied to the whole network. In this work, to achieve high average bit-reduction with less accuracy loss, we propose a novel Content-Aware Dynamic Quantization (CADyQ) method for SR networks that allocates optimal bits to local regions and layers adaptively based on the local contents of an input image. To this end, a trainable bit selector module is introduced to determine the proper bit-width and quantization level for each layer and a given local image patch. This module is governed by the quantization sensitivity, which is estimated using the average magnitude of the image gradient of the patch and the standard deviation of the input feature of the layer. The proposed quantization pipeline has been tested on various SR networks and evaluated extensively on several standard benchmarks. The significant reduction in computational complexity and the elevated restoration accuracy clearly demonstrate the effectiveness of the proposed CADyQ framework for SR. Code is available at https://github.com/cheeun/cadyq.
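To make the bit-selection mechanism concrete, here is a minimal sketch of how the quantization sensitivity described above could be computed from a patch and a layer's input feature. The function name, the finite-difference gradient, and the plain sum used to combine the two statistics are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def quantization_sensitivity(patch: torch.Tensor, feature: torch.Tensor) -> torch.Tensor:
    """Sketch of the sensitivity proxy described in the abstract: the average
    magnitude of the image gradient of a patch combined with the standard
    deviation of a layer's input feature. The combination (a plain sum) is an
    illustrative assumption.

    patch:   (B, C, H, W) local image patch
    feature: (B, C', H', W') input feature of the current layer
    """
    # Finite-difference image gradients along height and width.
    dy = patch[..., 1:, :] - patch[..., :-1, :]
    dx = patch[..., :, 1:] - patch[..., :, :-1]
    grad_mag = dy.abs().mean(dim=(1, 2, 3)) + dx.abs().mean(dim=(1, 2, 3))
    # Standard deviation of the layer's input feature, per sample.
    feat_std = feature.flatten(1).std(dim=1)
    return grad_mag + feat_std  # higher value -> assign a wider bit-width
```

A trainable bit selector would then map this per-sample score to a discrete bit-width for the patch at that layer.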
Image restoration tasks have witnessed great performance improvement in recent years through the development of large deep models. Despite their outstanding performance, the heavy computation demanded by deep models has restricted the application of image restoration. To lift this restriction, the size of the networks must be reduced while maintaining accuracy. Recently, N:M structured pruning has emerged as one of the effective and practical pruning approaches for making a model efficient under an accuracy constraint. However, it fails to account for the different computational complexities and performance requirements of different layers in an image restoration network. To further optimize the trade-off between efficiency and restoration accuracy, we propose a novel pruning method that determines the pruning ratio for N:M structured sparsity at each layer. Extensive experimental results on super-resolution and deblurring tasks demonstrate the efficacy of our method, which outperforms previous pruning methods. A PyTorch implementation of the proposed method will be publicly available at https://github.com/junghunoh/sls_cvpr2022.
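As a reference point for what N:M structured sparsity means in practice, here is a small sketch that applies a magnitude-based N:M mask to a 2D weight matrix. This is a generic illustration only; the paper's contribution, choosing the ratio per layer, is not reproduced here.

```python
import torch

def nm_prune(weight: torch.Tensor, n: int, m: int) -> torch.Tensor:
    """Keep the n largest-magnitude weights in every group of m consecutive
    weights along the input dimension (N:M structured sparsity)."""
    out_features, in_features = weight.shape
    assert in_features % m == 0
    groups = weight.abs().reshape(out_features, in_features // m, m)
    # Zero the (m - n) smallest entries in each group.
    idx = groups.topk(m - n, dim=-1, largest=False).indices
    mask = torch.ones_like(groups).scatter_(-1, idx, 0.0)
    return weight * mask.reshape(out_features, in_features)

# Example: 2:4 sparsity on a linear layer's weight.
w = torch.randn(8, 16)
w_pruned = nm_prune(w, n=2, m=4)
```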
Data building for automatic post-editing (APE) requires extensive and expert-level human effort, as it involves an elaborate process of identifying errors in sentences and providing suitable revisions. Hence, we develop a self-supervised data generation tool, deployable as a web application, which minimizes human supervision and constructs personalized APE data from parallel corpora for multiple language pairs with English as the target language. This tool enables data-centric APE research for many language pairs that have not been studied so far owing to the lack of suitable data.
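The abstract does not spell out the generation pipeline, but a common self-supervised recipe for APE data is sketched below: translate the source side of a parallel corpus with an MT system and treat the human reference as the post-edit. The names here, including the `translate` callback, are hypothetical placeholders, not the tool's actual API.

```python
from typing import Callable, Iterable, List, Tuple

def build_ape_triplets(
    parallel_corpus: Iterable[Tuple[str, str]],
    translate: Callable[[str], str],
) -> List[Tuple[str, str, str]]:
    """Sketch of self-supervised APE data generation from a parallel corpus.
    Each (source, reference) pair yields a (source, mt_output, post_edit)
    triplet, with the human reference standing in for the post-edit.
    `translate` is a placeholder for an arbitrary MT backend; the tool's
    actual pipeline and filtering steps are not specified in the abstract."""
    triplets = []
    for source, reference in parallel_corpus:
        mt_output = translate(source)
        if mt_output.strip() != reference.strip():  # skip pairs with nothing to edit
            triplets.append((source, mt_output, reference))
    return triplets
```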
Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) whose performance exceeds that of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules from conventional residual networks. The performance is further improved by expanding the model size while stabilizing the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images at different upscaling factors in a single model. The proposed methods show superior performance over state-of-the-art methods on benchmark datasets and prove their excellence by winning the NTIRE 2017 Super-Resolution Challenge [26].
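A widely known instance of "removing unnecessary modules" in EDSR is dropping batch normalization from the residual block. The sketch below shows such a block with a residual scaling factor of the kind used to stabilize training of enlarged models; the channel width and the 0.1 scale are illustrative defaults, not the paper's only configuration.

```python
import torch
import torch.nn as nn

class EDSRResBlock(nn.Module):
    """EDSR-style residual block: conv-ReLU-conv with no batch normalization
    and a residual scaling factor applied before the skip addition."""
    def __init__(self, channels: int = 64, res_scale: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.res_scale * self.body(x)
```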
Non-uniform blind deblurring for general dynamic scenes is a challenging computer vision problem, as blurs arise not only from multiple object motions but also from camera shake and scene depth variation. To remove these complicated motion blurs, conventional energy-optimization-based methods rely on simple assumptions, such as the blur kernel being partially uniform or locally linear. Moreover, recent machine-learning-based methods also depend on synthetic blur datasets generated under these assumptions. Consequently, conventional deblurring methods fail to remove blurs whose kernels are difficult to approximate or parameterize (e.g., at object motion boundaries). In this work, we propose a multi-scale convolutional neural network that restores sharp images in an end-to-end manner where blur is caused by various sources. Together, we present a multi-scale loss function that mimics conventional coarse-to-fine approaches. Furthermore, we propose a new large-scale dataset that provides pairs of realistic blurry images and the corresponding ground-truth sharp images obtained with a high-speed camera. With the proposed model trained on this dataset, we demonstrate empirically that our method achieves state-of-the-art performance in dynamic scene deblurring, not only qualitatively but also quantitatively.
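The multi-scale loss can be made concrete with a short sketch: each scale's output is compared against the ground-truth sharp image downsampled to the matching resolution. The use of MSE and uniform per-scale weights are assumptions for illustration, not necessarily the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def multiscale_loss(outputs, sharp):
    """Multi-scale content loss in the spirit of the abstract: each scale's
    prediction is compared against the ground-truth sharp image downsampled
    to the same resolution.

    outputs: list of predictions, e.g. [(B,3,H,W), (B,3,H/2,W/2), ...]
    sharp:   (B, 3, H, W) ground-truth sharp image
    """
    loss = 0.0
    for pred in outputs:
        target = F.interpolate(sharp, size=pred.shape[-2:],
                               mode="bilinear", align_corners=False)
        loss = loss + F.mse_loss(pred, target)
    return loss / len(outputs)
```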
Large-scale diffusion-based generative models have led to breakthroughs in text-conditioned high-resolution image synthesis. Starting from random noise, such text-to-image diffusion models gradually synthesize images in an iterative fashion while conditioning on text prompts. We find that their synthesis behavior qualitatively changes throughout this process: Early in sampling, generation strongly relies on the text prompt to generate text-aligned content, while later, the text conditioning is almost entirely ignored. This suggests that sharing model parameters throughout the entire generation process may not be ideal. Therefore, in contrast to existing works, we propose to train an ensemble of text-to-image diffusion models specialized for different synthesis stages. To maintain training efficiency, we initially train a single model, which is then split into specialized models that are trained for the specific stages of the iterative generation process. Our ensemble of diffusion models, called eDiff-I, results in improved text alignment while maintaining the same inference computation cost and preserving high visual quality, outperforming previous large-scale text-to-image diffusion models on the standard benchmark. In addition, we train our model to exploit a variety of embeddings for conditioning, including the T5 text, CLIP text, and CLIP image embeddings. We show that these different embeddings lead to different behaviors. Notably, the CLIP image embedding allows an intuitive way of transferring the style of a reference image to the target text-to-image output. Lastly, we show a technique that enables eDiff-I's "paint-with-words" capability. A user can select a word in the input text and paint it on a canvas to control the output, which is very handy for crafting the image one has in mind. The project page is available at https://deepimagination.cc/eDiff-I/
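A minimal sketch of the stage-specialized ensemble idea follows: route each sampling step to the expert trained for that noise interval, so inference still runs exactly one denoiser per step. The two-expert split and the 0.5 threshold are illustrative assumptions; eDiff-I's actual stage boundaries are not given in the abstract.

```python
import torch

def denoise_with_ensemble(x_t: torch.Tensor, t: float, text_emb: torch.Tensor,
                          experts: dict) -> torch.Tensor:
    """Route a sampling step to a stage-specialized denoiser: early,
    high-noise steps (where generation relies heavily on the text prompt)
    go to one expert, late steps to another. Inference cost is unchanged
    because exactly one expert runs per step."""
    expert = experts["early"] if t > 0.5 else experts["late"]  # t in [0, 1], 1 = pure noise
    return expert(x_t, t, text_emb)

# Usage with placeholder experts (stand-ins for the specialized diffusion models):
experts = {"early": lambda x, t, c: x * 0.9, "late": lambda x, t, c: x * 0.99}
x = denoise_with_ensemble(torch.randn(1, 3, 64, 64), t=0.8,
                          text_emb=torch.zeros(1, 77, 768), experts=experts)
```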
Recently, the manipulation of real-world images has been highly elaborated along with the development of Generative Adversarial Networks (GANs) and corresponding encoders, which embed real-world images into the latent space. However, designing encoders for GANs remains a challenging task due to the trade-off between distortion and perception. In this paper, we point out that existing encoders try to lower the distortion not only in the region of interest, e.g., the human facial region, but also in uninterest regions, e.g., background patterns and obstacles. However, most uninterest regions in real-world images are out-of-distribution (OOD) and infeasible to reconstruct ideally with a generative model. Moreover, we empirically find that an uninterest region overlapping the interest region can mangle the original features of the interest region; e.g., a microphone overlapping a facial region is inverted into a white beard. As a result, lowering the distortion of the whole image while maintaining perceptual quality is very challenging. To overcome this trade-off, we propose a simple yet effective encoder training scheme, coined IntereStyle, which facilitates encoding by focusing on the interest region. IntereStyle steers the encoder to disentangle the encodings of the interest and uninterest regions. To this end, we filter the information of the uninterest region to regulate its negative impact. We demonstrate that IntereStyle achieves both lower distortion and higher perceptual quality compared to existing state-of-the-art encoders. In particular, our model robustly conserves the features of the original image, which shows in its strong image editing and style mixing results. We will release our code with the pre-trained model after review.
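One way to read "focusing on the interest region" is as a mask-weighted distortion objective; the sketch below down-weights reconstruction error outside an interest mask. The weighting scheme is an assumption for illustration, not IntereStyle's actual training formulation.

```python
import torch
import torch.nn.functional as F

def interest_region_loss(recon: torch.Tensor, target: torch.Tensor,
                         interest_mask: torch.Tensor,
                         uninterest_weight: float = 0.1) -> torch.Tensor:
    """Sketch of an interest-focused distortion loss: reconstruction error
    inside the interest region (e.g. a face mask) counts in full, while the
    uninterest region is down-weighted so OOD background content does not
    dominate training.

    interest_mask: (B, 1, H, W) with 1 inside the interest region."""
    err = F.mse_loss(recon, target, reduction="none")          # (B, C, H, W)
    weights = interest_mask + uninterest_weight * (1.0 - interest_mask)
    return (err * weights).mean()
```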
Real-world knowledge graphs (KGs) are mostly incomplete. The problem of recovering missing relations, called KG completion, has recently become an active research area. Knowledge graph (KG) embedding, a low-dimensional representation of entities and relations, is the crucial technique for KG completion. Convolutional neural networks in models such as ConvE, SACN, InteractE, and RGCN have achieved recent successes. This paper takes a different architectural view and proposes ComDensE, which combines relation-aware and common features using dense neural networks. In the relation-aware feature extraction, we attempt to create a relational inductive bias by applying an encoding function specific to each relation. In the common feature extraction, we apply a common encoding function to all input embeddings. These encoding functions are implemented using dense layers. Compared to previous baseline approaches, ComDensE achieves state-of-the-art link prediction performance in terms of MRR and Hit@1 on FB15k-237 and Hit@1 on WN18RR. We conduct an extensive ablation study to examine the effects of the relation-aware layer and the common layer of ComDensE. Experimental results show that the combined dense architecture implemented in ComDensE achieves the best performance.
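The combined dense architecture can be sketched directly from the description above: one dense encoding function per relation plus a shared common one, with their outputs combined. Dimensions, activation, and the use of concatenation are illustrative assumptions, not ComDensE's exact hyperparameters.

```python
import torch
import torch.nn as nn

class ComDensELayer(nn.Module):
    """Sketch of the combined dense architecture: a dense encoding function
    specific to each relation (relation-aware feature extraction) plus a
    single dense encoding shared by all inputs (common feature extraction),
    with the two feature vectors concatenated for a downstream scoring head."""
    def __init__(self, num_relations: int, emb_dim: int, hidden_dim: int):
        super().__init__()
        # One encoding function per relation.
        self.relation_aware = nn.ModuleList(
            [nn.Linear(2 * emb_dim, hidden_dim) for _ in range(num_relations)]
        )
        # One common encoding function applied to all input embeddings.
        self.common = nn.Linear(2 * emb_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, head_emb: torch.Tensor, rel_emb: torch.Tensor, rel_id: int):
        x = torch.cat([head_emb, rel_emb], dim=-1)        # (B, 2*emb_dim)
        rel_feat = self.act(self.relation_aware[rel_id](x))
        common_feat = self.act(self.common(x))
        return torch.cat([rel_feat, common_feat], dim=-1)
```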
3D-aware image synthesis focuses on preserving spatial consistency in addition to generating high-resolution images with fine details. Recently, Neural Radiance Field (NeRF) has been introduced for synthesizing novel views with low computational cost and superior performance. While several works investigate a generative NeRF and show remarkable achievements, they cannot handle conditional and continuous feature manipulation in the generation procedure. In this work, we introduce a novel model, called Class-Continuous Conditional Generative NeRF ($\text{C}^{3}$G-NeRF), which can synthesize conditionally manipulated photorealistic 3D-consistent images by projecting conditional features to the generator and the discriminator. The proposed $\text{C}^{3}$G-NeRF is evaluated with three image datasets: AFHQ, CelebA, and Cars. As a result, our model shows strong 3D consistency with fine details and smooth interpolation in conditional feature manipulation. For instance, $\text{C}^{3}$G-NeRF exhibits a Fr\'echet Inception Distance (FID) of 7.64 in 3D-aware face image synthesis at a $\text{128}^{2}$ resolution. Additionally, we provide FIDs of generated 3D-aware images for each class of the datasets, as it is possible to synthesize class-conditional images with $\text{C}^{3}$G-NeRF.
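A minimal sketch of class-continuous conditioning: interpolate between class vectors with a continuous weight and project the result into a conditional feature that could be fed to both the generator and the discriminator. The interpolation scheme and all names are assumptions, not the paper's exact mechanism.

```python
import torch
import torch.nn as nn

class ConditionProjector(nn.Module):
    """Sketch of projecting class-continuous conditional features into the
    generator and discriminator. Mixing two one-hot class vectors with a
    continuous weight is an illustrative reading of 'class-continuous'."""
    def __init__(self, num_classes: int, cond_dim: int):
        super().__init__()
        self.proj = nn.Linear(num_classes, cond_dim)

    def forward(self, class_a: torch.Tensor, class_b: torch.Tensor,
                alpha: torch.Tensor) -> torch.Tensor:
        # Continuous interpolation between class vectors enables smooth
        # conditional feature manipulation, e.g. morphing between classes.
        mixed = alpha * class_a + (1.0 - alpha) * class_b  # (B, num_classes)
        return self.proj(mixed)  # conditional feature fed to G and D
```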
Cellular automata (CA) captivate researchers due to the emergent, complex, individualized behavior that simple global rules of interaction enact. Recent advances in the field have combined CA with convolutional neural networks to achieve self-regenerating images. This new branch of CA is called neural cellular automata [1]. The goal of this project is to use the idea of neural cellular automata to grow prediction machines. We place many different convolutional neural networks in a grid. Each conv-net cell outputs a prediction of what the next state will be and minimizes predictive error. Cells receive their neighbors' colors and fitnesses as input. Each cell's fitness score describes how accurate its predictions are. Cells can also move to explore their environment, and some stochasticity is applied to movement.
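A grid cell of such a prediction machine can be sketched as a small conv net whose fitness is its negative prediction error, following the description above. The channel counts, the 3x3 receptive field, and the choice of negative MSE as fitness are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PredictorCell(nn.Module):
    """One cell of the prediction-machine grid: a small conv net that sees
    its neighborhood's colors and fitness scores and predicts the next
    state of its own patch."""
    def __init__(self, in_channels: int = 4):  # e.g. RGB + neighbor fitness
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),  # predicted RGB of the next state
        )

    def forward(self, neighborhood: torch.Tensor) -> torch.Tensor:
        return self.net(neighborhood)

def step_fitness(cell: PredictorCell, neighborhood: torch.Tensor,
                 next_state: torch.Tensor) -> torch.Tensor:
    """Fitness is the negative prediction error: cells that predict the next
    state more accurately score higher."""
    prediction = cell(neighborhood)
    return -torch.mean((prediction - next_state) ** 2)
```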