Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is attributed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. To do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs. Apart from its diverse applications, our approach highlights the inductive bias captured by standard generator network architectures. It also bridges the gap between two very popular families of image restoration methods: learning-based methods using deep convolutional networks and learning-free methods based on handcrafted image priors such as self-similarity. Code and supplementary material are available at https://dmitryulyanov.github.io/deep_image_prior .
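The core idea described above can be sketched as follows: fix a random input code, and fit the parameters of a randomly-initialized convolutional generator to a single corrupted observation; stopping the optimization early yields a restored image, because the network fits the natural-image signal faster than the noise. The toy architecture and hyperparameters below are illustrative assumptions, not the paper's actual network (the authors use a much deeper U-Net-like generator).

```python
# Minimal sketch of the deep-image-prior idea for denoising.
# NOTE: the tiny network, image size, learning rate, and step count are
# illustrative placeholders, not the configuration used in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small convolutional generator with randomly-initialized weights.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

z = torch.randn(1, 3, 32, 32)                    # fixed random input code
clean = torch.rand(1, 3, 32, 32)                 # stand-in for a clean image
noisy = clean + 0.1 * torch.randn_like(clean)    # corrupted observation

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
losses = []
for step in range(200):  # early stopping acts as the regularizer here
    opt.zero_grad()
    out = net(z)                                 # generator output
    loss = ((out - noisy) ** 2).mean()           # fit the corrupted image only
    loss.backward()
    opt.step()
    losses.append(loss.item())

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

No training data is involved: the only "prior" is the convolutional structure of `net` itself, which is the paper's central claim.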