With the advent of brain imaging techniques and machine learning tools, much effort has been devoted to building computational models that capture the encoding of visual information in the human brain. One of the most challenging brain decoding tasks is the accurate reconstruction of perceived natural images from brain activity measured by functional magnetic resonance imaging (fMRI). In this work, we survey the most recent learning-based approaches to natural image reconstruction from fMRI. We examine these approaches in terms of architecture design, benchmark datasets, and evaluation metrics, and present a fair performance comparison under standardized evaluation metrics. Finally, we discuss the strengths and limitations of existing studies and suggest potential future directions.
Brain decoding is a field of computational neuroscience that uses measurable brain activity to infer mental states or internal representations of perceptual inputs. In this work, we propose a novel approach to brain decoding that also relies on semantic and contextual similarity. We employ an fMRI dataset of natural image vision and create a deep learning decoding pipeline inspired by the existence of both bottom-up and top-down processes in human vision. We train a linear brain-to-feature model to map fMRI activity features to visual stimuli features, assuming that the brain projects visual information onto a space that is homeomorphic to the latent space represented by the last convolutional layer of a pretrained convolutional neural network, which typically collects a variety of semantic features that summarize and highlight similarities and differences between concepts. These features are then categorized in the latent space using a nearest-neighbor strategy, and the results are used to condition a generative latent diffusion model to create novel images. From fMRI data alone, we produce reconstructions of visual stimuli that match the original content very well on a semantic level, surpassing the state of the art in previous literature. We evaluate our work quantitatively with a semantic metric (the Wu-Palmer similarity over the WordNet lexicon, averaging 0.57) and perform a human evaluation experiment in which, given the multiplicity of human criteria for judging image similarity, reconstructions were judged correct on over 80% of the test set.
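Since the quantitative evaluation above relies on the Wu-Palmer similarity over WordNet, a minimal sketch of how such a score can be computed with NLTK may be helpful; the word pairs and the first-sense heuristic below are illustrative assumptions, not examples drawn from the paper.

```python
# A minimal sketch of the Wu-Palmer similarity over the WordNet lexicon,
# computed with NLTK. The words compared here are hypothetical stand-ins
# for a reconstruction label and a stimulus label.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

def wup(word_a: str, word_b: str) -> float:
    """Wu-Palmer similarity between the first noun senses of two words."""
    syn_a = wn.synsets(word_a, pos=wn.NOUN)[0]
    syn_b = wn.synsets(word_b, pos=wn.NOUN)[0]
    return syn_a.wup_similarity(syn_b)

print(wup("dog", "cat"))     # high: nearby concepts in the WordNet hierarchy
print(wup("dog", "guitar"))  # low: semantically distant concepts
```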
This paper aims at a comprehensive study of facial sketch synthesis (FSS). However, owing to the high cost of obtaining hand-drawn sketch datasets, the field has lacked a complete benchmark for assessing the development of FSS algorithms over the last decade. We therefore first introduce a high-quality dataset for FSS, named FS2K, which consists of 2,104 image-sketch pairs spanning three types of sketch styles, image backgrounds, lighting conditions, skin colors, and facial attributes. FS2K differs from previous FSS datasets in difficulty, diversity, and scalability, and should thus facilitate the progress of FSS research. Second, we present the largest-scale FSS investigation to date by reviewing 139 classical methods, comprising 24 facial sketch synthesis approaches based on handcrafted features, 37 general neural style transfer methods, 43 deep image-to-image translation methods, and 35 image-to-sketch approaches. We also conduct comprehensive experiments on 19 existing cutting-edge models. Third, we present a simple baseline for FSS, named FSGAN. With only two straightforward components, i.e., facial-aware masking and style-vector expansion, FSGAN surpasses the performance of all previous state-of-the-art models on the proposed FS2K dataset by a large margin. Finally, we conclude with lessons learned over the past years and point out several unsolved challenges. Our open-source code is available at https://github.com/dengpingfan/fsgan.
Image-generating machine learning models are typically trained with loss functions based on distance in the image space. This often leads to over-smoothed results. We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), that mitigate this problem. Instead of computing distances in the image space, we compute distances between image features extracted by deep neural networks. This metric better reflects the perceptual similarity of images and thus leads to better results. We show three applications: autoencoder training, a modification of a variational autoencoder, and inversion of deep convolutional networks. In all cases, the generated images look sharp and resemble natural images.
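As a hedged illustration of the idea, the sketch below compares images in the feature space of a fixed pretrained network rather than in pixel space; the choice of VGG16 and of the cut-off layer is an assumption for illustration, not the configuration used by DeePSiM.

```python
# A minimal sketch of a feature-space loss in the spirit of DeePSiM:
# distances are computed between deep features of a frozen pretrained
# network instead of between raw pixels.
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self, layer_index: int = 16):  # layer choice is an assumption
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        # Freeze a fixed feature extractor up to the chosen layer.
        self.features = vgg.features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Compare images in feature space instead of pixel space.
        return nn.functional.mse_loss(self.features(generated), self.features(target))

# Usage: loss = PerceptualLoss()(fake_batch, real_batch); loss.backward()
```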
Currently, deep learning-based visual inspection has achieved great success with the help of supervised learning methods. However, in real industrial scenarios, the scarcity of defect samples, the cost of annotation, and the lack of prior knowledge about defects can render supervised methods ineffective. In recent years, unsupervised anomaly localization algorithms have become widely used in industrial inspection tasks. This paper aims to help researchers in this field by comprehensively surveying recent achievements in unsupervised anomaly localization for industrial images using deep learning. The survey reviews more than 120 significant publications covering different aspects of anomaly localization, mainly the concepts, challenges, taxonomies, benchmark datasets, and quantitative performance comparisons of the reviewed methods. In reviewing the achievements to date, this paper also provides detailed predictions and analysis of several future research directions. This review offers detailed technical information for researchers interested in industrial anomaly localization and in applying anomaly localization to other fields.
Due to the low signal-to-noise ratio and limited resolution of functional MRI data, and the high complexity of natural images, reconstructing a visual stimulus from human brain fMRI measurements is a challenging task. In this work, we propose a novel approach for this task, which we call Cortex2Image, to decode visual stimuli with high semantic fidelity and rich fine-grained detail. In particular, we train a surface-based convolutional network model that maps from brain response to semantic image features first (Cortex2Semantic). We then combine this model with a high-quality image generator (Instance-Conditioned GAN) to train another mapping from brain response to fine-grained image features using a variational approach (Cortex2Detail). Image reconstructions obtained by our proposed method achieve state-of-the-art semantic fidelity, while yielding good fine-grained similarity with the ground-truth stimulus. Our code is available at: https://github.com/zijin-gu/meshconv-decoding.git.
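The paper's Cortex2Semantic stage is a surface-based convolutional network; as a simplified, hedged stand-in for the brain-to-feature mapping idea, the sketch below fits a ridge regression from voxel responses to semantic features, with all shapes and data synthetic.

```python
# A simplified stand-in for mapping brain responses to semantic image
# features. The real Cortex2Semantic model is a surface-based convolutional
# network; ridge regression merely illustrates the mapping. All shapes and
# data below are synthetic assumptions.
import numpy as np
from sklearn.linear_model import Ridge

n_trials, n_voxels, n_features = 1000, 5000, 512
rng = np.random.default_rng(0)
fmri = rng.standard_normal((n_trials, n_voxels))        # brain responses
semantic = rng.standard_normal((n_trials, n_features))  # e.g. deep image features

model = Ridge(alpha=10.0)              # L2 regularization for the ill-posed fit
model.fit(fmri[:800], semantic[:800])
predicted = model.predict(fmri[800:])  # features to condition a generator on
print(predicted.shape)                 # (200, 512)
```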
Deep neural networks have become prevalent in human analysis, boosting the performance of applications such as biometric recognition, action recognition, and person re-identification. However, the performance of such networks scales with the available training data. In human analysis, the demand for large-scale datasets poses a severe challenge, as data collection is tedious, time-consuming, expensive, and must comply with data protection laws. Current research investigates the generation of synthetic data as an efficient and privacy-preserving alternative to collecting real data in the field. This survey introduces the basic definitions and methodologies that are essential when generating and employing synthetic data for human analysis. We summarize current state-of-the-art methods and the main benefits of using synthetic data. We also provide an overview of publicly available synthetic datasets and generation models. Finally, we discuss the limitations of this field, as well as its open research questions. This survey is intended for researchers and practitioners in the field of human analysis.
Generative adversarial networks (GANs) are a powerful class of deep learning models that have been used successfully across numerous domains. They belong to the broader family of generative methods, which generate new data by learning the sample distribution from real examples. In the clinical context, GANs have shown, compared with traditional generative methods, an enhanced capacity to capture spatially complex, nonlinear, and potentially subtle disease effects. This review critically appraises the existing literature on the application of GANs to imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of the various GAN methods for each application and further discuss the main challenges, open questions, and promising future directions of leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can be leveraged to support clinical decision-making and contribute to a better understanding of the structural and functional patterns of brain diseases.
Generative adversarial networks (GANs) provide a way to learn deep representations without extensively annotated training data. They achieve this through deriving backpropagation signals through a competitive process involving a pair of networks. The representations that can be learned by GANs may be used in a variety of applications, including image synthesis, semantic image editing, style transfer, image super-resolution and classification. The aim of this review paper is to provide an overview of GANs for the signal processing community, drawing on familiar analogies and concepts where possible. In addition to identifying different methods for training and constructing GANs, we also point to remaining challenges in their theory and application.
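The competitive process described above can be made concrete with a minimal training loop; the toy architectures and hyperparameters in the sketch below are assumptions for illustration, not drawn from the review.

```python
# A minimal sketch of adversarial training: a generator G and a
# discriminator D trained against each other on flattened toy "images".
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 784)  # stand-in for a batch of real images

for step in range(100):
    # Discriminator: push real scores up, fake scores down.
    fake = G(torch.randn(32, 64)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: its backpropagation signal comes from fooling D.
    fake = G(torch.randn(32, 64))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```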
Synthesizing realistic images from text descriptions is a major challenge in computer vision. Current text-to-image synthesis methods fall short of producing high-resolution images that faithfully represent the text descriptors. Most existing studies rely on generative adversarial networks (GANs) or variational autoencoders (VAEs). GANs are able to produce sharper images but lack diversity in their outputs, whereas VAEs excel at producing diverse outputs but the generated images are often blurry. Taking into account the relative strengths of GANs and VAEs, we propose a new conditional VAE (CVAE) and conditional GAN (CGAN) network architecture for synthesizing images conditioned on text descriptions. This study uses the conditional VAE as an initial generator to produce a high-level sketch of the text descriptor. This high-level sketch from the first stage and the text descriptor are then used as inputs to the conditional GAN network. The second-stage GAN produces a 256x256 high-resolution image. The proposed architecture benefits from conditioning augmentation and residual blocks in the conditional GAN network to achieve its results. Multiple experiments were conducted on the CUB and Oxford-102 datasets, and the results of the proposed approach were compared with state-of-the-art techniques such as StackGAN. The experiments show that the proposed method generates high-resolution images conditioned on text descriptions and yields competitive results on both datasets in terms of Inception and Frechet Inception scores.
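As a hedged sketch of the stage-1 idea, the snippet below shows a conditional VAE mapping a text embedding plus a latent sample to a coarse image for a downstream refinement stage; all dimensions, module shapes, and the name Stage1CVAE are assumptions for illustration.

```python
# A toy conditional VAE: encode an image together with a text embedding,
# sample a latent code via the reparameterization trick, and decode a
# coarse image conditioned on the same text embedding.
import torch
import torch.nn as nn

class Stage1CVAE(nn.Module):
    def __init__(self, text_dim=128, z_dim=64, img_dim=64 * 64 * 3):
        super().__init__()
        self.enc = nn.Linear(img_dim + text_dim, 256)
        self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + text_dim, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim), nn.Sigmoid())

    def forward(self, img, text_emb):
        h = torch.relu(self.enc(torch.cat([img, text_emb], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.dec(torch.cat([z, text_emb], dim=1))        # coarse sketch
        return recon, mu, logvar

cvae = Stage1CVAE()
img, txt = torch.rand(4, 64 * 64 * 3), torch.randn(4, 128)
recon, mu, logvar = cvae(img, txt)
# ELBO = reconstruction term + KL(q(z|x, text) || N(0, I))
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / 4
loss = nn.functional.binary_cross_entropy(recon, img) + kl
```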
Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.
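The invertibility constraint can be expressed as a latent-regression term in which an encoder must recover the injected code from the generated output; the sketch below illustrates this under assumed shapes and modules.

```python
# A hedged sketch of the bijective-consistency idea: an encoder E must
# recover the latent code injected into the generator G, which discourages
# G from ignoring the code (i.e., mode collapse onto one output).
import torch
import torch.nn as nn

z_dim = 8
G = nn.Sequential(nn.Linear(100 + z_dim, 256), nn.ReLU(), nn.Linear(256, 784))
E = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, z_dim))

input_feat = torch.randn(16, 100)  # stand-in for an encoded input image
z = torch.randn(16, z_dim)         # randomly sampled latent code
output = G(torch.cat([input_feat, z], dim=1))

# Latent regression: the code must be recoverable from the output,
# encouraging an (approximately) invertible code-to-output mapping.
latent_loss = nn.functional.l1_loss(E(output), z)
```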
In this work, we investigate the problem of face reconstruction given the facial feature representation extracted from a blackbox face recognition engine. Indeed, this is a very challenging problem in practice due to the limited information abstracted inside the engine. We therefore introduce a new method, named the attention-based bijective generative adversarial network in a distillation framework (DAB-GAN), to synthesize a subject's face from his or her extracted face recognition features. Given any unconstrained facial features of a subject, DAB-GAN can reconstruct his or her face in high definition. The DAB-GAN method consists of a novel attention-based generative structure together with a newly defined bijective metric learning approach. The framework first introduces a bijective metric so that distance measurement and the metric learning process can be applied directly in the image domain for the image reconstruction task. The information from the blackbox face recognition engine is optimally exploited using a global distillation process. An attention-based generator is then presented to obtain a highly robust generator that synthesizes realistic faces with identity preservation. We have evaluated our method on challenging face recognition databases, namely CelebA, LFW, AgeDB, and CFP-FP, and consistently achieved state-of-the-art results. The advancement of DAB-GAN is also demonstrated in both image realism and identity-preservation properties.
The goals and methods of generative networks differ fundamentally from those of CNNs for classification, segmentation, or object detection. They were originally conceived not as tools for image analysis but to generate natural-looking images. The adversarial training paradigm was proposed to stabilize generative methods and has proven highly successful, although it was by no means the first attempt. This chapter gives a basic introduction to the motivation behind generative adversarial networks (GANs) and traces the path to their success by abstracting the basic task and working mechanism and deriving the difficulties of early practical approaches. Methods for more stable training are presented, as are typical signs of poor convergence and their causes. Although this chapter focuses on GANs for image generation and image analysis, the adversarial training paradigm itself is not specific to images and also generalizes to tasks in image analysis. Well-known architecture examples for image semantic segmentation and anomaly detection are presented before GANs are contrasted with further generative modeling approaches that have recently entered the scene. This allows a contextualized view of the limitations of GANs, but also of their benefits.
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data. Underwater images are difficult to capture and are often of poor quality due to the distortion and loss of colour and contrast in water. This makes it difficult to train supervised deep learning models on large and diverse datasets, which can limit the model's performance. In this paper, we explore an alternative approach to supervised underwater image enhancement. Specifically, we propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model with probabilistic adaptive instance normalization (PAdaIN) and statistically guided multi-colour space stretch that produces realistic underwater images. The resulting framework is composed of a U-Net as a feature extractor and a PAdaIN to encode the uncertainty, which we call UDnet. To improve the visual quality of the images generated by UDnet, we use a statistically guided multi-colour space stretch module that ensures visual consistency with the input image and provides an alternative to training using a ground truth image. The proposed model does not need manual human annotation and can learn with a limited amount of data and achieves state-of-the-art results on underwater images. We evaluated our proposed framework on eight publicly-available datasets. The results show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics. Code available at https://github.com/alzayats/UDnet .
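UDnet's PAdaIN builds on adaptive instance normalization; the sketch below shows the deterministic AdaIN core under assumed tensor shapes, whereas PAdaIN additionally treats the normalization statistics probabilistically to encode uncertainty.

```python
# A simplified sketch of the AdaIN operation underlying PAdaIN: content
# features are normalized per channel, then re-scaled and re-shifted with
# target (style) statistics. Shapes are illustrative assumptions.
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5):
    # content, style: (batch, channels, height, width)
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    # Strip the content statistics, then impose the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean

out = adain(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```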
Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys of shallow domain adaptation, but few timely reviews of the emerging deep-learning-based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of the data that define how the two domains diverge. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and briefly analyze and compare the state-of-the-art methods in these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation, and object detection. Fourth, we highlight some potential deficiencies of current methods and several future directions.
With the development of convolutional neural networks, hundreds of deep learning based dehazing methods have been proposed. In this paper, we provide a comprehensive survey on supervised, semi-supervised, and unsupervised single image dehazing. We first discuss the physical model, datasets, network modules, loss functions, and evaluation metrics that are commonly used. Then, the main contributions of various dehazing algorithms are categorized and summarized. Further, quantitative and qualitative experiments of various baseline methods are carried out. Finally, the unsolved issues and challenges that can inspire future research are pointed out. A collection of useful dehazing materials is available at \url{https://github.com/Xiaofeng-life/AwesomeDehazing}.
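The physical model referred to above is commonly the atmospheric scattering model, I(x) = J(x) t(x) + A (1 - t(x)), where J is the clear scene, t the transmission map, and A the global atmospheric light; the sketch below synthesizes haze from it with illustrative constants and an assumed constant depth map.

```python
# A sketch of the atmospheric scattering model used throughout the
# dehazing literature. The scattering coefficient, airlight value, and
# the uniform depth below are illustrative assumptions.
import numpy as np

def synthesize_haze(clear: np.ndarray, depth: np.ndarray,
                    beta: float = 1.0, airlight: float = 0.9) -> np.ndarray:
    """clear: HxWx3 image in [0, 1]; depth: HxW scene depth map."""
    t = np.exp(-beta * depth)[..., None]     # transmission from depth
    return clear * t + airlight * (1.0 - t)  # I = J*t + A*(1 - t)

clear = np.random.rand(240, 320, 3)  # stand-in for a clear image
depth = np.full((240, 320), 2.0)     # assumed constant scene depth
hazy = synthesize_haze(clear, depth)
print(hazy.min(), hazy.max())
```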
Diffusion probabilistic models (DPMs) have achieved remarkable quality in image generation that rivals that of GANs. Unlike GANs, however, DPMs use a set of latent variables that lack semantic meaning and cannot serve as a useful representation for other tasks. This paper explores the possibility of using DPMs for representation learning and seeks to extract a meaningful and decodable representation of an input image via autoencoding. Our key idea is to use a learnable encoder for discovering the high-level semantics, and a DPM as the decoder for modeling the remaining stochastic variations. Our method can encode any image into a two-part latent code, where the first part is semantically meaningful and linear, and the second part captures stochastic details, allowing near-exact reconstruction. This capability enables challenging applications that currently foil GAN-based methods, such as attribute manipulation on real images. We also show that this two-level encoding improves denoising efficiency and naturally facilitates various downstream tasks, including few-shot conditional sampling.
Deep learning methods outperform traditional methods in image inpainting. To generate contextual textures, researchers are still working on improving existing methods and proposing models that can extract, propagate, and reconstruct features similar to those of the ground-truth regions. In addition, the lack of a high-quality feature-transfer mechanism in deeper layers contributes to persistent aberrations in the generated inpainted regions. To address these limitations, we propose V-LinkNet, a cross-space learning strategy network. To improve the learning of contextualized features, we design a loss model that uses two encoders. In addition, we propose a recursive residual transition layer (RSTL). The RSTL extracts high-level semantic information and propagates it to lower layers. Finally, we compare inpainting performance on the same face with different masks and on different faces with the same masks. To improve the reproducibility of image inpainting, we propose a standard protocol to overcome biases arising from various masks and images. We investigate the components of V-LinkNet experimentally. When using the standard protocol, our results surpass the state of the art when evaluated on CelebA-HQ. In addition, our model generalizes well when evaluated on Paris Street View, as well as on the Places2 dataset, under the standard protocol.
Cross-spectral face recognition (CFR) aims to recognize individuals when the compared face images stem from different sensing modalities, for example infrared vs. visible. While CFR is inherently more challenging than classical face recognition due to the significant variation in facial appearance associated with the modality gap, it is superior in scenarios with limited or challenging illumination, as well as in the presence of presentation attacks. Recent advances in artificial intelligence related to convolutional neural networks (CNNs) have enabled significant performance improvements in CFR. Motivated by this, the contributions of this survey are threefold. First, we provide an overview of CFR, targeted at comparing face images captured in different spectra, by formalizing CFR and then presenting concrete related applications. Second, we explore suitable spectral bands for recognition and discuss recent CFR methods, placing emphasis on neural networks. In particular, we present techniques for extracting and comparing heterogeneous features, and revisit the relevant datasets. We enumerate the strengths and limitations of the different spectra and of the associated algorithms. Finally, we discuss research challenges and future lines of research.
Our goal with this survey is to provide an overview of state-of-the-art deep learning technologies for face generation and editing. We cover popular recent architectures and discuss key ideas that make them work, such as inversion, latent representation, loss functions, training procedures, editing methods, and cross-domain style transfer. We particularly focus on GAN-based architectures that have culminated in the StyleGAN approaches, which allow generation of high-quality face images and offer rich interfaces for controllable semantics editing while preserving photo quality. We aim to provide an entry point into the field for readers who have basic knowledge of deep learning and are looking for an accessible introduction and overview.