Sparse coding has been incorporated into models of the visual cortex for its computational advantages and its connection to biology. But how the degree of sparsity contributes to performance on visual tasks is not well understood. In this work, sparse coding has been integrated into an existing hierarchical V2 model (Hosoya and Hyvärinen, 2015), replacing its independent component analysis (ICA) with explicit sparse coding in which the degree of sparsity can be controlled. After training, the sparse coding basis functions with a higher degree of sparsity resembled qualitatively different structures, such as curves and corners. The contributions of the model were assessed with image classification tasks, specifically tasks associated with mid-level vision, including figure-ground classification, texture classification, and angle prediction between two line stimuli. In addition, the models were evaluated against the texture sensitivity measure reported for V2 (Freeman et al., 2013) and a deleted-region inference task. The experimental results show that, while sparse coding performed worse than ICA at classifying images, only sparse coding was able to better match the texture sensitivity level of V2 and infer deleted image regions, both by increasing the degree of sparsity. Higher degrees of sparsity allowed inference over larger deleted image regions. The mechanism that allows for this inference capability in sparse coding is described here.
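A minimal sketch of the kind of controllable-sparsity inference the abstract refers to, using ISTA to minimize an l1-penalized reconstruction objective; the function name, dictionary sizes, and penalty weights here are illustrative choices, not the paper's setup:

```python
import numpy as np

def ista_sparse_code(D, x, lam, n_steps=200):
    """Infer a code a minimizing 0.5*||x - D a||^2 + lam*||a||_1 via ISTA.
    A larger lam yields a higher degree of sparsity in the code."""
    L = np.linalg.norm(D, ord=2) ** 2                  # Lipschitz constant of the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(n_steps):
        a = a - D.T @ (D @ a - x) / L                  # gradient step on the reconstruction error
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-thresholding step
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                         # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[[3, 40, 90]] = [1.0, -2.0, 1.5]                 # signal built from three atoms
x = D @ a_true
a_mild = ista_sparse_code(D, x, lam=0.05)
a_strict = ista_sparse_code(D, x, lam=0.5)             # stronger penalty, sparser code
```

Sweeping `lam` is one simple way to control the degree of sparsity that the paper identifies as the key variable.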
We describe a model of visual processing in which feedback connections from a higher- to a lower-order visual cortical area carry predictions of lower-level neural activities, whereas the feedforward connections carry the residual errors between the predictions and the actual lower-level activities. When exposed to natural images, a hierarchical network of model neurons implementing such a model developed simple-cell-like receptive fields. A subset of neurons responsible for carrying the residual errors showed endstopping and other extra-classical receptive-field effects. These results suggest that rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images.
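A toy sketch of the dynamics described above, in which feedback carries a prediction of lower-level activity and the feedforward signal carries the residual error; the dimensions, learning rates, and decay term are arbitrary illustrative choices, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.standard_normal((16, 8)) * 0.1     # feedback (generative) weights: prediction = U r
x = rng.standard_normal(16)                # lower-level activity (e.g. an image patch)
r = np.zeros(8)                            # higher-level state estimate

errs = []
for _ in range(100):
    e = x - U @ r                          # feedforward signal: residual prediction error
    r = r + 0.1 * (U.T @ e - 0.01 * r)     # state update driven by the error, with decay
    errs.append(float(e @ e))
```

Running the loop drives the squared residual down: the higher level gradually explains away the lower-level activity it can predict.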
The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
The mechanisms that transform early visual signals into the curvature representations found in V4 are unknown. We propose a hierarchical model that reveals V1/V2 encodings that are essential components of this transformation into the curvature representations reported in V4. V4 shape selectivity is then learned in the final layer of the hierarchy from macaque V4 responses, by relaxing the commonly imposed single-Gaussian prior. We find that V4 cells integrate multiple shape parts over the full spatial extent of their receptive fields, with similar excitatory and inhibitory contributions. Our results uncover new details in existing data about shape selectivity in V4 neurons that, with further experiments, could improve our understanding of processing in this area. Accordingly, we propose the design of a stimulus set that allows shape parts to be removed without disturbing curvature signals, in order to isolate part contributions to V4 responses.
Predictive coding offers a potentially unifying account of cortical function, positing that the core function of the brain is to minimize prediction errors with respect to a generative model of the world. The theory is closely related to the Bayesian brain framework and, over the last two decades, has gained substantial influence in the fields of theoretical and cognitive neuroscience. A large body of research has arisen based on both empirically testing improved and extended theoretical and mathematical models of predictive coding, and evaluating their potential biological plausibility for implementation in the brain and the concrete neurophysiological and psychological predictions made by the theory. Despite this enduring popularity, however, no comprehensive review of predictive coding theory, and especially of recent developments in the field, exists. Here, we provide a comprehensive review of the core mathematical structure and logic of predictive coding, thus complementing recent tutorials in the literature. We also review a wide range of classic and recent work within the framework, ranging from the neurobiologically realistic microcircuits that could implement predictive coding, to the close relationship between predictive coding and the widely used backpropagation-of-error algorithm, as well as surveying the close relationships between predictive coding and modern machine learning techniques.
We explore an original strategy for building deep networks, based on stacking layers of denoising autoencoders which are trained locally to denoise corrupted versions of their inputs. The resulting algorithm is a straightforward variation on the stacking of ordinary autoencoders. It is however shown on a benchmark of classification problems to yield significantly lower classification error, thus bridging the performance gap with deep belief networks (DBN), and in several cases surpassing it. Higher level representations learnt in this purely unsupervised fashion also help boost the performance of subsequent SVM classifiers. Qualitative experiments show that, contrary to ordinary autoencoders, denoising autoencoders are able to learn Gabor-like edge detectors from natural image patches and larger stroke detectors from digit images. This work clearly establishes the value of using a denoising criterion as a tractable unsupervised objective to guide the learning of useful higher level representations.
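A minimal numpy sketch of the denoising criterion described above: an autoencoder trained to reconstruct clean inputs from corrupted versions. The tied weights, masking noise level, toy data, and learning rate are our simplifications, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 20))                        # toy "clean" inputs in [0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, lr = 20, 10, 0.1
W = rng.standard_normal((n_in, n_hid)) * 0.1     # tied encoder/decoder weights
b, c = np.zeros(n_hid), np.zeros(n_in)

def run_epoch(train):
    global W, b, c
    total = 0.0
    for x in X:
        x_tilde = x * (rng.random(n_in) > 0.3)   # corrupt: randomly zero 30% of inputs
        h = sigmoid(x_tilde @ W + b)             # encode the corrupted input
        y = sigmoid(h @ W.T + c)                 # decode back to input space
        d_y = (y - x) * y * (1 - y)              # grad of squared error through output sigmoid
        d_h = (d_y @ W) * h * (1 - h)            # backprop into the hidden layer
        if train:
            W -= lr * (np.outer(x_tilde, d_h) + np.outer(d_y, h))
            b -= lr * d_h
            c -= lr * d_y
        total += float(np.mean((y - x) ** 2))    # loss is against the CLEAN input
    return total / len(X)

before = run_epoch(train=False)
for _ in range(20):
    run_epoch(train=True)
after = run_epoch(train=False)
```

The key point the sketch preserves is that the target is always the uncorrupted input, which is what distinguishes the denoising criterion from ordinary autoencoder training.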
Neural generative models can be used to learn complex probability distributions from data, to sample from them, and to produce probability density estimates. We propose a computational framework for developing neural generative models inspired by the theory of predictive processing in the brain. According to predictive processing theory, neurons in the brain form a hierarchy in which neurons at one level form expectations about sensory inputs from another level. These neurons update their local models based on the differences between their expectations and the observed signals. In a similar way, artificial neurons in our generative model predict the activities of neighboring neurons and adjust their parameters based on how well the predictions match reality. In this work, we show that neural generative models learned within our framework perform well in practice across several benchmark datasets and metrics, and remain competitive with, or significantly outperform, other generative models with similar functionality (such as the variational autoencoder).
Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG and SIFT more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance.
Deep neural networks set the state of the art on many tasks in computer vision, but their ability to generalize to object distortions is surprisingly fragile. In contrast, the mammalian visual system is robust to a wide range of perturbations. Recent work suggests that this generalization ability can be explained by useful inductive biases encoded in the representations of visual stimuli throughout the visual cortex. Here, we successfully leveraged these inductive biases with a multi-task learning approach: we jointly trained a deep network to perform image classification and to predict neural activity in macaque primary visual cortex (V1). We measured the out-of-distribution generalization ability of our network by testing its robustness to image distortions. We found that co-training on monkey V1 data leads to increased robustness despite the absence of those distortions during training. Additionally, we showed that our network's robustness is quite close to that of an oracle network where parts of the architecture are directly trained on noisy images. Our results also demonstrate that the network's representations become more brain-like as robustness improves. Using a novel constrained reconstruction analysis, we investigated what makes our brain-regularized network more robust: compared to our baseline network trained only on image classification, our co-trained network is more sensitive to content than to noise. Using saliency maps predicted by a deep model for ImageNet images, we found that our monkey co-trained network tends to be more sensitive to salient regions in a scene, reminiscent of V1's role in the detection of object borders and bottom-up saliency. Overall, our work expands the promising research avenue of transferring inductive biases from the brain, and provides novel analyses of the effects of our transfer.
Extracting information from random fields or textures is a ubiquitous task in science, from exploratory data analysis to classification and parameter estimation. From physics to biology, it tends to be done either through a power spectrum analysis, which is often too limited, or through convolutional neural networks (CNNs), which require large training sets and lack interpretability. In this paper, we advocate for the use of the scattering transform (Mallat 2012), a powerful statistic which borrows mathematical ideas from CNNs but does not require any training and is interpretable. We show that it provides a relatively compact set of summary statistics with a visual interpretation, and which carries most of the relevant information in a wide range of scientific applications. We present a non-technical introduction to this estimator, and we argue that it can benefit data analysis, and model and parameter inference, in many scientific fields. Interestingly, understanding the core operations of the scattering transform allows one to decipher many key aspects of the inner workings of CNNs.
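A toy 1-D analogue of the scattering coefficients can convey the core operations (wavelet convolution, modulus, averaging, iterate). The real transform uses 2-D Morlet wavelets and proper low-pass averaging; the Gabor-like filter shapes and scales below are illustrative only:

```python
import numpy as np

def gabor_bank(n, scales=(4, 8, 16)):
    """A toy bank of 1-D band-pass filters, defined in the Fourier domain,
    each centered at frequency 1/scale."""
    freqs = np.fft.fftfreq(n)
    return [np.exp(-0.5 * ((freqs - 1.0 / s) * 4 * s) ** 2) for s in scales]

def scattering_1d(x, bank):
    """First- and second-order scattering-style coefficients:
    spatial averages of iterated wavelet-modulus transforms."""
    X = np.fft.fft(x)
    s1, s2 = [], []
    for i, psi1 in enumerate(bank):
        u1 = np.abs(np.fft.ifft(X * psi1))        # first-order modulus field
        s1.append(u1.mean())                      # S1: like a binned power spectrum
        U1 = np.fft.fft(u1)
        for psi2 in bank[i + 1:]:                 # second wavelet coarser than the first
            u2 = np.abs(np.fft.ifft(U1 * psi2))
            s2.append(u2.mean())                  # S2: captures scale interactions
    return np.array(s1 + s2)

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
coeffs = scattering_1d(x, gabor_bank(256))        # 3 first-order + 3 second-order values
```

The second-order coefficients are what distinguish the transform from the power spectrum: they respond to couplings between scales that Gaussian statistics cannot capture.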
Sparse coding with an $l_1$ penalty and a learned linear dictionary requires regularization of the dictionary to prevent a collapse in the $l_1$ norms of the codes. Typically, this regularization entails bounding the Euclidean norms of the dictionary's elements. In this work, we propose a novel sparse coding protocol which prevents a collapse in the codes without the need to regularize the decoder. Our method regularizes the codes directly so that each latent code component has variance greater than a fixed threshold over a set of sparse representations for a given set of inputs. Furthermore, we explore ways to effectively train sparse coding systems with multi-layer decoders, since they can model more complex relationships than linear dictionaries. In our experiments with MNIST and natural image patches, we show that decoders learned with our approach have interpretable features both in the linear and multi-layer case. Moreover, we show that sparse autoencoders with multi-layer decoders trained using our variance regularization method produce higher-quality reconstructions with sparser representations when compared to autoencoders with linear dictionaries. Additionally, sparse representations obtained with our variance regularization method are useful in the downstream tasks of denoising and classification in the low-data regime.
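The core idea, regularizing the codes rather than the decoder, can be sketched as a hinge penalty on each latent component's spread over a batch. The threshold, shapes, and use of standard deviation rather than variance are our illustrative choices; the paper's exact loss may differ:

```python
import numpy as np

def variance_hinge_penalty(codes, threshold=1.0):
    """Penalize any latent component whose standard deviation over the batch
    falls below a fixed threshold, so codes cannot collapse toward zero
    even when the decoder's weights are left unconstrained."""
    std = codes.std(axis=0)                          # per-component spread over the batch
    return float(np.sum(np.maximum(0.0, threshold - std) ** 2))

# Collapsed codes (everything shrunk to zero) are penalized heavily;
# codes with healthy per-component variance are not.
collapsed = np.zeros((32, 16))
healthy = np.random.default_rng(0).standard_normal((32, 16)) * 2.0
```

Added to an $l_1$ sparsity term, a penalty of this shape pushes against the degenerate solution where the decoder grows large and the codes shrink.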
Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets. While such generic features cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.
Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior. Here we apply recent techniques for explaining decisions of state-of-the-art learning machines and analyze various tasks from computer vision and arcade games. This showcases a spectrum of problem-solving behaviors ranging from naive and short-sighted, to wellinformed and strategic. We observe that standard performance evaluation metrics can be oblivious to distinguishing these diverse problem solving behaviors. Furthermore, we propose our semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines. This helps to assess whether a learned model indeed delivers reliably for the problem that it was conceived for. Furthermore, our work intends to add a voice of caution to the ongoing excitement about machine intelligence and pledges to evaluate and judge some of these recent successes in a more nuanced manner.
Deep neural networks have achieved superior performance in nearly every application benchmark compared to classical shallow representation learning techniques. But despite their clear empirical advantages, it is still not well understood what makes them so effective. To approach this question, we introduce deep frame approximation: a unifying framework for constrained representation learning with structured overcomplete frames. While exact inference requires iterative optimization, it may be approximated by the operations of a feedforward deep neural network. We indirectly analyze how model capacity relates to the frame structures induced by architectural hyperparameters such as depth, width, and skip connections. We quantify these structural differences with the deep frame potential, a data-independent measure of coherence linked to representation uniqueness and stability. As a criterion for model selection, we show correlation with generalization error on a variety of common deep network architectures and datasets. We also demonstrate how recurrent networks implementing iterative optimization algorithms can achieve performance comparable to their feedforward approximations while improving adversarial robustness. This connection to the established theory of overcomplete sparse representations suggests promising new directions for principled deep network architecture design with less reliance on ad-hoc engineering.
Deep neural networks provide unprecedented performance gains in many real world problems in signal and image processing. Despite these gains, future development and practical deployment of deep networks is hindered by their blackbox nature, i.e., lack of interpretability, and by the need for very large training sets. An emerging technique called algorithm unrolling or unfolding offers promise in eliminating these issues by providing a concrete and systematic connection between iterative algorithms that are used widely in signal processing and deep neural networks. Unrolling methods were first proposed to develop fast neural network approximations for sparse coding. More recently, this direction has attracted enormous attention and is rapidly growing both in theoretic investigations and practical applications. The growing popularity of unrolled deep networks is due in part to their potential in developing efficient, high-performance and yet interpretable network architectures from reasonable size training sets. In this article, we review algorithm unrolling for signal and image processing. We extensively cover popular techniques for algorithm unrolling in various domains of signal and image processing including imaging, vision and recognition, and speech processing. By reviewing previous works, we reveal the connections between iterative algorithms and neural networks and present recent theoretical results. Finally, we provide a discussion on current limitations of unrolling and suggest possible future research directions.
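The canonical unrolling example mentioned above, turning ISTA for sparse coding into a fixed-depth network (the idea behind LISTA), can be sketched as follows. Here the layer weights are set to their ISTA-equivalent initialization rather than learned end to end, and all sizes are illustrative:

```python
import numpy as np

def soft(z, theta):
    """Soft-thresholding, the nonlinearity of each unrolled layer."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def unrolled_ista(x, W_e, S, theta, n_layers):
    """Each 'layer' applies a = soft(W_e x + S a, theta).
    With W_e = D^T/L and S = I - D^T D / L this reproduces plain ISTA;
    LISTA instead trains W_e, S, and theta from data."""
    a = soft(W_e @ x, theta)
    for _ in range(n_layers - 1):
        a = soft(W_e @ x + S @ a, theta)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary
L = np.linalg.norm(D, ord=2) ** 2
W_e = D.T / L                                  # ISTA-equivalent layer weights
S = np.eye(64) - D.T @ D / L
a_true = np.zeros(64)
a_true[[5, 20]] = [2.0, -1.5]
x = D @ a_true
a_hat = unrolled_ista(x, W_e, S, theta=0.01, n_layers=20)
```

The unrolled network is interpretable precisely because each layer is one iteration of a known algorithm, which is the connection the article develops in general.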
This is an introductory machine learning course specifically developed for students in the STEM fields. Our goal is to provide the interested reader with the basics to employ machine learning in their own projects and to familiarize themselves with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural-network structures such as dense feed-forward and conventional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability of the latent-space representations are discussed using the examples of dreaming and adversarial attacks. The final section is dedicated to reinforcement learning, where we introduce basic concepts of value-function and policy learning.
Neural networks often pack many unrelated concepts into a single neuron - a puzzling phenomenon known as 'polysemanticity' which makes interpretability much more challenging. This paper provides a toy model where polysemanticity can be fully understood, arising as a result of models storing additional sparse features in 'superposition'. We demonstrate the existence of a phase change, a surprising connection to the geometry of uniform polytopes, and evidence of a link to adversarial examples. We also discuss potential implications for mechanistic interpretability.
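A tiny illustration in the spirit of the toy-model setup (our own minimal construction, not the paper's trained model): two sparse features stored in a single hidden dimension as an antipodal pair, with a ReLU readout filtering out the interference:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Two features packed into ONE hidden dimension as an antipodal pair.
W = np.array([[1.0], [-1.0]])        # feature directions (2 features, 1 hidden dim)

def toy_model(x):
    h = W.T @ x                      # compress to the 1-D hidden space (superposition)
    return relu(W @ h)               # ReLU readout removes the negative interference

# When the features are sparse (never co-active), both are recovered exactly,
# even though there are more features than hidden dimensions:
out_a = toy_model(np.array([1.0, 0.0]))
out_b = toy_model(np.array([0.0, 1.0]))
# The single hidden unit responds to both features -- polysemanticity.
```

Recovery breaks down when the features co-occur, which is why the paper ties superposition to feature sparsity.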
Handwritten digit recognition (HDR) is one of the most challenging tasks in the field of optical character recognition (OCR). Regardless of the language, there are some inherent challenges in HDR, mainly due to variation in writing styles across individuals, variation in writing medium and environment, and the inability to maintain the same strokes while writing any digit repeatedly. In addition to that, the structural complexities of the digits of a particular language may lead to ambiguity in HDR. Over the years, researchers have developed numerous offline and online HDR pipelines, where different image processing techniques are combined with traditional machine learning (ML)-based and/or deep learning (DL)-based architectures. Although evidence of extensive review studies on HDR exists in the literature for languages such as English, Arabic, Indian, Farsi, and Chinese, there are hardly any surveys on Bengali HDR (BHDR) that cover its challenges, the underlying recognition process, and possible future directions. In this paper, the characteristics and inherent ambiguities of Bengali handwritten digits are analyzed, along with a comprehensive insight into two decades of the state-of-the-art datasets and approaches of offline BHDR. Furthermore, several real-world application-specific studies involving BHDR are also discussed in detail. This paper will also serve as a compendium for researchers interested in the science behind offline BHDR, instigating the exploration of newer avenues of relevant research that may further lead to better offline recognition of Bengali handwritten digits in different application areas.
Visual representations can be defined as the activations of neuronal populations in response to images. The activation of a neuron as a function over all image space has been described as a "tuning landscape". As a function over a high-dimensional space, what is the structure of this landscape? In this study, we characterize tuning landscapes through the lens of level sets and Morse theory. A recent study measured the in vivo two-dimensional tuning maps of neurons in different brain regions. Here, we developed a statistically reliable signature for these maps based on the change of topology in level sets. We found this topological signature changed progressively throughout the cortical hierarchy, with similar trends found for units in convolutional neural networks (CNNs). Further, we analyzed the geometry of level sets on the tuning landscapes of CNN units. We advanced the hypothesis that higher-order units can be locally regarded as isotropic radial basis functions, but not globally. This shows the power of level sets as a conceptual tool to understand neuronal activations over image space.
Explainable AI transforms opaque decision strategies of ML models into explanations that are interpretable by the user, for example, identifying the contribution of each input feature to the prediction at hand. Such explanations, however, entangle the potentially multiple factors that enter into the overall complex decision strategy. We propose to disentangle explanations by finding relevant subspaces in activation space that can be mapped to more abstract human-understandable concepts and enable a joint attribution on concepts and input features. To automatically extract the desired representation, we propose new subspace analysis formulations that extend the principle of PCA and subspace analysis to explanations. These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), optimize relevance of projected activations rather than the more traditional variance or kurtosis. This enables a much stronger focus on subspaces that are truly relevant for the prediction and the explanation, in particular, ignoring activations or concepts to which the prediction model is invariant. Our approach is general enough to work alongside common attribution techniques such as Shapley Value, Integrated Gradients, or LRP. Our proposed methods show to be practically useful and compare favorably to the state of the art as demonstrated on benchmarks and three use cases.
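One plausible formalization of the PRCA idea, heavily simplified: find the direction that maximizes the summed relevance of the projected activations, here taken to be the sum over samples of (v^T a_i)(v^T c_i) with c_i a per-sample relevance-carrying context vector, solved by the top eigenvector of the symmetrized cross-covariance. This sketch is our assumption-laden reading; see the paper for the exact objective:

```python
import numpy as np

def prca_direction(acts, ctx):
    """Direction maximizing sum_i (v^T a_i)(v^T c_i) over unit vectors v,
    via the top eigenvector of the symmetrized cross-covariance
    (a sketch of relevance-maximizing subspace analysis, not variance-maximizing PCA)."""
    M = acts.T @ ctx
    M = 0.5 * (M + M.T)                      # symmetrize so eigh applies
    w, V = np.linalg.eigh(M)                 # eigenvalues in ascending order
    return V[:, np.argmax(w)]

rng = np.random.default_rng(0)
# Activations with two directions of variation; only the first carries relevance.
relevant = rng.standard_normal((1000, 1)) @ np.array([[1.0, 0.0, 0.0]])
nuisance = rng.standard_normal((1000, 1)) @ np.array([[0.0, 5.0, 0.0]])  # high variance, zero relevance
acts = relevant + nuisance
ctx = relevant                               # relevance flows only along the first axis
v = prca_direction(acts, ctx)                # picks the relevant axis, not the high-variance one
```

The contrast with PCA is the point: plain PCA on `acts` would return the high-variance nuisance axis, while the relevance-weighted analysis ignores directions to which the prediction is invariant.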