From early image processing to modern computational imaging, successful models and algorithms have relied on a fundamental property of natural signals: symmetry. Here, symmetry refers to the invariance of signal sets to transformations such as translation, rotation, or scaling. Symmetry can also be incorporated into deep neural networks in the form of equivariance, allowing for more data-efficient learning. While there have been important advances in the design of end-to-end equivariant networks for image classification in recent years, computational imaging introduces unique challenges for equivariant network solutions, since we typically only observe the image through some noisy, ill-conditioned forward operator that may itself not be equivariant. We review the emerging field of equivariant imaging and show how it can provide improved generalization and new imaging opportunities. Along the way, we show the interplay between the acquisition physics and group actions, and links to iterative reconstruction, blind compressed sensing, and self-supervised learning.
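As a concrete illustration of this interplay, the following is a minimal sketch of an equivariant-imaging-style self-supervised loss, assuming a PyTorch reconstruction network `model`, a forward operator `A` given as a callable, and 90-degree rotations as the group action; all names are illustrative rather than taken from any reviewed paper's code.

```python
import torch

def equivariant_imaging_loss(model, A, y, k=1):
    """Self-supervised loss: reconstruction composed with A should commute with T_g."""
    x_hat = model(y)                                   # reconstruct from measurements
    x_g = torch.rot90(x_hat, k, dims=(2, 3))           # group action T_g (NCHW images)
    y_g = A(x_g)                                       # re-measure the transformed image
    mc = torch.sum((A(x_hat) - y) ** 2)                # measurement consistency
    ei = torch.sum((model(y_g) - x_g) ** 2)            # equivariance consistency
    return mc + ei
```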
Deep networks provide state-of-the-art performance in multiple imaging inverse problems, ranging from medical imaging to computational photography. However, most existing networks are trained with clean signals, which are often hard or impossible to obtain. Equivariant imaging (EI) is a recent self-supervised learning framework that exploits the group invariance present in signal distributions to learn a reconstruction function from partial measurement data alone. While EI results are impressive, its performance degrades with increasing noise. In this paper, we propose a Robust Equivariant Imaging (REI) framework which can learn to image from noisy partial measurements alone. The proposed method uses Stein's Unbiased Risk Estimator (SURE) to obtain a fully unsupervised training loss that is robust to noise. We show that REI leads to considerable performance gains on linear and nonlinear inverse problems, thereby paving the way for robust unsupervised imaging with deep networks. Code is available at: https://github.com/edongdongchen/rei.
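For intuition, here is a hedged sketch of a SURE-type loss for Gaussian noise of known standard deviation `sigma`, with the divergence term estimated by the standard Monte Carlo probing trick; it illustrates the kind of unbiased, noise-robust objective described above, not the authors' exact implementation.

```python
import torch

def mc_sure_loss(f, y, sigma, eps=1e-3):
    """Unbiased estimate of the MSE (1/m)||f(y) - x||^2 when y = x + N(0, sigma^2)."""
    m = y.numel()
    f_y = f(y)
    b = torch.randn_like(y)                             # random probe vector
    div = (b * (f(y + eps * b) - f_y)).sum() / eps      # Monte Carlo divergence of f at y
    return ((y - f_y) ** 2).sum() / m - sigma ** 2 + (2 * sigma ** 2 / m) * div
```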
Physics-driven deep learning methods have emerged as a powerful tool for computational magnetic resonance imaging (MRI) problems, pushing reconstruction performance to new limits. This article provides an overview of recent developments in incorporating physics information into learning-based MRI reconstruction. We consider inverse problems with both linear and nonlinear forward models for computational MRI, and review the classical approaches for solving these. We then focus on physics-driven deep learning methods, covering physics-driven loss functions, plug-and-play methods, generative models, and unrolled networks. We highlight domain-specific challenges, such as real- and complex-valued building blocks of neural networks, and translational applications in MRI with linear and nonlinear forward models. Finally, we discuss common issues and open challenges, and draw connections to the importance of physics-driven learning when combined with other downstream tasks in the medical imaging pipeline.
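As a sketch of the unrolled-network family surveyed here: one iteration typically alternates a gradient step on the data-fidelity term with a learned regularization step. The snippet below assumes PyTorch callables `A` and `A_adjoint` standing in for the MRI forward/adjoint operators (sampling mask, FFT, coil sensitivities); all names are illustrative, and real MRI data would additionally need complex-valued handling.

```python
import torch
import torch.nn as nn

class UnrolledStep(nn.Module):
    def __init__(self, denoiser: nn.Module):
        super().__init__()
        self.denoiser = denoiser                     # small CNN acting as a learned prior
        self.step = nn.Parameter(torch.tensor(0.5))  # learnable step size

    def forward(self, x, y, A, A_adjoint):
        grad = A_adjoint(A(x) - y)                   # gradient of the data-fidelity term
        return self.denoiser(x - self.step * grad)   # gradient step, then learned prior
```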
Recently, deep learning approaches have become the main research frontier for biological image reconstruction and enhancement problems, thanks to their high performance as well as their ultrafast inference times. However, due to the difficulty of obtaining matched reference data for supervised learning, there has been increasing interest in unsupervised learning approaches that do not require paired reference data. In particular, self-supervised learning and generative models have been successfully used for various biological imaging applications. In this paper, we give an overview of these approaches from a coherent perspective in the context of classical inverse problems, and discuss their applications to biological imaging, including electron, fluorescence, and deconvolution microscopy, optical diffraction tomography, and functional neuroimaging.
In recent years, deep learning has achieved remarkable empirical success in image reconstruction. This has fueled an ongoing quest to precisely characterize the correctness and reliability of data-driven methods in critical use cases, for instance in medical imaging. Notwithstanding the excellent performance and efficacy of deep learning-based methods, concerns have been raised regarding their stability, or lack thereof, with serious practical implications. Significant advances have been made in recent years to unravel the inner workings of data-driven image recovery methods, challenging their widely perceived black-box nature. In this article, we specify the relevant notions of convergence for data-driven image reconstruction, which form the basis for a survey of learned methods with mathematically rigorous reconstruction guarantees. One example that is highlighted is the role of input-convex neural networks (ICNNs), which offer the possibility of combining the power of deep learning with classical convex regularization theory to design methods that are provably convergent. This survey article aims to advance our understanding of data-driven image reconstruction methods, for methodological researchers and practitioners alike, by providing an accessible description of useful convergence concepts and by placing some of the existing empirical practices on a solid mathematical foundation.
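To make the highlighted ICNN construction concrete, here is a hedged PyTorch sketch: the output is convex in the input because the skip connections enter affinely, the weights acting on the (convex) hidden quantities are kept non-negative via a softplus reparameterization, and the activations are convex and non-decreasing. Sizes and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Scalar-valued network that is convex in its input x."""
    def __init__(self, dim, hidden, depth=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(depth)])
        self.Wz = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(hidden, hidden)) for _ in range(depth - 1)])
        self.out = nn.Parameter(0.01 * torch.randn(1, hidden))

    def forward(self, x):
        z = F.relu(self.Wx[0](x))                    # affine in x, convex after ReLU
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            # softplus keeps the z-path weights non-negative, preserving convexity
            z = F.relu(Wx(x) + F.linear(z, F.softplus(Wz)))
        return F.linear(z, F.softplus(self.out))     # non-negative output weights
```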
Deep neural networks provide unprecedented performance gains in many real-world problems in signal and image processing. Despite these gains, the future development and practical deployment of deep networks are hindered by their black-box nature, i.e., lack of interpretability, and by the need for very large training sets. An emerging technique called algorithm unrolling, or unfolding, offers promise in eliminating these issues by providing a concrete and systematic connection between iterative algorithms that are widely used in signal processing and deep neural networks. Unrolling methods were first proposed to develop fast neural network approximations for sparse coding. More recently, this direction has attracted enormous attention and is rapidly growing in both theoretical investigations and practical applications. The growing popularity of unrolled deep networks is due in part to their potential in developing efficient, high-performance, and yet interpretable network architectures from reasonably sized training sets. In this article, we review algorithm unrolling for signal and image processing. We extensively cover popular techniques for algorithm unrolling in various domains of signal and image processing, including imaging, vision and recognition, and speech processing. By reviewing previous works, we reveal the connections between iterative algorithms and neural networks and present recent theoretical results. Finally, we provide a discussion on current limitations of unrolling and suggest possible future research directions.
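For concreteness, below is a compact sketch of unrolling in the LISTA style alluded to above (the sparse-coding origin of the field): K iterations of soft-thresholded linear updates with learned matrices, mirroring ISTA's update x <- soft(x - (1/L) A^T (A x - y), lambda/L). Dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.clamp(torch.abs(x) - theta, min=0.0)

class LISTA(nn.Module):
    def __init__(self, m, n, K=10):
        super().__init__()
        self.W = nn.Linear(m, n, bias=False)              # learned analogue of (1/L) A^T
        self.S = nn.Linear(n, n, bias=False)              # learned analogue of I - (1/L) A^T A
        self.theta = nn.Parameter(torch.full((K,), 0.1))  # per-layer thresholds
        self.K = K

    def forward(self, y):
        x = soft_threshold(self.W(y), self.theta[0])
        for k in range(1, self.K):
            x = soft_threshold(self.W(y) + self.S(x), self.theta[k])
        return x
```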
Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations and discuss diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems.
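A minimal sketch of the physics-informed learning idea described above, assuming PyTorch: a small network u(x) is trained so that the residual of a toy 1D Poisson equation u''(x) = f(x) vanishes at random collocation points, alongside the boundary conditions u(0) = u(1) = 0. Purely illustrative; the manufactured source below has the exact solution sin(pi x).

```python
import torch
import torch.nn as nn

u = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                  nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
f = lambda x: -torch.sin(torch.pi * x) * torch.pi ** 2   # manufactured source term

def pinn_loss():
    x = torch.rand(128, 1, requires_grad=True)           # random collocation points
    ux = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]    # u'(x)
    uxx = torch.autograd.grad(ux.sum(), x, create_graph=True)[0]     # u''(x)
    residual = ((uxx - f(x)) ** 2).mean()                # enforce the physics
    xb = torch.tensor([[0.0], [1.0]])
    boundary = (u(xb) ** 2).mean()                       # u(0) = u(1) = 0
    return residual + boundary
```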
Medical imaging is crucial in modern clinics to guide the diagnosis and treatment of diseases. Medical image reconstruction is one of the most fundamental and important components of medical imaging, whose major objective is to acquire high-quality medical images for clinical use at the lowest possible cost and risk to the patient. Mathematical models in medical image reconstruction, or more generally image restoration in computer vision, have been playing a prominent role. Earlier mathematical models were mostly designed from human knowledge or hypotheses about the image to be reconstructed; we shall call these handcrafted models. Later, handcrafted-plus-data-driven modeling started to emerge, which is still mostly based on human designs, while part of the model is learned from observed data. More recently, as more data and computational resources became available, deep learning-based models (or deep models) pushed data-driven modeling to the extreme, where the models are mostly learned with minimal human design. Both handcrafted and data-driven modeling have their own advantages and disadvantages, and one of the major research trends in medical imaging is to combine handcrafted modeling with deep modeling so that we can enjoy the benefits of both approaches. The main part of this article is a conceptual review of some recent works on deep modeling from the unrolled-dynamics viewpoint. This viewpoint stimulates new designs of neural network architectures, drawing inspiration from optimization algorithms and numerical differential equations. Given the popularity of deep modeling, there remain vast challenges in the field, as well as opportunities, which we discuss at the end of this article.
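A small sketch of the unrolled-dynamics viewpoint: a residual block x_{k+1} = x_k + h * f_theta(x_k) is exactly one forward-Euler step of the ODE dx/dt = f_theta(x), which is one way numerical differential equations inspire architecture design. The block below is illustrative only.

```python
import torch
import torch.nn as nn

class EulerBlock(nn.Module):
    def __init__(self, channels, h=0.1):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.h = h                                 # step size of the integrator

    def forward(self, x):
        return x + self.h * self.f(x)              # one explicit Euler step
```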
In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution. While this line of work has largely been driven by practical algorithms and experiments, it has also given rise to a variety of intriguing theoretical questions. In this paper, we survey some of the prominent theoretical developments in this line of work, in particular generative priors, untrained neural network priors, and unrolling algorithms. In addition to summarizing existing results on these topics, we highlight several ongoing challenges and open problems.
We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries. G-CNNs use G-convolutions, a new type of layer that enjoys a substantially higher degree of weight sharing than regular convolution layers. G-convolutions increase the expressive capacity of the network without increasing the number of parameters. Group convolution layers are easy to use and can be implemented with negligible computational overhead for discrete groups generated by translations, reflections and rotations. G-CNNs achieve state of the art results on CIFAR10 and rotated MNIST.
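To illustrate the weight sharing behind a G-convolution, below is a hedged sketch of the lifting layer for the p4 group (translations plus 90-degree rotations): one filter bank is applied in all four rotations, so rotating the input permutes the orientation channels rather than destroying the features. A complete G-CNN would follow this with group convolutions over the orientation axis; the code is illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4LiftingConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, k, k))
        self.pad = k // 2

    def forward(self, x):                            # x: (B, in_ch, H, W)
        outs = [F.conv2d(x, torch.rot90(self.weight, r, dims=(2, 3)), padding=self.pad)
                for r in range(4)]                   # one filter bank, four orientations
        return torch.stack(outs, dim=2)              # (B, out_ch, 4, H, W)
```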
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques. Such model-based methods utilize mathematical formulations that represent the underlying physics, prior information, and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies, and may lead to poor performance when real systems display complex or dynamic behavior. On the other hand, purely data-driven approaches are becoming increasingly popular as datasets grow abundant and the power of modern deep learning pipelines increases. Deep neural networks (DNNs) use generic architectures that learn to operate from data and demonstrate excellent performance, especially for supervised problems. However, DNNs typically require massive amounts of data and immense computational resources, limiting their applicability in some signal processing scenarios. We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches. Such model-based deep learning methods exploit both partial domain knowledge, via mathematical structures designed for specific problems, and learning from limited data. In this article, we survey the leading approaches for studying and designing model-based deep learning systems, dividing hybrid model-based/data-driven systems into categories based on their inference mechanism. We provide a comprehensive review of the leading methods for combining model-based algorithms with deep learning in a systematic fashion, along with concrete guidelines and detailed signal processing-oriented examples. Our aim is to facilitate the design and study of future systems at the intersection of signal processing and machine learning that incorporate the advantages of both domains.
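One canonical hybrid scheme in this space is plug-and-play reconstruction, where a model-based iterative algorithm keeps the mathematically specified data-fidelity step but swaps the handcrafted prior for a pretrained denoiser. A hedged sketch, with `A`, `A_adjoint`, and `denoiser` assumed given as callables:

```python
import torch

def pnp_pgd(y, A, A_adjoint, denoiser, step=1.0, iters=50):
    x = A_adjoint(y)                              # model-based initialization
    for _ in range(iters):
        x = x - step * A_adjoint(A(x) - y)        # gradient step on the data fidelity
        x = denoiser(x)                           # learned prior as a proximal map
    return x
```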
Many scientific fields study data with an underlying structure that is a non-Euclidean space. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions), and are natural targets for machine learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure, and in cases where the invariances of these structures are built into networks used to model them. Geometric deep learning is an umbrella term for emerging techniques attempting to generalize (structured) deep neural models to non-Euclidean domains such as graphs and manifolds. The purpose of this paper is to overview different examples of geometric deep learning problems and present available solutions, key difficulties, applications, and future research directions in this nascent field.
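As one concrete building block for such non-Euclidean data, the sketch below implements a graph convolution in the (densely written) Kipf-Welling style: node features are averaged over neighborhoods through a symmetrically normalized adjacency matrix, then mixed by a learned weight matrix. Purely illustrative.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, X, A):                      # X: (nodes, feat), A: (nodes, nodes)
        A_hat = A + torch.eye(A.shape[0], device=A.device)    # add self-loops
        d_inv_sqrt = torch.diag(A_hat.sum(dim=1).pow(-0.5))   # symmetric normalization
        return torch.relu(d_inv_sqrt @ A_hat @ d_inv_sqrt @ self.lin(X))
```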
Recent years have witnessed a growth in mathematics for deep learning--which seeks a deeper understanding of the concepts of deep learning with mathematics, and explores how to make it more robust--and deep learning for mathematics, where deep learning algorithms are used to solve problems in mathematics. The latter has popularised the field of scientific machine learning where deep learning is applied to problems in scientific computing. Specifically, more and more neural network architectures have been developed to solve specific classes of partial differential equations (PDEs). Such methods exploit properties that are inherent to PDEs and thus solve the PDEs better than classical feed-forward neural networks, recurrent neural networks, and convolutional neural networks. This has had a great impact in the area of mathematical modeling, where parametric PDEs are widely used to model most natural and physical processes arising in science and engineering. In this work, we review such methods and extend them for parametric studies as well as for solving the related inverse problems. We equally proceed to show their relevance in some industrial applications.
In many real-world settings, only incomplete measurement data are available for training, which can pose a problem for learning a reconstruction function. Indeed, it is generally not possible to learn with a single fixed incomplete measurement process, as there is no information about the signal model in the nullspace of the measurement operator. This limitation can be overcome by using measurements from multiple operators. While this idea has been successfully applied in various applications, a precise characterization of the conditions for learning is still lacking. In this paper, we fill this gap by presenting necessary and sufficient conditions for learning the underlying signal model needed for reconstruction, which indicate the interplay between the number of distinct measurement operators, the number of measurements per operator, the dimension of the model, and the dimension of the signals. Furthermore, we propose a novel and conceptually simple unsupervised learning loss that only requires access to incomplete measurement data and achieves performance on par with supervised learning when the sufficient conditions are verified. We validate our theoretical bounds and demonstrate the advantages of the proposed unsupervised loss over previous methods via a series of experiments on various imaging inverse problems, such as accelerated magnetic resonance imaging, compressed sensing, and image inpainting.
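A hedged sketch of the kind of unsupervised loss proposed: a measurement-consistency term under the operator that produced y, plus a cross-operator term that virtually re-measures the reconstruction with a different operator and asks for consistency. Here `operators` is a list of forward-operator callables and `model` takes the operator index; names are illustrative, not the paper's code.

```python
import random
import torch

def multi_operator_loss(model, y, g, operators):
    """y was measured with operators[g]; operators is a list of callables."""
    x_hat = model(y, g)
    mc = torch.sum((operators[g](x_hat) - y) ** 2)        # fit the observed data
    s = random.randrange(len(operators))                  # pick another operator
    y_s = operators[s](x_hat)                             # virtual re-measurement
    moi = torch.sum((model(y_s, s) - x_hat) ** 2)         # cross-operator consistency
    return mc + moi
```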
The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks to learn operators that map between infinite-dimensional function spaces. We formulate the approximation of operators by a composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. We prove a universal approximation theorem for our architecture. Moreover, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each. The proposed neural operators are resolution-invariant: they share the same network parameters across different discretizations of the underlying function spaces and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared to existing machine learning-based methodologies on Darcy flow and the Navier-Stokes equation, while being considerably faster than conventional PDE solvers.
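For concreteness, here is a compact sketch of a 1D Fourier-operator layer in PyTorch: transform to the frequency domain, apply learned weights to the lowest `modes` frequencies, transform back, and add a pointwise path. Because the parameters live on frequencies rather than grid points, the layer can be applied across discretizations (assuming `modes` does not exceed the spectrum length). Shapes and names are illustrative.

```python
import torch
import torch.nn as nn

class FourierLayer1d(nn.Module):
    def __init__(self, channels, modes=16):
        super().__init__()
        self.modes = modes
        self.weights = nn.Parameter(
            (1.0 / channels) * torch.randn(channels, channels, modes, dtype=torch.cfloat))
        self.pointwise = nn.Conv1d(channels, channels, 1)

    def forward(self, x):                          # x: (batch, channels, n)
        x_ft = torch.fft.rfft(x)                   # to the frequency domain
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weights)
        x_spec = torch.fft.irfft(out_ft, n=x.shape[-1])   # back to the grid
        return torch.relu(x_spec + self.pointwise(x))
```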
Generative modeling aims to uncover the underlying factors that give rise to observed data, which can often be modeled as natural symmetries manifesting themselves through invariance and equivariance to certain transformation laws. However, current approaches for representing these symmetries are couched in the formalism of continuous normalizing flows, which require the construction of equivariant vector fields, inhibiting their simple application to conventional high-dimensional generative modeling domains such as natural images. In this paper, we focus on building normalizing flows with discrete layers. We first theoretically prove the existence of an equivariant map for compact groups whose actions are on compact spaces. We further introduce three new equivariant flows: $G$-residual flows, $G$-coupling flows, and $G$-inverse autoregressive flows, which lift classical residual, coupling, and inverse autoregressive flows with equivariant maps to a prescribed group $G$. Notably, we show that $G$-residual flows are also universal, in the sense that any $G$-equivariant diffeomorphism can be exactly mapped by a $G$-residual flow. Finally, we complement our theoretical insights with experiments, for the first time, on image datasets such as CIFAR-10, and show that $G$-equivariant finite normalizing flows lead to increased data efficiency, faster convergence, and improved likelihood estimates.
In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of our work is the observation that unrolled iterative methods have the form of a CNN (filtering followed by point-wise non-linearity) when the normal operator (H*H, the adjoint of H times H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill-posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel-beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on the GPU. K.H. Jin acknowledges the support from the "EPFL Fellows" fellowship program co-funded by Marie Curie from the European Union's Horizon 2020 Framework Programme for Research and Innovation under grant agreement 665667.
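Schematically, the proposed approach composes a fast physics-based inversion with a residual CNN. The sketch below uses a stand-in `fbp` callable for filtered back-projection and an arbitrary CNN module; it is illustrative rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FBPConvNetSketch(nn.Module):
    def __init__(self, fbp, cnn: nn.Module):
        super().__init__()
        self.fbp = fbp                            # direct inversion (physics model)
        self.cnn = cnn                            # e.g., a U-Net-like denoiser

    def forward(self, sinogram):
        x0 = self.fbp(sinogram)                   # fast but artifact-prone estimate
        return x0 + self.cnn(x0)                  # residual learning removes artifacts
```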
Seismic data processing heavily relies on the solution of physics-driven inverse problems. In the presence of unfavorable data acquisition conditions (e.g., regular or irregular coarse sampling of sources and/or receivers), the underlying inverse problem becomes highly ill-posed and prior information is required to obtain a satisfactory solution. Sparsity-promoting inversion, coupled with fixed-basis sparsifying transforms, represents the go-to approach for many processing tasks, owing to its simplicity of implementation and proven successful application across a variety of acquisition scenarios. Leveraging the ability of deep neural networks to find compact representations of complex, multi-dimensional vector spaces, we propose to train an autoencoder network to learn a direct mapping between the input seismic data and a representative latent manifold. The trained decoder is subsequently used as a nonlinear preconditioner for the physics-driven inverse problem at hand. Synthetic and field data are presented for a variety of seismic processing tasks, and the proposed nonlinear, learned transformations are shown to outperform fixed-basis transforms and to converge faster to the sought solution.
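A hedged sketch of using a trained decoder as a nonlinear preconditioner: with the decoder D fixed, the inverse problem is reparameterized from the data space to the latent space, min_z ||A D(z) - y||^2, and solved by gradient descent on z. `A` and `decoder` are assumed given; names are illustrative.

```python
import torch

def precond_inversion(y, A, decoder, latent_dim, iters=200, lr=1e-2):
    z = torch.zeros(1, latent_dim, requires_grad=True)   # latent variable
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = torch.sum((A(decoder(z)) - y) ** 2)       # data misfit through D
        loss.backward()
        opt.step()
    return decoder(z).detach()                           # reconstructed data
```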
We present a novel machine learning architecture, Bispectral Neural Networks (BNNs), for learning representations of data that are invariant to the actions of groups on the space over which the signals are defined. The model incorporates the ansatz of the bispectrum, an analytically defined group invariant that is complete, that is, it preserves all signal structure while removing only the variation due to group actions. Here, we demonstrate that BNNs are able to discover arbitrary commutative group structure in data, and that trained models learn the irreducible representations of the groups, allowing the recovery of the group Cayley tables. Remarkably, trained networks learn to approximate the bispectra on these groups, and thus inherit the robustness, completeness, and generality of the analytical object.
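For reference, the bispectral ansatz can be stated concretely for the translation group: if $F(\omega)$ denotes the Fourier transform of a signal, the bispectrum $B(\omega_1, \omega_2) = F(\omega_1)\,F(\omega_2)\,\overline{F(\omega_1 + \omega_2)}$ is translation invariant, since a shift by $t$ multiplies $F(\omega)$ by $e^{-i\omega t}$ and the phase factors $e^{-i\omega_1 t} e^{-i\omega_2 t} e^{i(\omega_1 + \omega_2) t}$ cancel; the general construction replaces Fourier components with the irreducible representations of the group.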
Incorporating symmetry as an inductive bias into neural network architectures has led to improvements in generalization, data efficiency, and physical consistency in dynamics modeling. Methods such as CNNs and equivariant neural networks use weight tying to enforce symmetries such as shift invariance or rotational equivariance. However, despite the fact that physical laws obey many symmetries, real-world dynamical data rarely conform to strict mathematical symmetry, either due to noisy or incomplete data or to symmetry-breaking features in the underlying dynamical system. We explore approximately equivariant networks which are biased toward preserving symmetry but are not strictly constrained to do so. By relaxing the equivariance constraint, we find that our models can outperform both baselines with no symmetry bias and baselines with overly strict symmetry, on simulated turbulence domains as well as real-world multi-stream jet flow.
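One simple way to realize such a soft symmetry bias (a hedged sketch, not the paper's exact relaxed weight-sharing scheme) is to add a lightly weighted unconstrained branch to a strictly equivariant layer, optionally regularizing the mixing weight toward zero:

```python
import torch
import torch.nn as nn

class SoftEquivariantConv(nn.Module):
    def __init__(self, equivariant_layer: nn.Module, channels):
        super().__init__()
        self.equiv = equivariant_layer             # e.g., a group-equivariant conv
        self.free = nn.Conv2d(channels, channels, 3, padding=1)  # symmetry-breaking path
        self.alpha = nn.Parameter(torch.tensor(0.01))            # kept small by design

    def forward(self, x):
        return self.equiv(x) + self.alpha * self.free(x)
```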