Coherent microscopy techniques provide an unparalleled, multi-scale view of materials across scientific and technological fields, from structural materials to quantum devices, from integrated circuits to biological cells. Driven by the construction of brighter sources and high-rate detectors, coherent X-ray microscopy methods such as ptychography are poised to revolutionize nanoscale materials characterization. However, the associated significant increase in data and compute needs means that conventional approaches no longer suffice for recovering sample images in real time from high-speed coherent imaging experiments. Here, we demonstrate a workflow that leverages artificial intelligence at the edge and high-performance computing to enable real-time inversion of X-ray ptychography data streamed directly from the detector. The proposed AI-enabled workflow eliminates the sampling constraints imposed by traditional ptychography, thereby allowing low-dose imaging using orders of magnitude less data than required by traditional methods.
Ptychography is a well-studied phase imaging method that allows non-invasive imaging at the nanometer scale. It has developed into a mainstream technique with various applications in fields such as materials science and the defense industry. A major drawback of ptychography is its long data acquisition time, owing to the high overlap required between adjacent illumination areas to achieve a reasonable reconstruction. Traditional approaches that reduce the overlap between scan areas result in reconstructions with artifacts. In this paper, we propose complementing sparsely acquired or undersampled data with data sampled from a deep generative network to satisfy the oversampling requirement of ptychography. Because the deep generative network is pre-trained and its output can be computed while data are being collected, both the amount of experimental data and the acquisition time can be reduced. We validate the method by comparing its reconstruction quality against previously proposed and traditional approaches, and comment on the advantages and disadvantages of the proposed method.
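As a concrete illustration of the idea, the sketch below (PyTorch) shows a pre-trained generative network predicting the diffraction pattern at a skipped scan position from its measured neighbors, so that a sparse scan can be densified before a conventional reconstruction. The network shape, the choice of four neighbors, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: a pre-trained generative network fills in diffraction
# patterns at skipped scan positions so a sparse scan still satisfies
# ptychography's oversampling requirement. Architecture is illustrative.
import torch
import torch.nn as nn

class PatternGenerator(nn.Module):
    """Predicts the diffraction pattern at a skipped position from its
    k measured neighbors (stacked along the channel axis)."""
    def __init__(self, k_neighbors: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(k_neighbors, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),  # intensities >= 0
        )

    def forward(self, neighbor_patterns):  # (B, k, H, W) -> (B, 1, H, W)
        return self.net(neighbor_patterns)

# Usage: fill the gaps of a sparse scan, then run a standard solver (e.g. ePIE)
# on the combined measured + synthetic patterns.
gen = PatternGenerator(k_neighbors=4)    # assume weights pre-trained offline
measured = torch.rand(8, 4, 128, 128)    # neighbors of 8 skipped positions
with torch.no_grad():
    synthetic = gen(measured)            # patterns for the skipped positions
```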
Noninvasive optical imaging through dynamic scattering media has numerous important biomedical applications but remains a challenging task. While standard diffuse imaging methods measure optical absorption or fluorescence emission, it is also well established that the temporal correlation of scattered coherent light diffuses through tissue much like light intensity does. However, to date few works have aimed to experimentally measure and process such temporal correlation data to demonstrate video reconstruction of decorrelation dynamics deep within tissue. In this work, we use a single-photon avalanche diode (SPAD) array camera to simultaneously monitor, at the single-photon level, the temporal dynamics of speckle fluctuations delivered from 12 different phantom-tissue locations through a custom fiber-bundle array. We then apply a deep neural network to convert the acquired single-photon measurements into video of the scattering dynamics beneath rapidly decorrelating tissue phantoms. We demonstrate the ability to reconstruct images of transient (0.1-0.4 s) dynamic events occurring beneath decorrelating tissue phantoms at millimeter-scale resolution, and highlight how our model can flexibly extend to monitoring flow speed within buried phantom vessels.
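The temporal-correlation statistic at the heart of such measurements is the normalized intensity autocorrelation of the detected speckle: faster sample dynamics make it decay at shorter lag times. A minimal NumPy sketch, assuming a single channel's photon-count trace (the synthetic Poisson trace is only a stand-in for real SPAD data):

```python
import numpy as np

def g2(counts: np.ndarray, max_lag: int) -> np.ndarray:
    """Normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)> / <I>^2
    for one detector channel's photon-count time trace."""
    mean_sq = counts.mean() ** 2
    return np.array([
        np.mean(counts[: len(counts) - lag] * counts[lag:]) / mean_sq
        for lag in range(1, max_lag + 1)
    ])

# A g2 curve decaying toward 1 at shorter lags indicates faster speckle
# decorrelation, i.e., faster dynamics beneath the scattering layer.
trace = np.random.poisson(2.0, size=100_000)  # stand-in for one SPAD pixel
curve = g2(trace, max_lag=50)
```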
Noninvasive X-ray imaging of nanoscale three-dimensional objects, e.g. integrated circuits (ICs), generally requires two types of scanning: ptychographic, which is translational and returns estimates of the complex electromagnetic field through the IC; and tomographic scanning, which collects complex field projections from multiple angles. Here, we present Attentional Ptycho-Tomography (APT), an approach trained to provide accurate reconstructions of ICs despite incomplete measurements, using a dramatically reduced amount of angular scanning. The training process includes regularizing priors based on typical IC patterns and the physics of X-ray propagation. We demonstrate that APT with a 12-fold reduction in angles achieves fidelity comparable to the gold standard with the original set of angles. With the same reduced set of angles, APT also outperforms baseline reconstruction methods. In our experiments, APT achieves a 108-fold aggregate reduction in data acquisition and computation without compromising quality. We expect our physics-assisted machine learning framework could also be applied to other branches of nanoscale imaging.
Signal processing is a fundamental component of almost any sensor system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences comprise representative forms of signals that can be enhanced and analyzed for information extraction and quantification. Recent advances in artificial intelligence and machine learning are shifting research toward intelligent, data-driven signal processing. This roadmap presents a critical overview of the state-of-the-art methods and applications, aiming to highlight future challenges and research opportunities for next-generation measurement systems. It covers a broad spectrum of topics ranging from fundamental to industrial research, organized into concise thematic sections that reflect the trends and impact of current and future developments in each research field. In addition, it offers guidance to researchers and funding agencies in identifying new prospects.
Recent advances in machine learning methods, together with the emerging availability of programmable interfaces for scanning probe microscopes (SPMs), have propelled automated and autonomous microscopy to the forefront of the scientific community's attention. However, enabling automated microscopy requires the development of task-specific machine learning methods, an understanding of the interplay between physics discovery and machine learning, and fully defined discovery workflows. This, in turn, requires balancing the physical intuition and prior knowledge of domain scientists with the rewards that define experimental goals, and with machine learning algorithms that can translate these into specific experimental protocols. Here, we discuss the basic principles of Bayesian active learning and illustrate its application to SPM. We progress from Gaussian processes as a simple data-driven method, and Bayesian inference over physical models as an extension of physics-based functional fits, to more complex deep kernel learning methods, structured Gaussian processes, and hypothesis learning. These frameworks allow the use of prior data, the exploration of specific features encoded in spectral data, and the physical laws manifested during experiments. The discussed frameworks can be universally applied to all techniques combining imaging and spectroscopy, including SPM methods, nanoindentation, electron microscopy and spectroscopy, and chemical imaging methods, and are particularly impactful for destructive or irreversible measurements.
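The simplest instance of the Bayesian active learning loop described here is uncertainty-driven measurement selection with a Gaussian process surrogate. A minimal sketch with scikit-learn, assuming a 1D scan coordinate and a synthetic stand-in for the measured property (the kernel and acquisition rule are illustrative choices, not a specific SPM protocol):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 200).reshape(-1, 1)      # candidate probe positions
truth = lambda x: np.sin(12 * x) * np.exp(-x)      # hidden structure-property map

X = grid[rng.choice(len(grid), 5, replace=False)]  # a few seed measurements
y = truth(X).ravel()

for step in range(20):
    # Refit the surrogate, then measure where the model is least certain.
    gp = GaussianProcessRegressor(RBF(0.1) + WhiteKernel(1e-3)).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(sigma)]                # uncertainty sampling
    X = np.vstack([X, x_next])
    y = np.append(y, truth(x_next))
```

Richer strategies (expected improvement, physics-based hypotheses, deep kernels) slot in by replacing the `np.argmax(sigma)` acquisition rule and the surrogate model.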
The plaque assay is the gold-standard method for quantifying the concentration of replication-competent lytic virions. Accelerating and automating viral plaque analysis would significantly benefit clinical diagnostics, vaccine development, and the production of recombinant proteins or antiviral agents. Here, we present a rapid and stain-free quantitative viral plaque assay using lens-free holographic imaging and deep learning. This cost-effective, compact, and automated device significantly reduces the incubation time needed for traditional plaque assays while preserving their advantages over other virus quantification methods. The device captures ~0.32 gigapixels/hour of phase information per test well in a label-free manner, covering an area of ~30x30 mm^2 and completely eliminating staining. We demonstrated the success of this computational method using Vero E6 cells and vesicular stomatitis virus. Using a neural network, this stain-free device automatically detected the first cell lysing events as early as 5 hours after incubation and achieved a >90% detection rate of plaque-forming units (PFUs) with 100% specificity in <20 hours, providing major time savings compared to traditional plaque assays, which take ~48 hours or longer. This data-driven plaque assay also offers the ability to quantify the infected area of the cell monolayer, performing automated counting and quantification of PFUs and virus-infected areas over a 10-fold larger dynamic range of virus concentration than standard viral plaque assays. This compact, low-cost, automated PFU quantification device can be broadly used in virology research, vaccine development, and clinical applications.
Single-particle imaging (SPI) at X-ray free-electron lasers (XFELs) is particularly well suited to determining the 3D structure of particles in their native environment. For a successful reconstruction, diffraction patterns originating from a single particle must be isolated from the large number of acquired patterns. We propose to formulate this task as an image-classification problem and solve it using convolutional neural network (CNN) architectures. Two CNN configurations are developed: one that maximizes the F1 score and one that emphasizes high recall. We also combine the CNNs with expectation-maximization (EM) selection as well as size filtering. We observe that our CNN selections have lower contrast in the power spectral density functions relative to the EM selection used in our previous work. However, reconstructions based on the CNN selections give similar results. Introducing CNNs into SPI experiments allows the reconstruction pipeline to be simplified, enables researchers to classify patterns on the fly, and, as a consequence, enables them to tightly control the duration of their experiments. We believe that including non-standard artificial intelligence (AI)-based solutions in the described SPI analysis workflow may be beneficial for the future development of SPI experiments.
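A minimal PyTorch sketch of the classification setup: a small CNN produces a single hit/non-hit logit per diffraction pattern, and up-weighting the positive class in the loss is one simple way to obtain a high-recall configuration of the kind mentioned above. The architecture and weighting are illustrative assumptions, not the paper's networks.

```python
import torch
import torch.nn as nn

class HitClassifier(nn.Module):
    """Tiny CNN labelling a diffraction pattern as single-particle hit or not."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single hit/non-hit logit

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = HitClassifier()
# Up-weighting the positive class trades precision for recall: the simplest
# knob behind a "high recall" configuration.
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([4.0]))

patterns = torch.rand(16, 1, 128, 128)  # a batch of detector frames
labels = torch.randint(0, 2, (16, 1)).float()
loss = loss_fn(model(patterns), labels)
loss.backward()
```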
Extracting actionable information rapidly from data produced by instruments such as the Linac Coherent Light Source (LCLS-II) and the Advanced Photon Source Upgrade (APS-U) is becoming increasingly challenging due to high (up to TB/s) data rates. Conventional physics-based information retrieval methods struggle to detect interesting events quickly enough to enable timely focusing on rare events or correction of errors. Machine learning (ML) methods that learn inexpensive surrogate classifiers are a promising alternative, but they can fail catastrophically when changes in instrument or sample lead to ML performance degradation. To overcome such difficulties, we present a new data storage and ML model training architecture designed to organize large volumes of data and models so that, when model degradation is detected, prior models and/or data can be queried rapidly and a more suitable model fine-tuned for the new conditions. We show that our approach can achieve up to 100x data labeling speedup compared with the current state of the art, a 200x improvement in training speed, and a 92x speedup in end-to-end model update time.
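The control flow such an architecture enables might look like the following sketch: detect degradation of the deployed classifier, query the store for the closest prior checkpoint, and fine-tune it rather than retraining from scratch. The registry API, similarity rule, and threshold are all illustrative assumptions, not the paper's actual storage design.

```python
# Minimal sketch of a query-then-fine-tune loop over a model store.
from dataclasses import dataclass, field

@dataclass
class ModelStore:
    checkpoints: dict = field(default_factory=dict)  # metadata key -> weights

    def save(self, key, weights):
        self.checkpoints[key] = weights

    def nearest(self, key):
        # Stand-in similarity: exact metadata match, else any prior checkpoint.
        return self.checkpoints.get(key) or next(iter(self.checkpoints.values()))

def drift_detected(recent_accuracy: float, threshold: float = 0.8) -> bool:
    return recent_accuracy < threshold

store = ModelStore()
store.save(("sample_A", "detector_1"), {"w": 0})     # from an earlier experiment

if drift_detected(recent_accuracy=0.55):
    warm_start = store.nearest(("sample_B", "detector_1"))
    # fine_tune(warm_start, new_frames)  # far cheaper than training from scratch
```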
Resolving morphological chemical phase transformations at the nanoscale is of vital importance to many scientific and industrial applications across various disciplines. The TXM-XANES imaging technique, which combines full-field transmission X-ray microscopy (TXM) and X-ray absorption near-edge structure (XANES), is an emerging tool that operates by acquiring a series of microscopy images with multi-energy X-rays and fitting them to obtain a chemical map. Its capability, however, is limited by the poor signal-to-noise ratios caused by system errors and the low-exposure illumination used for fast acquisition. In this work, by exploiting the intrinsic properties of TXM-XANES imaging data through subspace modeling, we introduce a simple and robust denoising method to improve image quality, enabling fast and high-sensitivity chemical imaging. Extensive experiments on both synthetic and real datasets demonstrate the superiority of the proposed method.
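Truncated SVD is the simplest form of the subspace modeling invoked here: spectra across energies are highly correlated, so projecting the image stack onto a few leading singular components retains the chemical signal while suppressing noise. A NumPy sketch under that assumption (the rank and the paper's exact estimator may differ):

```python
import numpy as np

def subspace_denoise(stack: np.ndarray, rank: int) -> np.ndarray:
    """Project a (n_energies, H, W) image stack onto its leading singular
    subspace; a small rank captures the correlated spectral signal while
    discarding most of the noise."""
    E, H, W = stack.shape
    M = stack.reshape(E, H * W)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    low_rank = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
    return low_rank.reshape(E, H, W)

noisy = np.random.rand(60, 256, 256).astype(np.float32)  # 60 energy points
clean = subspace_denoise(noisy, rank=4)
```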
Imaging, scattering, and spectroscopy are fundamental to understanding and discovering new functional materials. Contemporary innovations in automation and experimental techniques have made these measurements faster and of higher resolution, producing a deluge of analytical data. These innovations are particularly pronounced at user facilities and synchrotron light sources. Machine learning (ML) methods are regularly developed to process and interpret large datasets in real time. However, there remain conceptual barriers to entry for the general user community of facilities, which often lacks expertise in ML, as well as technical barriers to deploying ML models. Here, we demonstrate a variety of archetypal ML models for on-the-fly analysis at multiple beamlines of the National Synchrotron Light Source II (NSLS-II). We describe these examples instructively, focusing on integrating the models into existing experimental workflows, so that readers can easily incorporate their own ML techniques into experiments at NSLS-II or at facilities with common infrastructure. The framework presented here demonstrates how, with little effort, diverse ML models can operate in conjunction with feedback loops via integration into the existing Bluesky suite for experimental orchestration and data management.
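In practice, this integration pattern amounts to subscribing an ML agent to the live document stream of a Bluesky RunEngine. A minimal sketch using Bluesky's simulated hardware; the `analyze` model is a placeholder assumption standing in for any trained model, while the RunEngine subscription itself is standard Bluesky usage:

```python
# Hook an ML model into a Bluesky experiment via a document-stream callback.
from bluesky import RunEngine
from bluesky.plans import count
from ophyd.sim import det  # simulated detector shipped with ophyd

def ml_callback(name, doc):
    if name == "event":                    # one "event" document per reading
        reading = doc["data"]["det"]
        prediction = analyze(reading)      # placeholder for a trained model
        print(f"ML suggestion: {prediction}")

analyze = lambda x: "keep scanning" if x > 0.5 else "move on"

RE = RunEngine({})
RE.subscribe(ml_callback)                  # feedback loop on the live stream
RE(count([det], num=5))
```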
Advanced wearable devices are increasingly incorporating high-resolution multi-camera systems. As the state-of-the-art neural networks for processing the resulting image data are computationally demanding, there has been growing interest in leveraging fifth-generation (5G) wireless connectivity and mobile edge computing to offload this processing to the cloud. To assess this possibility, this paper presents a detailed simulation and evaluation of 5G wireless offloading for object detection within a powerful new smart wearable called VIS4ION, for the blind and visually impaired (BVI). The current VIS4ION system is an instrumented book-bag with high-resolution cameras, vision processing, and haptic and audio feedback. The paper considers uploading the camera data to a mobile edge cloud to perform real-time object detection and transmitting the detection results back to the wearable. To determine the video requirements, the paper evaluates the impact of video bitrate and resolution on object detection accuracy and range. A new street-scene dataset with labeled objects relevant to BVI navigation is leveraged for the analysis. The vision evaluation is combined with a detailed full-stack wireless network simulation to determine the distribution of throughputs and latencies, using both practical navigation paths and ray tracing from new high-resolution 3D models of an urban environment. For comparison, the wireless simulation considers both a standard 4G Long Term Evolution (LTE) carrier and a high-rate 5G millimeter-wave (mmWave) carrier. The work thus provides a thorough and realistic assessment of edge computing with mmWave connectivity in an application with both high bandwidth and low latency requirements.
We present a simple but novel hybrid approach to hyperspectral data cube reconstruction from computed tomography imaging spectrometry (CTIS) images that sequentially combines neural networks and the iterative Expectation Maximization (EM) algorithm. We train and test the ability of the method to reconstruct data cubes of $100\times100\times25$ and $100\times100\times100$ voxels, corresponding to 25 and 100 spectral channels, from simulated CTIS images generated by our CTIS simulator. The hybrid approach utilizes the inherent strength of the Convolutional Neural Network (CNN) with regard to noise and its ability to yield consistent reconstructions, and makes use of the EM algorithm's ability to generalize to spectral images of any object without training. The hybrid approach achieves better performance than both the CNNs and EM alone for seen (included in CNN training) and unseen (excluded from CNN training) cubes in both the 25- and 100-channel cases. For the 25 spectral channels, the improvements from CNN to the hybrid model (CNN + EM) in terms of mean-squared error are between 14% and 26%. For 100 spectral channels, improvements between 19% and 40% are attained, with the largest improvement of 40% for the unseen data, to which the CNNs are not exposed during training.
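The EM half of the hybrid is, in its simplest form, a multiplicative update for a nonnegative linear model warm-started from the CNN prediction. A NumPy sketch with toy, flattened dimensions (the real CTIS system matrix and cube shapes differ):

```python
import numpy as np

def em_refine(A: np.ndarray, y: np.ndarray, x0: np.ndarray, n_iter: int = 50):
    """Multiplicative EM updates for the linear model y = A @ x with x >= 0,
    warm-started from a CNN prediction x0 -- the 'CNN + EM' hybrid idea."""
    x = x0.copy()
    norm = A.T @ np.ones_like(y)                  # column sums of A
    for _ in range(n_iter):
        x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / np.maximum(norm, 1e-12)
    return x

# Toy stand-in: 500 detector pixels, a 100-voxel slice of the data cube.
rng = np.random.default_rng(1)
A = rng.random((500, 100))
x_true = rng.random(100)
y = A @ x_true
x_cnn = x_true + 0.1 * rng.random(100)            # pretend CNN output
x_hat = em_refine(A, y, x_cnn)
```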
Neural networks have shown great potential for compressing volume data for scientific visualization. However, due to the high cost of training and inference, such volumetric neural representations have thus far only been applied to offline data processing and non-interactive rendering. In this paper, we demonstrate that by simultaneously leveraging modern GPU tensor cores, a native CUDA neural network framework, and online training, we can achieve high-performance and high-efficiency interactive ray tracing with volumetric neural representations. Furthermore, our method is fully generalizable and can adapt to time-varying datasets. We present three strategies for online training, each exploiting a different combination of GPU, CPU, and out-of-core streaming techniques. We also develop three rendering implementations that allow interactive ray tracing to be combined with real-time volume decoding, sample streaming, and in-shader neural network inference. We demonstrate that our volumetric neural representations can scale to terascale for regular-grid volume visualization, and can easily support irregular data structures such as OpenVDB, unstructured, AMR, and particle volume data.
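At its core, a volumetric neural representation is a network mapping a 3D coordinate to a density that a ray tracer can query per sample; online training fits it to the volume while rendering proceeds. A plain-PyTorch sketch of that core (the paper's implementation relies on GPU tensor cores and a native CUDA framework; the architecture and hyperparameters here are illustrative):

```python
import torch
import torch.nn as nn

class NeuralVolume(nn.Module):
    """MLP mapping a normalized (x, y, z) coordinate to a scalar density --
    the decoder a ray tracer queries at each sample point."""
    def __init__(self, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1),
        )

    def forward(self, xyz):
        return self.net(xyz)

volume = torch.rand(64, 64, 64)                  # ground-truth volume data
model = NeuralVolume()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                          # online training loop
    idx = torch.randint(0, 64, (4096, 3))        # random voxel samples
    target = volume[idx[:, 0], idx[:, 1], idx[:, 2]].unsqueeze(1)
    pred = model(idx.float() / 63.0)             # normalized coordinates
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()
```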
Ever since the first microscope by Zacharias Janssen in the late 16th century, scientists have been inventing new types of microscopes for various tasks. Inventing a novel architecture demands years, if not decades, of scientific experience and creativity. In this work, we introduce Differentiable Microscopy ($\partial\mu$), a deep learning-based design paradigm, to aid scientists in designing new interpretable microscope architectures. Differentiable microscopy first models a common physics-based optical system, albeit with trainable optical elements at key locations on the optical path. Using pre-acquired data, we then train the model end-to-end for a task of interest. The learnt design proposal can then be simplified by interpreting the learnt optical elements. As a first demonstration, based on the optical 4-$f$ system, we present an all-optical quantitative phase microscope (QPM) design that requires no computational post-reconstruction. A follow-up literature survey suggested that the learnt architecture is similar to the generalized phase contrast method developed two decades ago. Our extensive experiments on multiple datasets that include biological samples show that our learnt all-optical QPM designs consistently outperform existing methods. We experimentally verify the functionality of the optical 4-$f$ system-based QPM design using a spatial light modulator. Furthermore, we also demonstrate that similar results can be achieved by an uninterpretable learning-based method, namely diffractive deep neural networks (D2NN). The proposed differentiable microscopy framework supplements the creative process of designing new optical systems and would perhaps lead to unconventional but better optical designs.
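A minimal sketch of the core mechanism, assuming an idealized noise-free 4-$f$ model in PyTorch: the Fourier-plane element is a trainable phase mask, and because every step is differentiable, it can be trained end-to-end, e.g. so that the output intensity approximates the input phase (the all-optical QPM task). The loss and normalization are illustrative, not the paper's exact training objective:

```python
import torch
import torch.nn as nn

class FourFSystem(nn.Module):
    """Differentiable 4-f optical system: the first lens takes a Fourier
    transform, a trainable element modulates the Fourier plane, and the
    second lens transforms back."""
    def __init__(self, n: int = 128):
        super().__init__()
        self.phase = nn.Parameter(torch.zeros(n, n))  # learnable optical element

    def forward(self, field):                          # complex input field
        spectrum = torch.fft.fft2(field)
        spectrum = spectrum * torch.exp(1j * self.phase)
        return torch.fft.ifft2(spectrum)

# Train so the output *intensity* reveals the input *phase*.
model = FourFSystem()
phase_object = torch.rand(128, 128)                    # transparent sample
field_in = torch.exp(1j * phase_object)
intensity_out = model(field_in).abs() ** 2
loss = nn.functional.mse_loss(intensity_out, phase_object)
loss.backward()                                        # gradients reach the mask
```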
Computed tomography (CT) is an imaging technique in which information about an object is collected at different angles (called projections or scans). A cross-sectional image revealing the internal structure of a slice is then produced by solving an inverse problem. Limited by certain factors such as radiation dose and the number of projection angles, the resulting images can be noisy or contain artifacts. Inspired by the success of Transformers in natural language processing, the core idea of this preliminary study is to treat the projections of a tomographic scan as word tokens, and the overall scan of a cross-section (a.k.a. the sinogram) as a sentence. We then explore the idea of foundation models by training a masked sinogram model (MSM) and fine-tuning the MSM for various downstream applications, including constraints on data collection (e.g., photon budget) and data-driven solutions for approximating the inverse problem of CT reconstruction. The model and data used in this study are available at https://github.com/lzhengchun/TomoTx.
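A minimal sketch of the projections-as-tokens idea, assuming a BERT-style masked-prediction objective with a standard Transformer encoder; the dimensions, masking rate, and head counts are illustrative rather than the TomoTx configuration:

```python
import torch
import torch.nn as nn

class MaskedSinogramModel(nn.Module):
    """Each projection (one detector row per angle) is a token; masked
    tokens are predicted from the visible ones, BERT-style."""
    def __init__(self, n_detector: int = 256, d_model: int = 128, n_angles: int = 180):
        super().__init__()
        self.embed = nn.Linear(n_detector, d_model)
        self.pos = nn.Parameter(torch.zeros(n_angles, d_model))
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_detector)

    def forward(self, sinogram, mask):         # sinogram: (B, angles, detector)
        tokens = self.embed(sinogram)
        tokens = torch.where(                  # hide the masked projections
            mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        return self.head(self.encoder(tokens + self.pos))

model = MaskedSinogramModel()
sino = torch.rand(2, 180, 256)
mask = torch.rand(2, 180) < 0.5                # e.g. a limited photon budget
loss = nn.functional.mse_loss(model(sino, mask)[mask], sino[mask])
loss.backward()
```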
Multispectral imaging has been used for numerous applications in e.g., environmental monitoring, aerospace, defense, and biomedicine. Here, we present a diffractive optical network-based multispectral imaging system trained using deep learning to create a virtual spectral filter array at the output image field-of-view. This diffractive multispectral imager performs spatially-coherent imaging over a large spectrum, and at the same time, routes a pre-determined set of spectral channels onto an array of pixels at the output plane, converting a monochrome focal plane array or image sensor into a multispectral imaging device without any spectral filters or image recovery algorithms. Furthermore, the spectral responsivity of this diffractive multispectral imager is not sensitive to input polarization states. Through numerical simulations, we present different diffractive network designs that achieve snapshot multispectral imaging with 4, 9 and 16 unique spectral bands within the visible spectrum, based on passive spatially-structured diffractive surfaces, with a compact design that axially spans ~72 times the mean wavelength of the spectral band of interest. Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially-repeating virtual spectral filter array with 2x2=4 unique bands at terahertz spectrum. Due to their compact form factor and computation-free, power-efficient and polarization-insensitive forward operation, diffractive multispectral imagers can be transformative for various imaging and sensing applications and be used at different parts of the electromagnetic spectrum where high-density and wide-area multispectral pixel arrays are not widely available.
Cryo Focused Ion-Beam Scanning Electron Microscopy (cryo FIB-SEM) enables three-dimensional and nanoscale imaging of biological specimens via a slice-and-view mechanism. FIB-SEM experiments are, however, limited by a slow (typically several hours) acquisition process, and the high electron doses imposed on the beam-sensitive specimen can cause damage. In this work, we present a compressive sensing variant of cryo FIB-SEM capable of reducing the operational electron dose and increasing speed. We propose two Targeted Sampling (TS) strategies that leverage the reconstructed image of the previous sample layer as a prior for designing the next subsampling mask. Our image recovery is based on a blind Bayesian dictionary learning approach, i.e., Beta Process Factor Analysis (BPFA). This method is experimentally viable due to our ultra-fast GPU-based implementation of BPFA. Simulations on artificial compressive FIB-SEM measurements validate the success of the proposed methods: the operational electron dose can be reduced by up to 20 times. These methods have large implications for the cryo FIB-SEM community, in which imaging beam-sensitive biological materials without beam damage is crucial.
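One way to picture a targeted sampling strategy: spend part of the pixel budget on locations the previous layer's reconstruction marks as structured (high gradient), and the rest uniformly at random so new features can still be discovered. The NumPy sketch below is a simplified stand-in for the paper's TS strategies; image recovery itself would then use BPFA.

```python
import numpy as np

def targeted_mask(prev_layer: np.ndarray, budget: float) -> np.ndarray:
    """Build the next subsampling mask from the previous layer's
    reconstruction: salient pixels get priority, the remaining budget
    is spent uniformly at random."""
    gy, gx = np.gradient(prev_layer.astype(float))
    saliency = np.hypot(gx, gy)
    n_pixels = prev_layer.size
    n_sample = int(budget * n_pixels)
    mask = np.zeros(n_pixels, dtype=bool)
    # Half the budget on the most salient pixels, half at random.
    top = np.argsort(saliency.ravel())[-(n_sample // 2):]
    mask[top] = True
    rest = np.random.choice(np.flatnonzero(~mask), n_sample - mask.sum(),
                            replace=False)
    mask[rest] = True
    return mask.reshape(prev_layer.shape)

prev = np.random.rand(256, 256)          # stand-in for a reconstructed slice
mask = targeted_mask(prev, budget=0.05)  # dose reduced to 5% of the pixels
```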
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
We present a novel single-shot interferometric ToF camera targeted for precise 3D measurements of dynamic objects. The camera concept is based on Synthetic Wavelength Interferometry, a technique that allows retrieval of depth maps of objects with optically rough surfaces at submillimeter depth precision. In contrast to conventional ToF cameras, our device uses only off-the-shelf CCD/CMOS detectors and works at their native chip resolution (as of today, theoretically up to 20 Mp and beyond). Moreover, we can obtain a full 3D model of the object in single-shot, meaning that no temporal sequence of exposures or temporal illumination modulation (such as amplitude or frequency modulation) is necessary, which makes our camera robust against object motion. In this paper, we introduce the novel camera concept and show first measurements that demonstrate the capabilities of our system. We present 3D measurements of small (cm-sized) objects with > 2 Mp point cloud resolution (the resolution of our used detector) and up to sub-mm depth precision. We also report a "single-shot 3D video" acquisition and a first single-shot "Non-Line-of-Sight" measurement. Our technique has great potential for high-precision applications with dynamic object movement, e.g., in AR/VR, industrial inspection, medical imaging, and imaging through scattering media like fog or human tissue.
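The principle behind Synthetic Wavelength Interferometry can be summarized in a few lines: two closely spaced optical wavelengths $\lambda_1, \lambda_2$ form a synthetic wavelength $\Lambda = \lambda_1\lambda_2 / |\lambda_1 - \lambda_2|$ that is orders of magnitude longer, so the phase of their beat encodes depth unambiguously over $\Lambda/2$ even on optically rough surfaces. A NumPy sketch with illustrative wavelengths and random stand-in phase maps (not the authors' system parameters):

```python
import numpy as np

# Two optical wavelengths give a much longer synthetic wavelength
# Lambda = l1 * l2 / |l1 - l2|, whose phase encodes depth unambiguously
# over Lambda / 2 despite the per-wavelength speckle on rough surfaces.
l1, l2 = 854.0e-9, 855.0e-9                          # meters (illustrative)
synthetic = l1 * l2 / abs(l1 - l2)                   # ~0.73 mm here

phi1 = np.random.uniform(0, 2 * np.pi, (480, 640))   # per-pixel phase maps
phi2 = np.random.uniform(0, 2 * np.pi, (480, 640))   # (stand-ins for data)
synthetic_phase = np.mod(phi1 - phi2, 2 * np.pi)
depth = synthetic_phase / (4 * np.pi) * synthetic    # factor 2: round trip
```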