Operator networks have emerged as promising deep learning tools for approximating the solution of partial differential equations (PDEs). These networks map input functions that describe material properties, forcing functions and boundary data to the solution of a PDE. This work describes a new architecture for operator networks that mimics the form of the numerical solution obtained from an approximate variational or weak formulation of the problem. The application of these ideas to a generic elliptic PDE leads to a variationally mimetic operator network (VarMiON). Like the conventional deep operator network (DeepONet), the VarMiON is composed of one sub-network that constructs the basis functions for the output and another that constructs the coefficients of these basis functions. However, in contrast to the DeepONet, the architecture of these sub-networks is precisely determined in the VarMiON. An analysis of the error in the VarMiON solution reveals that it contains contributions from the error in the training data, the training error, the quadrature error incurred in sampling the input and output functions, and a "covering error" that measures the distance between a test input function and the nearest function in the training dataset. It also depends on the stability constants of the exact network and its VarMiON approximation. The application of the VarMiON to a canonical elliptic PDE shows that, for approximately the same number of network parameters, the VarMiON incurs smaller errors on average than a standard DeepONet. Further, its performance is more robust to variations in the input functions, the techniques used to sample the input and output functions, the techniques used to construct the basis functions, and the number of input functions.
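To make the two-sub-network structure concrete, the following is a minimal PyTorch-style sketch of a DeepONet/VarMiON-like operator network, in which one sub-network maps sampled values of the input function to coefficients and another maps an output coordinate to basis-function values. The layer widths and the names `coeff_net` and `basis_net` are illustrative assumptions; the sketch does not reproduce the precise sub-network architectures that the variational formulation prescribes for the VarMiON.

```python
import torch
import torch.nn as nn

class OperatorNet(nn.Module):
    """DeepONet-style operator network: u(x) ~ sum_k c_k(f) * phi_k(x).

    Hypothetical layer sizes; in the VarMiON the sub-network architectures
    are fixed by the discrete variational formulation rather than chosen freely.
    """

    def __init__(self, n_sensors: int, n_basis: int = 64):
        super().__init__()
        # Coefficient ("branch") network: maps sampled input-function values
        # f(x_1), ..., f(x_m) to coefficients c_1, ..., c_K.
        self.coeff_net = nn.Sequential(
            nn.Linear(n_sensors, 128), nn.Tanh(),
            nn.Linear(128, n_basis),
        )
        # Basis ("trunk") network: maps an output coordinate x to basis-function
        # values phi_1(x), ..., phi_K(x).
        self.basis_net = nn.Sequential(
            nn.Linear(2, 128), nn.Tanh(),
            nn.Linear(128, n_basis),
        )

    def forward(self, f_samples: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # f_samples: (batch, n_sensors); x: (n_points, 2)
        coeffs = self.coeff_net(f_samples)   # (batch, n_basis)
        basis = self.basis_net(x)            # (n_points, n_basis)
        return coeffs @ basis.T              # (batch, n_points)
```

In the VarMiON, the key difference from a generic DeepONet of this kind is that the forms of the two sub-networks are dictated by the discretised weak form rather than chosen freely.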
Imaging markers of cerebral small vessel disease provide valuable information about brain health, but their manual assessment is time-consuming and hampered by substantial intra- and inter-rater variability. Automated rating could benefit biomedical research as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here we present the \textit{VAscular Lesions DetectiOn and Segmentation} (\textit{Where is VALDO?}) challenge, which was run as a satellite event of the international conference on Medical Image Computing and Computer Aided Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for the automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more of the tasks (4 for Task 1 - EPVS, 9 for Task 2 - Microbleeds and 6 for Task 3 - Lacunes). Multi-cohort data were used for both training and evaluation. The results show substantial variability in performance both across teams and across tasks, with particularly promising results for Task 1 - EPVS and Task 2 - Microbleeds, and no practically usable results yet for Task 3 - Lacunes. They also highlight inconsistencies in performance across cases that may deter use at the individual level, while still proving useful at the population level.
Risk-based active learning is an approach to developing statistical classifiers for online decision-support. In this approach, data-label querying is guided according to the expected value of perfect information for incipient data points. For SHM applications, the value of information is evaluated with respect to a maintenance decision process, and data-label querying corresponds to the inspection of a structure to determine its health state. Sampling bias is a known issue within active-learning paradigms; it occurs when an active learning process over- or under-samples specific regions of the feature space, resulting in a training set that is not representative of the underlying distribution. This bias ultimately degrades decision-making performance and consequently incurs unnecessary costs. The current paper outlines a risk-based approach to active learning that utilises a semi-supervised Gaussian mixture model. The semi-supervised approach counteracts sampling bias by incorporating unlabelled data into the model via an EM algorithm. The approach is demonstrated on a numerical example representative of the decision processes found in SHM.
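As a rough illustration of how unlabelled data can be folded into the mixture model, the sketch below fits one Gaussian component per class with an EM loop in which labelled points carry hard responsibilities and unlabelled points carry soft posteriors. The one-component-per-class assumption and the regularisation constant are simplifications for illustration rather than the exact formulation used in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def semi_supervised_gmm(X_lab, y_lab, X_unlab, n_classes, n_iter=50):
    """Fit one Gaussian component per class using both labelled and
    unlabelled data via EM (illustrative sketch)."""
    d = X_lab.shape[1]
    # Initialise each component from the labelled data alone.
    means = np.array([X_lab[y_lab == k].mean(axis=0) for k in range(n_classes)])
    covs = np.array([np.cov(X_lab[y_lab == k].T) + 1e-6 * np.eye(d)
                     for k in range(n_classes)])
    weights = np.full(n_classes, 1.0 / n_classes)

    for _ in range(n_iter):
        # E-step: labelled points get hard (one-hot) responsibilities,
        # unlabelled points get soft posteriors under the current model.
        r_lab = np.eye(n_classes)[y_lab]
        lik = np.column_stack([
            weights[k] * multivariate_normal.pdf(X_unlab, means[k], covs[k])
            for k in range(n_classes)])
        r_unlab = lik / lik.sum(axis=1, keepdims=True)

        X = np.vstack([X_lab, X_unlab])
        R = np.vstack([r_lab, r_unlab])

        # M-step: weighted updates over the combined dataset.
        Nk = R.sum(axis=0)
        weights = Nk / Nk.sum()
        means = (R.T @ X) / Nk[:, None]
        for k in range(n_classes):
            diff = X - means[k]
            covs[k] = (R[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
    return weights, means, covs
```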
The ability to make informed decisions regarding the operation and maintenance of structures provides motivation for the implementation of structural health monitoring (SHM) systems. However, descriptive labels for measured data corresponding to the health states of the monitored system are often unavailable. This issue limits the applicability of fully supervised machine-learning paradigms for developing statistical classifiers to support decision-making in SHM systems. One approach to addressing this problem is risk-based active learning, in which data-label querying is guided according to the expected value of perfect information for incipient data points. For risk-based active learning in SHM, the value of information can be evaluated with respect to a maintenance decision process, and data-label querying corresponds to the inspection of a structure to determine its health state. In the context of SHM, risk-based active learning has so far only been considered with generative classifiers. The current paper demonstrates several advantages of using an alternative type of classifier - discriminative models. Using the Z24 Bridge dataset as a case study, it is shown that, in the context of SHM decision-support, discriminative classifiers offer benefits including improved robustness to sampling bias and reduced expenditure on structural inspections.
Classification models are a fundamental component of physical-asset management technologies such as structural health monitoring (SHM) systems and digital twins. Previous work introduced \textit{risk-based active learning}, an online approach for developing statistical classifiers that accounts for the decision-support context in which they are applied. Decision-making is considered by preferentially querying data labels according to the \textit{expected value of perfect information} (EVPI). Although several benefits are gained by adopting a risk-based active learning approach, including improved decision-making performance, the algorithm suffers from issues related to the sampling bias induced by the guided querying process. This sampling bias ultimately manifests as a decline in decision-making performance during the later stages of active learning, which in turn corresponds to lost resource/utility. The current paper proposes two novel approaches to counteract the effects of sampling bias: \textit{semi-supervised learning} and \textit{discriminative classification models}. These approaches are first demonstrated using a synthetic dataset and subsequently applied to an experimental case study, specifically the Z24 Bridge dataset. The semi-supervised learning approach is shown to have variable performance, with robustness to sampling bias depending on the suitability of the generative distributions selected for the model with respect to each dataset. In contrast, the discriminative classifiers prove to have excellent robustness to the effects of sampling bias. Moreover, it is found that the number of inspections made during a monitoring campaign, and therefore the cost, can be reduced through careful selection of the statistical classifier used within a decision-supporting monitoring system.
The prospect of gaining the ability to make informed decisions regarding the operation and maintenance of structures provides the principal motivation for developing and implementing structural health monitoring (SHM) systems. Unfortunately, descriptive labels for measured data, corresponding to health-state information of the structure of interest, are seldom available prior to the deployment of a monitoring system. This issue limits the applicability of traditional supervised and unsupervised machine-learning approaches for developing statistical classifiers for decision-supporting SHM systems. This paper presents a risk-based formulation of active learning, in which the querying of class-label information is guided by the expected value of that information for each incipient data point. When applied to structural health monitoring, class-label querying can be mapped onto the inspection of the structure of interest to determine its health state. In this paper, the risk-based active learning process is explained and visualised via a representative numerical example and subsequently applied to the Z24 Bridge benchmark. The case-study results indicate that the performance of a decision-maker can be improved via risk-based active learning of a statistical classifier, such that the decision process itself is taken into account.
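The following is a minimal sketch of how EVPI-guided label querying of the kind described above might be computed for a single incipient data point, assuming a known utility matrix over (health state, maintenance action) pairs. The utility values, the two-state/two-action setup and the querying threshold are hypothetical illustrations rather than the decision processes used in the case studies.

```python
import numpy as np

def expected_value_of_perfect_information(p_state, utility):
    """EVPI for one incipient data point.

    p_state : (K,) classifier posterior over health states.
    utility : (K, A) utility of taking action a when the true state is k.
    Illustrative formulation: expected value of acting with the label revealed
    minus the value of the best single action under the current posterior.
    """
    value_perfect = np.sum(p_state * utility.max(axis=1))
    value_prior = np.max(p_state @ utility)
    return value_perfect - value_prior

# Hypothetical example: two health states, two actions (do nothing / inspect-and-repair).
utility = np.array([[0.0, -30.0],      # healthy: repairing wastes cost
                    [-100.0, -30.0]])  # damaged: doing nothing is expensive
p = np.array([0.6, 0.4])
evpi = expected_value_of_perfect_information(p, utility)
# Query the label (i.e. inspect the structure) only if EVPI exceeds the inspection cost.
inspect = evpi > 10.0
```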
In recent years, deep learning has seen increasing use in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
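As a sketch of one of the compared ingredients, the snippet below shows a generic Monte-Carlo dropout estimate of predictive uncertainty on a batch of tiles, followed by rejection of the most uncertain ones. The model, the number of stochastic passes and the rejection fraction are placeholders, not the configuration evaluated in the paper.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, tiles, n_samples=20):
    """Monte-Carlo dropout: keep dropout active at test time and average
    the softmax outputs over several stochastic forward passes."""
    model.train()  # enables dropout; assumes batch-norm statistics are not an issue here
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(tiles), dim=-1)
                             for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)  # (batch, n_classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

def reject_most_uncertain(mean_probs, entropy, reject_fraction=0.2):
    """Discard the most uncertain tiles and classify the rest."""
    n_keep = int(len(entropy) * (1 - reject_fraction))
    keep = entropy.argsort()[:n_keep]  # indices of the lowest-entropy tiles
    return keep, mean_probs[keep].argmax(dim=-1)
```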
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been sufficiently examined. Employing electroencephalography signals and a band-limited white noise stimulus at 4.8 Hz (prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
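As a small illustration of the source/sink/total scenarios, the sketch below collapses a precomputed directed connectivity matrix (e.g. phase Granger causalities between channel pairs) into per-channel scores; the aggregation by simple summation is an assumption for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def channel_roles(gc):
    """gc[i, j]: causal influence of channel i on channel j (i -> j), with the
    diagonal ignored. Returns per-channel scores for the three scenarios
    considered: activity as source, as sink, and in total."""
    gc = np.array(gc, dtype=float)
    np.fill_diagonal(gc, 0.0)
    source = gc.sum(axis=1)  # outgoing causality: channel drives others
    sink = gc.sum(axis=0)    # incoming causality: channel is driven by others
    total = source + sink
    return source, sink, total
```

Per-channel scores of this kind could then serve as features for classification or for exploratory group comparisons.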
As various city agencies and mobility operators navigate toward innovative mobility solutions, there is a need for strategic flexibility in well-timed investment decisions in the design and timing of mobility service regions, i.e. cast as "real options" (RO). This problem becomes increasingly challenging with multiple interacting RO in such investments. We propose a scalable machine learning based RO framework for the multi-period sequential service region design & timing problem for mobility-on-demand services, framed as a Markov decision process with non-stationary stochastic variables. A value function approximation policy from the literature uses multi-option least squares Monte Carlo simulation to get a policy value for a set of interdependent investment decisions as deferral options (CR policy). The goal is to determine the optimal selection and timing of a set of zones to include in a service region. However, prior work required explicit enumeration of all possible sequences of investments. To address the combinatorial complexity of such enumeration, we propose a new variant "deep" RO policy using an efficient recurrent neural network (RNN) based ML method (CR-RNN policy) to sample sequences, foregoing the need for enumeration and making the network design & timing policy tractable for large-scale implementation. Experiments on multiple service region scenarios in New York City (NYC) show that the proposed policy substantially reduces the overall computational cost (a time reduction for RO evaluation of > 90% of total investment sequences is achieved), with zero to near-zero gap compared to the benchmark. A case study of sequential service region design for the expansion of MoD services in Brooklyn, NYC shows that using the CR-RNN policy to determine the optimal RO investment strategy yields similar performance (within 0.5% of the CR policy value) with significantly reduced computation time (about 5.4 times faster).
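For intuition about the least squares Monte Carlo building block, the snippet below sketches a Longstaff-Schwartz-style valuation of a single deferral option over simulated paths of a stochastic variable. It is a deliberately simplified, single-option illustration (polynomial regression on all paths, value-function recursion), not the multi-option CR or CR-RNN policies described above.

```python
import numpy as np

def lsmc_deferral_option(paths, payoff, discount=0.95, degree=2):
    """Simplified Longstaff-Schwartz least squares Monte Carlo valuation of a
    single deferral option: at each period, compare the value of investing now
    with a regression estimate of the value of deferring. `paths` has shape
    (n_paths, T) and holds a simulated stochastic variable (e.g. demand);
    `payoff` maps its value to the immediate value of investing."""
    n_paths, T = paths.shape
    value = payoff(paths[:, -1])              # forced decision at the horizon
    for t in range(T - 2, 0, -1):
        x = paths[:, t]
        y = discount * value                  # discounted value of deferring
        coeffs = np.polyfit(x, y, degree)     # regress continuation value on the state
        continuation = np.polyval(coeffs, x)
        exercise = payoff(x)
        value = np.where(exercise > continuation, exercise, y)
    # At t = 0 all paths share the same state, so compare averages directly.
    return max(payoff(paths[:, 0]).mean(), discount * value.mean())
```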
Remote sensing imagery provides comprehensive views of the Earth, where different sensors collect complementary data at different spatial scales. Large, pretrained models are commonly finetuned with imagery that is heavily augmented to mimic different conditions and scales, with the resulting models used for various tasks with imagery from a range of spatial scales. Such models overlook scale-specific information in the data. In this paper, we present Scale-MAE, a pretraining method that explicitly learns relationships between data at different, known scales throughout the pretraining process. Scale-MAE pretrains a network by masking an input image at a known input scale, where the area of the Earth covered by the image determines the scale of the ViT positional encoding, not the image resolution. Scale-MAE encodes the masked image with a standard ViT backbone, and then decodes the masked image through a bandpass filter to reconstruct low/high frequency images at lower/higher scales. We find that tasking the network with reconstructing both low/high frequency images leads to robust multiscale representations for remote sensing imagery. Scale-MAE achieves an average of a $5.0\%$ non-parametric kNN classification improvement across eight remote sensing datasets compared to current state-of-the-art and obtains a $0.9$ mIoU to $3.8$ mIoU improvement on the SpaceNet building segmentation transfer task for a range of evaluation scales.
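To illustrate the idea that the ground area covered by the image, rather than the pixel resolution, sets the scale of the positional encoding, the sketch below builds a 2D sinusoidal encoding whose patch coordinates are multiplied by the ground sample distance. The reference GSD, frequency schedule and grid layout are assumptions for illustration and not the exact Scale-MAE implementation.

```python
import numpy as np

def gsd_positional_encoding(num_patches_per_side, embed_dim, gsd, reference_gsd=1.0):
    """2D sinusoidal positional encoding whose patch coordinates are scaled by
    the ground sample distance (metres per pixel), so that the same patch grid
    covering a larger area of the Earth receives proportionally larger
    positional coordinates. Assumes embed_dim is divisible by 4."""
    scale = gsd / reference_gsd
    coords = np.arange(num_patches_per_side) * scale        # ground-distance coordinates
    dim_t = np.arange(embed_dim // 4)
    freqs = 1.0 / (10000 ** (dim_t / (embed_dim // 4)))
    # Per-axis sin/cos embeddings, then combined over the 2D patch grid.
    emb_axis = np.concatenate([np.sin(coords[:, None] * freqs),
                               np.cos(coords[:, None] * freqs)], axis=1)
    grid_y = np.repeat(emb_axis, num_patches_per_side, axis=0)   # y varies slowly
    grid_x = np.tile(emb_axis, (num_patches_per_side, 1))        # x varies quickly
    return np.concatenate([grid_y, grid_x], axis=1)              # (N*N, embed_dim)
```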