Objective: Shapley additive explanations (SHAP) is a popular post hoc technique for explaining black-box models. Although the effects of data imbalance on predictive models have been studied extensively, its effects on SHAP-based model explanations remain largely unknown. This study sought to investigate the effects of data imbalance on SHAP explanations for deep learning models and to propose a strategy to mitigate these effects. Materials and Methods: We propose adjusting class distributions in the background data and the explanation data when explaining black-box models with SHAP. Our data balancing strategy is to compose the background data and the explanation data with an equal distribution of classes. To evaluate the effects of data adjustment on model explanation, we propose using the beeswarm plot as a qualitative tool to identify "abnormal" explanation artifacts, and quantitatively testing the consistency between variable importance and prediction power. We demonstrated the proposed approach in an empirical study that predicted inpatient mortality using Medical Information Mart for Intensive Care (MIMIC-III) data and multilayer perceptrons. Results: Using the data balancing strategy allowed us to reduce the number of artifacts in the beeswarm plot, thereby mitigating the negative effects of data imbalance. Additionally, with the balancing strategy, the top variables from the corresponding importance rankings demonstrated improved discrimination power. Discussion and Conclusion: Our findings suggest that balanced background and explanation data can help reduce the noise in explanation results induced by skewed data distributions and improve the reliability of variable importance rankings. Furthermore, these balancing procedures improve the potential of identifying patients with abnormal characteristics in clinical applications.
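The balancing step described above can be sketched in a few lines of NumPy; `balanced_subset` is a hypothetical helper for illustration, not code from the paper:

```python
import numpy as np

def balanced_subset(X, y, n_per_class, rng):
    """Draw an equal number of rows from each class (hypothetical helper)."""
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_per_class, replace=False)
        for c in np.unique(y)
    ])
    rng.shuffle(idx)
    return X[idx], y[idx]

# Imbalanced toy data: 90 negatives, 10 positives.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = np.array([0] * 90 + [1] * 10)

# Class-balanced background and explanation sets, ready to pass to a SHAP explainer.
X_bg, y_bg = balanced_subset(X, y, n_per_class=10, rng=rng)
X_ex, y_ex = balanced_subset(X, y, n_per_class=5, rng=rng)
print(np.bincount(y_bg))  # [10 10]
print(np.bincount(y_ex))  # [5 5]
```

Both subsets would then be supplied to the explainer in place of the raw, skewed samples.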
Risk scores are widely used in clinical decision making and are typically derived from logistic regression models. Machine-learning-based methods can identify important predictors effectively, but such "black-box" variable selection limits interpretability, and variable importance evaluated from a single model can be biased. We propose a robust and interpretable variable selection approach using the recently developed Shapley variable importance cloud (ShapleyVIC), which accounts for variability in variable importance across models. Our approach evaluates and visualizes overall variable contributions for in-depth inference and transparent variable selection, and filters out non-significant contributors to simplify the model-building steps. We derive an ensemble variable ranking from the variable contributions, which is easily integrated with AutoScore, an automated and modularized risk score generator, for convenient implementation. In a study of early death or unplanned readmission, ShapleyVIC selected six of the candidate variables to create a well-performing model, with performance comparable to a sixteen-variable model built from a machine-learning-based ranking.
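The ensemble-ranking idea above, aggregating variable importance over many plausible models instead of trusting a single one, can be illustrated with a toy sketch (the importance values and the rank-averaging rule here are illustrative, not ShapleyVIC's exact procedure):

```python
import numpy as np

# Importance of 4 variables evaluated across 3 equally plausible models (rows).
# Any single row could give a biased ranking; averaging ranks is more robust.
importances = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.35, 0.15, 0.30, 0.20],
    [0.45, 0.25, 0.25, 0.05],
])

# Rank within each model: 0 = most important.
per_model_ranks = np.argsort(np.argsort(-importances, axis=1), axis=1)

# Ensemble ranking = mean rank across models.
ensemble_rank = per_model_ranks.mean(axis=0)
order = np.argsort(ensemble_rank)
print(order)  # variable indices from most to least important
```

Variables whose ensemble rank is consistently poor can then be filtered out before handing the survivors to a score generator such as AutoScore.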
The demand for emergency department (ED) services is growing worldwide, especially during the COVID-19 pandemic. Risk triage plays a crucial role in prioritizing limited medical resources for the patients who need them most. The widespread adoption of electronic health records (EHRs) has generated large amounts of stored data, accompanied by great opportunities for developing predictive models to improve emergency care. However, there is no widely accepted ED benchmark based on large-scale public EHRs that new researchers can access easily. Filling this gap would allow researchers to start their studies more quickly and conveniently, without detailed data preprocessing, and would facilitate comparisons among different studies and methodologies. In this paper, based on the Medical Information Mart for Intensive Care IV Emergency Department (MIMIC-IV-ED) database, we propose a public ED benchmark suite and obtain a benchmark dataset containing approximately 500,000 ED visits from 2011 to 2019. Three ED-based prediction tasks (hospitalization, critical outcomes, and 72-hour ED revisit) are introduced, for which a variety of popular methods, ranging from machine learning approaches to clinical scoring systems, are implemented, and their performance is evaluated and compared. Our code is open source, so anyone with access to MIMIC-IV-ED can follow the same data processing steps, build the benchmarks, and reproduce the experiments. This study provides insights, suggestions, and protocols for future researchers to process the raw data and quickly build models for emergency care.
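A rough sketch of how the three benchmark outcome labels might be derived from visit records; the field names below are hypothetical and do not reflect the actual MIMIC-IV-ED schema or the paper's exact definitions:

```python
from datetime import datetime, timedelta

def make_labels(visit, next_visit_time=None):
    """Derive the three binary outcomes for one ED visit (illustrative only)."""
    hospitalization = visit["disposition"] == "ADMITTED"
    critical = visit["icu_within_12h"] or visit["died_in_hospital"]
    revisit_72h = (
        next_visit_time is not None
        and next_visit_time - visit["ed_out_time"] <= timedelta(hours=72)
    )
    return {"hospitalization": hospitalization,
            "critical_outcome": critical,
            "ed_revisit_72h": revisit_72h}

visit = {"disposition": "HOME", "icu_within_12h": False,
         "died_in_hospital": False,
         "ed_out_time": datetime(2019, 5, 1, 18, 0)}
labels = make_labels(visit, next_visit_time=datetime(2019, 5, 3, 9, 0))
print(labels)  # discharged home, no critical outcome, revisited within 39 hours
```

Applying such a function over every visit yields one labeled row per ED stay, the form in which both the machine learning baselines and the clinical scores are evaluated.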
There is growing interest in the generalization performance of large multilayer neural networks that can be trained to achieve zero training error while generalizing well on test data. This regime, known as "second descent", appears to contradict the conventional view that optimal model complexity should reflect an optimal balance between underfitting and overfitting, i.e., the bias-variance trade-off. This paper presents a VC-theoretical analysis of double descent and shows that it can be fully explained by classical VC generalization bounds. We illustrate the application of analytic VC bounds for modeling double descent in classification problems, using empirical results from several learning methods, such as SVM, least squares, and multilayer perceptron classifiers. In addition, we discuss several reasons for the misinterpretation of VC-theoretical results in the deep learning community.
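For reference, the classical VC bound invoked above can be computed directly; this is the standard textbook form (which may differ in constants from the exact bound used in the paper):

```python
import math

def vc_bound(emp_risk, n, h, eta=0.05):
    """Classic VC generalization bound: with probability >= 1 - eta, the true
    risk is at most the empirical risk plus a confidence term that grows with
    the VC dimension h and shrinks with the sample size n."""
    eps = math.sqrt((h * (math.log(2 * n / h) + 1) - math.log(eta / 4)) / n)
    return emp_risk + eps

# An interpolating model has zero empirical risk, but the bound still
# depends on the ratio h/n.
for h in (10, 100, 1000):
    print(h, round(vc_bound(0.0, n=10_000, h=h), 3))
```

Even at zero training error, the guaranteed test risk degrades as the effective VC dimension grows relative to the sample size, which is the quantity the paper's analysis tracks through the double-descent curve.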
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes. Recent results in the literature show that representations learned by a single classifier over many classes are competitive on few-shot learning problems with representations learned by special-purpose algorithms designed for such problems. We offer an explanation for this phenomenon based on the concept of class-features variability collapse, which refers to the training dynamics of deep classification networks where the feature embeddings of samples belonging to the same class tend to concentrate around their class means. More specifically, we examine the few-shot error of the learned feature map, which is the classification error of the nearest class-center classifier using centers learned from a small number of random samples from each class. Assuming that the classes appearing in the data are selected independently from a distribution, we show that the few-shot error generalizes from the training data to unseen test data, and we provide an upper bound on the expected few-shot error for new classes (selected from the same distribution) using the average few-shot error for the source classes. Additionally, we show that the few-shot error on the training data can be upper bounded using the degree of class-features variability collapse. This suggests that foundation models can provide feature maps that are transferable to new downstream tasks even with limited data available.
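The nearest class-center classifier described above is straightforward to sketch; the data here are a synthetic stand-in for "collapsed" class features concentrated around their class means:

```python
import numpy as np

def ncc_predict(X_support, y_support, X_query):
    """Nearest class-center classifier: centers are the per-class means of a
    few support samples; each query gets the label of the closest center."""
    classes = np.unique(y_support)
    centers = np.stack([X_support[y_support == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(X_query[:, None, :] - centers[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# Two well-separated classes whose features concentrate near the class means,
# mimicking class-features variability collapse.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=0.0, scale=0.1, size=(5, 3))
X1 = rng.normal(loc=1.0, scale=0.1, size=(5, 3))
X_support = np.vstack([X0, X1])
y_support = np.array([0] * 5 + [1] * 5)
X_query = np.array([[0.05, 0.0, 0.1], [0.9, 1.1, 1.0]])
print(ncc_predict(X_support, y_support, X_query))  # [0 1]
```

The few-shot error analyzed in the abstract is exactly the error rate of this classifier when the centers are estimated from a small number of support samples per class.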
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data. Underwater images are difficult to capture and are often of poor quality due to the distortion and loss of colour and contrast in water. This makes it difficult to train supervised deep learning models on large and diverse datasets, which can limit the model's performance. In this paper, we explore an alternative approach to supervised underwater image enhancement. Specifically, we propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model with probabilistic adaptive instance normalization (PAdaIN) and a statistically guided multi-colour space stretch that produces realistic underwater images. The resulting framework, which we call UDnet, is composed of a U-Net as a feature extractor and a PAdaIN to encode the uncertainty. To improve the visual quality of the images generated by UDnet, we use a statistically guided multi-colour space stretch module that ensures visual consistency with the input image and provides an alternative to training with ground truth images. The proposed model requires no manual human annotation, can learn from a limited amount of data, and achieves state-of-the-art results on underwater images. We evaluated our proposed framework on eight publicly available datasets. The results show that our proposed framework yields competitive performance compared with other state-of-the-art approaches on both quantitative and qualitative metrics. Code available at https://github.com/alzayats/UDnet .
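A much-simplified stand-in for the statistically guided multi-colour space stretch can convey the idea: stretch each channel's value distribution to the full range, guided by its percentile statistics (here in a single colour space only, unlike UDnet's module):

```python
import numpy as np

def percentile_stretch(img, low=2, high=98):
    """Per-channel contrast stretch guided by percentile statistics.
    A simplified, single-colour-space illustration, not UDnet's actual module."""
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[-1]):
        lo, hi = np.percentile(img[..., c], [low, high])
        out[..., c] = np.clip((img[..., c] - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    return out

# A washed-out "underwater" image: values squeezed into a narrow range.
rng = np.random.default_rng(0)
img = 0.4 + 0.1 * rng.random((32, 32, 3))
stretched = percentile_stretch(img)
print(round(np.ptp(img), 3), round(np.ptp(stretched), 3))  # dynamic range expands
```

Because the stretch is derived from the input image's own statistics, the output stays visually consistent with the input, which is what lets UDnet avoid relying on ground truth references.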
Graph neural networks (GNNs) have been shown to be highly sensitive to the choice of aggregation function. While summing over a node's neighbours can approximate any permutation-invariant function over discrete inputs, Cohen-Karlik et al. [2020] proved there are set-aggregation problems for which summing cannot generalise to unbounded inputs, proposing recurrent neural networks regularised towards permutation-invariance as a more expressive aggregator. We show that these results carry over to the graph domain: GNNs equipped with recurrent aggregators are competitive with state-of-the-art permutation-invariant aggregators, on both synthetic benchmarks and real-world problems. However, despite the benefits of recurrent aggregators, their $O(V)$ depth makes them both difficult to parallelise and harder to train on large graphs. Inspired by the observation that a well-behaved aggregator for a GNN is a commutative monoid over its latent space, we propose a framework for constructing learnable, commutative, associative binary operators. And with this, we construct an aggregator of $O(\log V)$ depth, yielding exponential improvements for both parallelism and dependency length while achieving performance competitive with recurrent aggregators. Based on our empirical observations, our proposed learnable commutative monoid (LCM) aggregator represents a favourable tradeoff between efficient and expressive aggregators.
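The $O(\log V)$-depth idea can be illustrated with a pairwise tree reduction over a commutative monoid (a plain-Python sketch, not the learnable aggregator from the paper):

```python
def tree_reduce(xs, op, identity):
    """Reduce with a commutative monoid by pairwise combination, giving
    O(log n) dependency depth instead of the O(n) chain of a sequential fold."""
    if not xs:
        return identity
    while len(xs) > 1:
        # Combine adjacent pairs; an odd element passes through unchanged.
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

# Max-aggregation over neighbour messages, viewed as a commutative monoid
# with identity -inf.
msgs = [3, 1, 4, 1, 5, 9, 2, 6]
print(tree_reduce(msgs, max, identity=float("-inf")))  # 9
```

Commutativity and associativity are what make the pairwise regrouping safe: any bracketing of the combine order yields the same result, so the balanced tree can replace the recurrent chain without changing the aggregate.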
This paper introduces the use of evolutionary algorithms for solving differential equations. The solution is obtained by optimizing a deep neural network whose loss function is defined by the residual terms from the differential equations. Recent studies have used stochastic gradient descent (SGD) variants to train these physics-informed neural networks (PINNs), but these methods can struggle to find accurate solutions due to optimization challenges. When solving differential equations, it is important to find the globally optimum parameters of the network, rather than just finding a solution that works well during training. SGD only searches along a single gradient direction, so it may not be the best approach for training PINNs with their accompanying complex optimization landscapes. In contrast, evolutionary algorithms perform a parallel exploration of different solutions in order to avoid getting stuck in local optima and can potentially find more accurate solutions. However, evolutionary algorithms can be slow, which can make them difficult to use in practice. To address this, we provide a set of five benchmark problems with associated performance metrics and baseline results to support the development of evolutionary algorithms for enhanced PINN training. As a baseline, we evaluate the performance and speed of using the widely adopted Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for solving PINNs. We provide the loss and training time for CMA-ES run on TensorFlow, and CMA-ES and SGD run on JAX (with GPU acceleration) for the five benchmark problems. Our results show that JAX-accelerated evolutionary algorithms, particularly CMA-ES, can be a useful approach for solving differential equations. We hope that our work will support the exploration and development of alternative optimization algorithms for the complex task of optimizing PINNs.
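A minimal sketch of the overall idea, using a toy (1+λ) random-search evolution strategy rather than CMA-ES, and a two-parameter ansatz $u(t) = a e^{bt}$ instead of a neural network, for the ODE $u'(t) + u(t) = 0$, $u(0) = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)  # collocation points

def loss(theta):
    """PINN-style loss: squared ODE residual plus boundary penalty."""
    a, b = theta
    u = a * np.exp(b * t)
    residual = b * u + u             # u'(t) + u(t), should vanish
    boundary = (a - 1.0) ** 2        # enforce u(0) = 1
    return np.mean(residual ** 2) + boundary

# (1+lambda) evolution strategy: keep the parent only if a child improves it.
theta = np.array([0.0, 0.0])
best = loss(theta)
sigma = 0.3
for _ in range(500):
    children = theta + sigma * rng.normal(size=(20, 2))
    losses = [loss(c) for c in children]
    i = int(np.argmin(losses))
    if losses[i] < best:
        theta, best = children[i], losses[i]

print(theta, best)  # theta should approach (1, -1), i.e. u(t) = exp(-t)
```

A population-based search like this explores many directions per step instead of following a single gradient, which is the property the paper's CMA-ES baseline exploits on the harder PINN landscapes.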
Purpose: The aim of this study was to demonstrate the utility of unsupervised domain adaptation (UDA) in automated knee osteoarthritis (OA) phenotype classification using a small dataset (n=50). Materials and Methods: For this retrospective study, we collected 3,166 three-dimensional (3D) double-echo steady-state magnetic resonance (MR) images from the Osteoarthritis Initiative dataset and 50 3D turbo/fast spin-echo MR images from our institute (in 2020 and 2021) as the source and target datasets, respectively. For each patient, the degree of knee OA was initially graded according to the MRI Osteoarthritis Knee Score (MOAKS) before being converted to binary OA phenotype labels. The proposed UDA pipeline included (a) pre-processing, which involved automatic segmentation and region-of-interest cropping; (b) source classifier training, which involved pre-training phenotype classifiers on the source dataset; (c) target encoder adaptation, which involved unsupervised adaption of the source encoder to the target encoder and (d) target classifier validation, which involved statistical analysis of the target classification performance evaluated by the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity and accuracy. Additionally, a classifier was trained without UDA for comparison. Results: The target classifier trained with UDA achieved improved AUROC, sensitivity, specificity and accuracy for both knee OA phenotypes compared with the classifier trained without UDA. Conclusion: The proposed UDA approach improves the performance of automated knee OA phenotype classification for small target datasets by utilising a large, high-quality source dataset for training. The results successfully demonstrated the advantages of the UDA approach in classification on small datasets.
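AUROC, used above for target classifier validation, can be computed directly from its probabilistic interpretation, the chance that a random positive is scored above a random negative (a quadratic-time sketch; production code would use a rank-based implementation):

```python
def auroc(labels, scores):
    """AUROC as P(score of a positive > score of a negative), ties count 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
print(auroc(labels, scores))  # 8/9: one positive-negative pair is misordered
```

Sensitivity, specificity, and accuracy then follow from thresholding the same scores, completing the statistical comparison between the UDA-trained and non-UDA classifiers.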
Audio-visual speech recognition (AVSR) has achieved remarkable success in improving the noise robustness of speech recognition. Mainstream methods focus on fusing audio and visual inputs to obtain modality-invariant representations. However, such representations are prone to over-reliance on the audio modality, because audio is much easier to recognize than video in clean conditions. As a result, the AVSR model underestimates the importance of the visual stream in the face of noise corruption. To this end, we leverage visual modality-specific representations to provide stable complementary information for the AVSR task. Specifically, we propose a reinforcement learning (RL) based framework called MSRL, in which the agent dynamically harmonizes modality-invariant and modality-specific representations during the auto-regressive decoding process. We customize a reward function directly related to task-specific metrics (i.e., word error rate), which encourages MSRL to effectively explore the optimal integration strategy. Experimental results on the LRS3 dataset show that the proposed method achieves state-of-the-art performance in both clean and various noisy conditions. Furthermore, we demonstrate the better generalizability of the MSRL system compared with other baselines when the test set contains unseen noise.
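The task-specific metric behind the reward, word error rate, is the word-level edit distance normalized by the reference length; the `reward` shown at the end is an illustrative shaping, not the paper's exact reward function:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i  # deletions
    for j in range(len(h) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])  # substitution/match
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words

# Illustrative reward shaping: lower WER -> higher reward.
reward = 1.0 - wer("the cat sat on the mat", "the cat sat on mat")
```

Tying the reward directly to WER is what lets the RL agent optimize the same quantity the recognizer is ultimately evaluated on, rather than a surrogate loss.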