There is an increased demand for sparse representational formats of human affective states that can be utilized in scenarios with limited computational memory resources. We explore whether representing neural data, recorded in response to emotional stimuli, in a latent vector space can serve both to predict emotional states and to generate synthetic EEG data that are participant- and/or emotion-specific. We propose EEG2VEC, a conditional variational autoencoder-based framework, to learn generative-discriminative representations from EEG data. Experimental results on an affective EEG recording dataset demonstrate that our model is suited for unsupervised EEG modeling, achieves robust performance of 68.49% on the classification of three distinct emotion categories (positive, neutral, negative) based on the latent representation, and generates synthetic EEG sequences that resemble the real EEG data inputs, reconstructing the low-frequency signal components especially well. Our work advances areas where affective EEG representations can be useful, e.g., generating artificial (labeled) training data or alleviating manual feature extraction, and provides efficiency for memory-constrained edge computing applications.
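To make the setup concrete, the following is a minimal PyTorch sketch of a conditional VAE of the kind described here; the layer sizes, the one-hot emotion conditioning, and the beta-weighted loss are illustrative assumptions, not the actual EEG2VEC architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    """Illustrative conditional VAE: encodes an EEG feature window plus
    an emotion label into a latent vector and reconstructs the window."""

    def __init__(self, n_features=310, n_classes=3, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features + n_classes, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)
        self.fc_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 128), nn.ReLU(),
            nn.Linear(128, n_features))

    def forward(self, x, y_onehot):
        h = self.encoder(torch.cat([x, y_onehot], dim=-1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_rec = self.decoder(torch.cat([z, y_onehot], dim=-1))
        return x_rec, mu, logvar

def vae_loss(x, x_rec, mu, logvar, beta=1.0):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.mse_loss(x_rec, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld
```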
Unsupervised and unpaired domain translation using generative adversarial neural networks, and more precisely CycleGAN, is the state of the art for stain translation of histopathology images. It often, however, suffers from the presence of cycle-consistent but non-structure-preserving errors. We propose an alternative approach for this family of methods that relies on segmentation consistency and enables the preservation of pathological structures. Focusing on immunohistochemistry (IHC) and multiplexed immunofluorescence (mIF), we introduce a simple yet effective guidance scheme as a loss function that exploits the consistency of stain translation and stain isolation. Qualitative and quantitative experiments show the ability of the proposed approach to improve translation between the two domains.
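The abstract does not spell out the guidance loss; the sketch below shows one plausible form of a segmentation-consistency term, assuming a frozen auxiliary segmentation network `seg_net` (a hypothetical component) that should produce the same masks before and after translation.

```python
import torch
import torch.nn.functional as F

def segmentation_consistency_loss(x_src, x_translated, seg_net):
    """Illustrative structure-preservation term: a frozen segmentation
    network should see the same structures before and after stain
    translation. `seg_net` returns per-pixel class logits."""
    with torch.no_grad():
        target = seg_net(x_src).softmax(dim=1)       # masks on the source stain
    pred = seg_net(x_translated).log_softmax(dim=1)  # masks on the translated image
    return F.kl_div(pred, target, reduction="batchmean")
```

Added to the usual CycleGAN objective, a term of this shape penalizes translations that are cycle-consistent yet distort the underlying pathological structures.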
The large size and complex decision mechanisms of state-of-the-art text classifiers make it difficult for humans to understand their predictions, leading to a lack of trust from users. These issues have led to the adoption of methods such as SHAP and Integrated Gradients, which explain classification decisions by assigning importance scores to the input tokens. However, prior work, using different randomization tests, has shown that the explanations generated by these methods may not be robust. For instance, models that make the same predictions on the test set may still yield different feature importance rankings. To address this lack of robustness in token-based explainability, we explore explanations at a higher semantic level, such as sentences. We use computational metrics and human subject studies to compare the quality of sentence-based explanations to token-based ones. Our experiments show that higher-level feature attributions offer several advantages: 1) they are more robust as measured by randomization tests, 2) they exhibit lower variability when using approximation-based methods like SHAP, and 3) they are more intelligible to humans in cases where the linguistic coherence resides at a higher level of granularity. Based on these findings, we show that token-based explainability, while being a convenient first choice given the input interface of ML models, is not the most effective choice in all situations.
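As a concrete illustration of moving from token-level to sentence-level attributions, the simplest aggregation sums token scores within each sentence span; this is an illustrative scheme, not necessarily the exact aggregation used in the paper.

```python
def sentence_attributions(token_scores, sentence_spans):
    """Aggregate token-level importance scores (e.g., from SHAP or
    Integrated Gradients) into sentence-level attributions by summing
    over each sentence's token span. Spans are (start, end) indices."""
    return [sum(token_scores[start:end]) for start, end in sentence_spans]

# Example: three sentences over nine tokens.
scores = [0.1, -0.2, 0.4, 0.0, 0.3, 0.1, -0.1, 0.0, 0.2]
spans = [(0, 3), (3, 6), (6, 9)]
print(sentence_attributions(scores, spans))  # approx. [0.3, 0.4, 0.1]
```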
Brain metastases occur frequently in patients with metastatic cancer. Early and accurate detection of brain metastases is essential for treatment planning and prognosis in radiotherapy. To improve the performance of deep-learning-based brain metastasis detection, a custom detection loss called volume-level sensitivity-specificity (VSS) is proposed, which rates individual metastasis detection sensitivity and specificity at the (sub-)volume level. Since sensitivity and precision are always a trade-off at the metastasis level, either high sensitivity or high precision can be achieved by adjusting the weights in the VSS loss, without a decline in the dice score coefficient for segmented metastases. To reduce metastasis-like structures being detected as false positive metastases, a temporal prior volume is proposed as an additional input to the neural network. Our proposed VSS loss improves the sensitivity of brain metastasis detection, raising the sensitivity from 86.7% to 95.5%; alternatively, it improves the precision from 68.8% to 97.8%. With the additional temporal prior volume, about 45% of the false positive metastases are removed in the high-sensitivity model, and the precision reaches 99.6% for the high-specificity model. The mean dice coefficient for all metastases is about 0.81. With an ensemble of the high-sensitivity and high-specificity models, on average only 1.5 false positive metastases per patient need further examination, while most of the true positive metastases are confirmed. This ensemble learning is able to distinguish high-confidence true positive metastases from metastasis candidates that require special expert review or further follow-up, and is particularly well suited to the requirements of expert support in real clinical practice.
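A sensitivity-specificity style loss with a tunable trade-off weight can be sketched as follows; this is an illustrative voxel-wise formulation, not the paper's exact VSS definition, which rates detections at the (sub-)volume level.

```python
import torch

def sens_spec_loss(pred, target, w_sens=0.9, eps=1e-6):
    """Illustrative sensitivity-specificity trade-off loss: `pred` holds
    voxel probabilities, `target` binary labels. Raising `w_sens`
    penalizes missed metastases (false negatives) more heavily;
    lowering it penalizes false positives instead."""
    sq_err = (target - pred) ** 2
    fg, bg = target, 1.0 - target
    sens_term = (sq_err * fg).sum() / (fg.sum() + eps)  # errors on metastasis voxels
    spec_term = (sq_err * bg).sum() / (bg.sum() + eps)  # errors on background voxels
    return w_sens * sens_term + (1.0 - w_sens) * spec_term
```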
Probabilistic numerical methods (PNMs) solve numerical problems via probabilistic inference. They have been developed for linear algebra, optimization, integration, and differential equation simulation. PNMs naturally incorporate prior information about a problem and quantify uncertainty due to finite computational resources as well as stochastic input. In this paper, we present ProbNum: a Python library providing state-of-the-art probabilistic numerical solvers. ProbNum enables the custom composition of PNMs through a modular design, as well as wrappers for off-the-shelf use. Tutorials, documentation, developer guides, and benchmarks are available online at www.probnum.org.
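A usage sketch, assuming the `probsolve_ivp` entry point described in the ProbNum documentation; argument and attribute names may differ across ProbNum versions, so www.probnum.org is the authoritative reference.

```python
import numpy as np
from probnum.diffeq import probsolve_ivp  # entry point per the ProbNum docs;
                                          # verify against www.probnum.org

# Logistic growth: y'(t) = 4 * y * (1 - y), y(0) = 0.15.
def f(t, y):
    return 4.0 * y * (1.0 - y)

sol = probsolve_ivp(f, t0=0.0, tmax=2.0, y0=np.array([0.15]))
# The solver returns a posterior over the solution: each time point
# carries a Gaussian (mean plus calibrated uncertainty from the finite
# step size) rather than a point estimate. Accessor names below follow
# the documented dense-output interface and may vary by version.
print(sol(2.0).mean)
```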
Mechanistic models with differential equations are a key component of scientific applications of machine learning. Inference in such models is usually computationally demanding, because it involves repeatedly solving the differential equation. The main problem here is that the numerical solver is hard to combine with standard inference techniques. Recent work in probabilistic numerics has developed a new class of solvers for ordinary differential equations (ODEs) that phrase the solution process directly in terms of Bayesian filtering. We here show that this allows such methods to be combined, in a conceptually and numerically straightforward way, with latent force models in the ODE itself. Approximate Bayesian inference over the latent force as well as the ODE solution can then be performed in a single, linear-complexity pass of an extended Kalman filter/smoother, that is, at the cost of computing a single ODE solution. We demonstrate the expressiveness and performance of the algorithm by training, among other models, a non-parametric SIRD model.
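Under illustrative notation (ours, not the paper's), the construction can be summarized as one joint Gauss-Markov model over the ODE state and the latent force, conditioned on the ODE holding at each solver step:

```latex
% Illustrative sketch of the joint state-space construction. Stack the
% solver state X(t), which tracks y(t) and its first q derivatives,
% with the latent force u(t), each under a linear-Gaussian SDE
% (Gauss--Markov) prior:
\[
  \mathrm{d}\begin{pmatrix} X(t) \\ u(t) \end{pmatrix}
  = A \begin{pmatrix} X(t) \\ u(t) \end{pmatrix} \mathrm{d}t
    + B \, \mathrm{d}W(t).
\]
% At each step t_n, condition on the pseudo-observation z_n := 0 under
% the measurement model that the ODE holds,
\[
  z_n = \dot{X}(t_n) - f\bigl(X(t_n), u(t_n)\bigr),
\]
% linearized for the extended Kalman filter/smoother. One forward (and
% backward) pass then yields a joint Gaussian posterior over the ODE
% solution and the latent force in O(N) time.
```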
Longitudinal biomedical data are often characterized by a sparse time grid and individual-specific development patterns. Specifically, in epidemiological cohort studies and clinical registries we face the question of what can be learned from the data in an early phase of the study, when only a baseline characterization and one follow-up measurement are available. Inspired by recent advances that allow combining deep learning with dynamic modeling, we investigate whether such approaches can be useful for uncovering complex structure, in particular for an extreme small-data setting with only two observation time points per individual. Irregular spacing in time can then be used to gain more information on individual dynamics by leveraging the similarity of individuals. We give a brief overview of how variational autoencoders (VAEs), as a deep learning approach, can be linked with ordinary differential equations (ODEs) for dynamic modeling, and then specifically investigate the feasibility of such an approach, i.e., of providing individual-specific latent trajectories, by incorporating regularity assumptions and the similarity of individuals. We also describe this deep learning approach as a filtering task to provide a statistical perspective. Using simulated data, we show to what extent the approach can recover individual trajectories from ODE systems with two and four unknown parameters, and from groups of individuals with similar trajectories, as well as where it breaks down. The results show that such dynamic deep learning approaches can be useful even in extreme small-data settings, but need careful adaptation.
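A minimal sketch of the coupling between a VAE latent code and ODE dynamics: the latent code supplies the unknown ODE parameters, and the trajectory is rolled out with a simple Euler scheme. The two-parameter linear ODE and all sizes here are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class ODEDecoder(nn.Module):
    """Illustrative decoder linking a VAE latent code to ODE dynamics:
    the encoder output z parameterizes the unknown ODE parameters, and
    the latent trajectory is integrated with a simple Euler scheme."""

    def __init__(self):
        super().__init__()
        self.emit = nn.Linear(1, 1)  # map latent state to observation space

    def trajectory(self, z, t_grid, x0, dt=0.01):
        # z = (a, b) parameterizes the toy system dx/dt = a * x + b.
        a, b = z[..., 0:1], z[..., 1:2]
        x, out, t = x0, [], 0.0
        for t_target in t_grid:
            while t < t_target:
                x = x + dt * (a * x + b)  # Euler step
                t += dt
            out.append(self.emit(x))
        return torch.stack(out, dim=-2)  # (batch, len(t_grid), 1)
```

With only two observation time points per individual, the reconstruction loss on such trajectories is what ties the latent ODE parameters to the data, which is why the regularity and similarity assumptions discussed above become essential.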
The release of ChatGPT, a language model capable of generating text that appears human-like and authentic, has gained significant attention beyond the research community. We expect that the convincing performance of ChatGPT incentivizes users to apply it to a variety of downstream tasks, including prompting the model to simplify their own medical reports. To investigate this phenomenon, we conducted an exploratory case study. In a questionnaire, we asked 15 radiologists to assess the quality of radiology reports simplified by ChatGPT. Most radiologists agreed that the simplified reports were factually correct, complete, and not potentially harmful to the patient. Nevertheless, instances of incorrect statements, missed key medical findings, and potentially harmful passages were reported. While further studies are needed, the initial insights of this study indicate a great potential in using large language models like ChatGPT to improve patient-centered care in radiology and other medical domains.
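For illustration, such a simplification request might look like the following call through the OpenAI Python package (legacy pre-1.0 interface); the study used the ChatGPT interface directly, and the prompt wording here is an assumption, not the one from the questionnaire.

```python
import openai  # legacy (<1.0) interface; newer versions use openai.OpenAI()

openai.api_key = "..."  # set your own key

report = "..."  # the radiology report to simplify (elided)

# Illustrative prompt only; not the study's exact instruction.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Explain this radiology report in simple terms for "
                   "a patient without medical training:\n\n" + report,
    }],
)
print(response["choices"][0]["message"]["content"])
```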
We consider a semi-supervised $k$-clustering problem where information is available on whether pairs of objects are in the same or in different clusters. This information is either available with certainty or with a limited level of confidence. We introduce the PCCC algorithm, which iteratively assigns objects to clusters while accounting for the information provided on the pairs of objects. Our algorithm can include relationships as hard constraints that are guaranteed to be satisfied or as soft constraints that can be violated subject to a penalty. This flexibility distinguishes our algorithm from the state-of-the-art in which all pairwise constraints are either considered hard, or all are considered soft. Unlike existing algorithms, our algorithm scales to large-scale instances with up to 60,000 objects, 100 clusters, and millions of cannot-link constraints (which are the most challenging constraints to incorporate). We compare the PCCC algorithm with state-of-the-art approaches in an extensive computational study. Even though the PCCC algorithm is more general than the state-of-the-art approaches in its applicability, it outperforms the state-of-the-art approaches on instances with all hard constraints or all soft constraints both in terms of running time and various metrics of solution quality. The source code of the PCCC algorithm is publicly available on GitHub.
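A minimal sketch of a penalized assignment step in the spirit of PCCC, not the paper's actual algorithm: each object is reassigned, in one sequential sweep, to the cluster minimizing its distance plus penalties for violated pairwise constraints, where an infinite weight makes a constraint hard.

```python
import numpy as np

def assign_with_constraints(dist, labels, must, cannot):
    """Illustrative penalized assignment sweep. `dist` is an (n, k)
    matrix of object-to-cluster distances; `must`/`cannot` hold
    (i, j, weight) triples; a weight of np.inf makes a constraint hard."""
    n, k = dist.shape
    labels = labels.copy()
    for i in range(n):
        cost = dist[i].astype(float)
        for (a, b, w) in must:
            if i in (a, b):
                other = labels[b] if i == a else labels[a]
                cost += np.where(np.arange(k) != other, w, 0.0)  # pay w unless co-clustered
        for (a, b, w) in cannot:
            if i in (a, b):
                other = labels[b] if i == a else labels[a]
                cost += np.where(np.arange(k) == other, w, 0.0)  # pay w if co-clustered
        labels[i] = int(np.argmin(cost))
    return labels

# Toy example: 4 objects, 2 clusters; objects 0 and 1 must co-cluster.
dist = np.array([[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]])
labels = np.array([0, 1, 0, 1])
print(assign_with_constraints(dist, labels, must=[(0, 1, np.inf)], cannot=[]))
# -> [1 1 0 1]: the must-link pair ends up together.
```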
Linear partial differential equations (PDEs) are an important, widely applied class of mechanistic models, describing physical processes such as heat transfer, electromagnetism, and wave propagation. In practice, specialized numerical methods based on discretization are used to solve PDEs. They generally use an estimate of the unknown model parameters and, if available, physical measurements for initialization. Such solvers are often embedded into larger scientific models or analyses with a downstream application such that error quantification plays a key role. However, by entirely ignoring parameter and measurement uncertainty, classical PDE solvers may fail to produce consistent estimates of their inherent approximation error. In this work, we approach this problem in a principled fashion by interpreting solving linear PDEs as physics-informed Gaussian process (GP) regression. Our framework is based on a key generalization of a widely-applied theorem for conditioning GPs on a finite number of direct observations to observations made via an arbitrary bounded linear operator. Crucially, this probabilistic viewpoint allows us to (1) quantify the inherent discretization error; (2) propagate uncertainty about the model parameters to the solution; and (3) condition on noisy measurements. Demonstrating the strength of this formulation, we prove that it strictly generalizes methods of weighted residuals, a central class of PDE solvers including collocation, finite volume, pseudospectral, and (generalized) Galerkin methods such as finite element and spectral methods. This class can thus be directly equipped with a structured error estimate and the capability to incorporate uncertain model parameters and observations. In summary, our results enable the seamless integration of mechanistic models as modular building blocks into probabilistic models.
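The central conditioning result can be sketched as follows, in our own illustrative notation rather than the paper's exact statement: for a GP prior and observations made through a bounded linear operator, the posterior is again a GP.

```latex
% Sketch of the conditioning result (illustrative notation).
% Prior: u ~ GP(m, k). Observations through a bounded linear operator
% L returning n values, with Gaussian noise:
%   y = L[u] + \epsilon,  \epsilon ~ N(0, \Lambda).
% The posterior is again a Gaussian process:
\[
  u \mid y \;\sim\; \mathcal{GP}\Bigl(
    m + k L^{*} \, (L k L^{*} + \Lambda)^{-1} (y - L[m]),\;
    k - k L^{*} \, (L k L^{*} + \Lambda)^{-1} L k
  \Bigr),
\]
% where L acts on the first argument of k, the adjoint L^{*} on the
% second, and L k L^{*} is an n x n matrix. Substituting the PDE's
% differential operator and boundary conditions for L yields a solver
% whose posterior covariance quantifies the discretization error.
```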