Estimating the effects of interventions on patient outcome is one of the key aspects of personalized medicine. Their inference is often challenged by the fact that the training data comprises only the outcome for the administered treatment, and not for alternative treatments (the so-called counterfactual outcomes). Several methods have been suggested for this scenario based on observational data, i.e., data where the intervention was not applied randomly, for both continuous and binary outcome variables. However, patient outcome is often recorded in terms of time-to-event data, comprising right-censored event times if an event does not occur within the observation period. Despite their enormous importance, time-to-event data are rarely used for treatment optimization. We suggest an approach named BITES (Balanced Individual Treatment Effect for Survival data), which combines a treatment-specific semi-parametric Cox loss with a treatment-balanced deep neural network; i.e., we regularize differences between treated and non-treated patients using Integral Probability Metrics (IPM). We show in simulation studies that this approach outperforms the state of the art. Furthermore, we demonstrate in an application to a cohort of breast cancer patients that hormone treatment can be optimized based on six routine parameters. We successfully validated this finding in an independent cohort. BITES is provided as an easy-to-use Python implementation.
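The abstract names the two loss ingredients but not how they interact. Below is a minimal PyTorch sketch of one plausible combination, assuming a shared encoder, one Cox head per treatment arm, and a linear-kernel MMD standing in for the IPM; the network shape and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def cox_partial_loss(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow-style handling of ties).

    risk:  (n,) predicted log hazard ratios
    time:  (n,) observed or censoring times
    event: (n,) 1 = event observed, 0 = right-censored
    """
    order = torch.argsort(time, descending=True)
    risk, event = risk[order], event[order].float()
    log_risk_set = torch.logcumsumexp(risk, dim=0)  # log-sum over the risk set
    return -((risk - log_risk_set) * event).sum() / event.sum().clamp(min=1)

def mmd(x, y):
    # Linear-kernel MMD, one simple Integral Probability Metric (assumption).
    return (x.mean(dim=0) - y.mean(dim=0)).pow(2).sum()

class BitesLikeNet(nn.Module):
    """Shared encoder with one Cox head per treatment arm (illustrative)."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(2))

    def loss(self, x, treatment, time, event, alpha=1.0):
        phi = self.encoder(x)
        total = 0.0
        for a in (0, 1):  # treatment-specific Cox losses
            m = treatment == a
            if m.any():
                total = total + cox_partial_loss(
                    self.heads[a](phi[m]).squeeze(-1), time[m], event[m])
        # balance treated vs. non-treated representations via the IPM term
        return total + alpha * mmd(phi[treatment == 1], phi[treatment == 0])
```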
Motivation: We consider continuous-time Markov chains that describe the stochastic evolution of a dynamical system by a transition-rate matrix $Q$ which depends on a parameter $\theta$. Computing the probability distribution over states at time $t$ requires the matrix exponential $\exp(tQ)$, and inferring $\theta$ from data requires its derivative $\partial\exp(tQ)/\partial\theta$. Both are challenging to compute when the state space, and hence the size of $Q$, is huge. This can happen when the state space consists of all combinations of the values of several interacting discrete variables. Often it is then even impossible to store $Q$. However, when $Q$ can be written as a sum of tensor products, computing $\exp(tQ)$ becomes feasible via the uniformization method, which does not require explicit storage of $Q$. Results: Here we provide an analogous algorithm for computing $\partial\exp(tQ)/\partial\theta$, the differentiated uniformization method. We demonstrate our algorithm for the stochastic SIR model of epidemic spread, for which we show that $Q$ can be written as a sum of tensor products. We estimate monthly infection and recovery rates during the first wave of the COVID-19 pandemic in Austria and quantify their uncertainty in a full Bayesian analysis. Availability: Implementation and data are available at https://github.com/spang-lab/TenSIR.
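For orientation, uniformization picks a rate $\lambda \ge \max_i |Q_{ii}|$, sets $P = I + Q/\lambda$, and evaluates $\exp(tQ)\,p_0 = \sum_{k\ge 0} e^{-\lambda t}\frac{(\lambda t)^k}{k!}\,P^k p_0$; differentiating the series term by term gives a joint recursion for the state vector and its sensitivity. The sketch below runs this on a small dense matrix with a fixed $\lambda$ that does not depend on $\theta$; the actual TenSIR implementation exploits the tensor-product structure of $Q$ instead of dense storage.

```python
import numpy as np

def diff_uniformization(Q, dQ, p0, t, lam, K=200):
    """Compute v = exp(tQ) p0 and w = (d/dtheta) exp(tQ) p0 by truncated
    uniformization, given dQ = dQ/dtheta and a uniformization rate
    lam >= max_i |Q_ii| chosen independently of theta (dense toy version)."""
    P, dP = np.eye(len(Q)) + Q / lam, dQ / lam
    vk, wk = p0.copy(), np.zeros_like(p0)   # P^k p0 and its theta-derivative
    weight = np.exp(-lam * t)               # Poisson(lam*t) weight for k = 0
    v, w = weight * vk, weight * wk
    for k in range(1, K + 1):
        vk, wk = P @ vk, P @ wk + dP @ vk   # product rule on P^k p0
        weight *= lam * t / k
        v += weight * vk
        w += weight * wk
    return v, w

# Toy 2-state chain with rate theta: Q = theta * [[-1, 1], [1, -1]]
theta = 0.5
Q = theta * np.array([[-1.0, 1.0], [1.0, -1.0]])
dQ = np.array([[-1.0, 1.0], [1.0, -1.0]])   # dQ/dtheta
v, w = diff_uniformization(Q, dQ, np.array([1.0, 0.0]), t=2.0, lam=2.0, K=100)
```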
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Above all, curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics allowing novel-view synthesis of scenes with curved reflectors, from a set of casually-captured input photos. At the core of our method is a neural warp field that models catacaustic trajectories of reflections, so complex specular effects can be rendered using efficient point splatting in conjunction with a neural renderer. One of our key contributions is the explicit representation of reflections with a reflection point cloud which is displaced by the neural warp field, and a primary point cloud which is optimized to represent the rest of the scene. After a short manual annotation step, our approach allows interactive high-quality renderings of novel views with accurate reflection flow. Additionally, the explicit representation of reflection flow supports several forms of scene manipulation in captured scenes, such as reflection editing, cloning of specular objects, reflection tracking across views, and comfortable stereo viewing. We provide the source code and other supplemental material on https://repo-sam.inria.fr/fungraph/neural_catacaustics/
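As a rough reduction of the core data structure, the sketch below displaces a reflection point cloud by an MLP warp field conditioned on the viewing direction; the layer sizes, inputs, and conditioning are our guesses for illustration, not the authors' architecture (which couples the warp field with point splatting and a neural renderer).

```python
import torch
import torch.nn as nn

class ReflectionWarpField(nn.Module):
    """Toy neural warp field: maps (point, view direction) to a displacement
    that moves reflection points along their catacaustic trajectory."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, refl_points, view_dir):
        # refl_points: (n, 3) reflection point cloud; view_dir: (3,) per view
        inp = torch.cat([refl_points, view_dir.expand_as(refl_points)], dim=-1)
        return refl_points + self.mlp(inp)   # displaced points, then splatted

warp = ReflectionWarpField()
points = torch.rand(1024, 3)                 # optimizable reflection points
displaced = warp(points, torch.tensor([0.0, 0.0, 1.0]))
```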
Brain-inspired computing proposes a set of algorithmic principles that hold promise for advancing artificial intelligence. They endow systems with self-learning capabilities, efficient energy usage, and high storage capacity. A core concept that lies at the heart of brain computation is sequence learning and prediction. This form of computation is essential for almost all our daily tasks such as movement generation, perception, and language. Understanding how the brain performs such a computation is not only important to advance neuroscience but also to pave the way to new technological brain-inspired applications. A previously developed spiking neural network implementation of sequence prediction and recall learns complex, high-order sequences in an unsupervised manner by local, biologically inspired plasticity rules. An emerging type of hardware that holds promise for efficiently running this type of algorithm is neuromorphic hardware. It emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate. Memristive devices have been identified as potential synaptic elements in neuromorphic hardware. In particular, redox-induced resistive random access memory (ReRAM) devices stand out in many respects. They permit scalability, are energy-efficient and fast, and can implement biological plasticity rules. In this work, we study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model. We implement and simulate the model, including the ReRAM plasticity, using the neural simulator NEST. We investigate the effect of different device properties on the performance characteristics of the sequence learning model, and demonstrate resilience with respect to different on-off ratios, conductance resolutions, device variability, and synaptic failure.
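The device properties listed at the end map naturally onto a toy weight-update rule. Here is a minimal sketch, assuming a soft-bounded multiplicative update with a finite on/off ratio, quantized conductance levels, write noise, and random synaptic failure; all parameter names and values are our assumptions, not taken from the paper or from NEST.

```python
import numpy as np

rng = np.random.default_rng(0)

def reram_update(g, dw, g_min=1e-6, g_max=1e-4, levels=64,
                 variability=0.1, p_fail=0.05):
    """Toy ReRAM-like conductance update.

    g:      current conductance (siemens)
    dw:     desired weight change from the plasticity rule (sign matters)
    levels: number of discrete conductance states (resolution)
    """
    if rng.random() < p_fail:                 # synaptic failure: no update
        return g
    # Soft-bounded update: potentiation saturates near g_max, depression
    # near g_min; the on/off ratio is g_max / g_min.
    noise = 1 + variability * rng.standard_normal()
    if dw > 0:
        g_new = g + dw * (g_max - g) * noise
    else:
        g_new = g + dw * (g - g_min) * noise
    # Quantize to the finite number of conductance states.
    step = (g_max - g_min) / (levels - 1)
    g_new = g_min + round((g_new - g_min) / step) * step
    return float(np.clip(g_new, g_min, g_max))

g = 5e-5
for _ in range(10):                           # repeated potentiation events
    g = reram_update(g, dw=0.1)
```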
This paper describes several improvements to a new method for signal decomposition that we recently formulated under the name of Differentiable Dictionary Search (DDS). The fundamental idea of DDS is to exploit a class of powerful deep invertible density estimators called normalizing flows, to model the dictionary in a linear decomposition method such as NMF, effectively creating a bijection between the space of dictionary elements and the associated probability space, allowing a differentiable search through the dictionary space, guided by the estimated densities. As the initial formulation was a proof of concept with some practical limitations, we will present several steps towards making it scalable, hoping to improve both the computational complexity of the method and its signal decomposition capabilities. As a testbed for experimental evaluation, we choose the task of frame-level piano transcription, where the signal is to be decomposed into sources whose activity is attributed to individual piano notes. To highlight the impact of improved non-linear modelling of sources, we compare variants of our method to a linear overcomplete NMF baseline. Experimental results will show that even in the absence of additional constraints, our models produce increasingly sparse and precise decompositions, according to two pertinent evaluation measures.
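To make the flow-based dictionary idea concrete, here is a heavily simplified sketch of one differentiable dictionary search step: atoms are decoded from latent codes through an invertible map, and a reconstruction loss plus a log-density penalty is minimized by gradient descent through that map. A fixed invertible linear transform stands in for a trained normalizing flow, the clamp mirrors the NMF-style non-negative activations, and all sizes and weights are illustrative assumptions, not the authors' code.

```python
import torch

torch.manual_seed(0)
d, n_atoms, n_frames = 16, 4, 32

# Fixed invertible linear "flow" standing in for a trained normalizing flow:
# atom = A @ z, with log p(atom) = log N(z; 0, I) - log|det A| (up to constants).
Qmat, _ = torch.linalg.qr(torch.randn(d, d))
A = Qmat + 0.1 * torch.eye(d)                # orthogonal + shift: invertible
_, logdet = torch.linalg.slogdet(A)

X = torch.rand(d, n_frames)                      # mixture to decompose
z = torch.randn(n_atoms, d, requires_grad=True)  # latent codes of the atoms
H = torch.rand(n_atoms, n_frames, requires_grad=True)

opt = torch.optim.Adam([z, H], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    D = z @ A.T                                  # decode atoms through the flow
    recon = (X - D.T @ H.clamp(min=0)).pow(2).mean()
    log_p = (-0.5 * z.pow(2).sum(dim=1) - logdet).mean()  # keep atoms probable
    loss = recon - 1e-3 * log_p
    loss.backward()
    opt.step()
```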
We introduce a novel way to incorporate prior information into (semi-) supervised non-negative matrix factorization, which we call differentiable dictionary search. It enables general, highly flexible and principled modelling of mixtures where non-linear sources are linearly mixed. We study its behavior on an audio decomposition task, and conduct an extensive, highly controlled study of its modelling capabilities.
In contrast to exploratory analysis of high-dimensional datasets with techniques such as principal component analysis (PCA), neighbor embedding (NE) techniques tend to better preserve the local structure/topology of high-dimensional data. However, the ability to preserve local structure comes at the cost of interpretability: techniques such as t-distributed stochastic neighbor embedding (t-SNE) or uniform manifold approximation and projection (UMAP) do not offer insights into which features of the high-dimensional data drive the topological (cluster) structure seen in the corresponding embedding. Here, we propose different "tricks" from the field of chemometrics, based on PCA, Q-residuals, and Hotelling's T2 contributions, combined with novel visualization approaches, to derive local and global explanations of neighbor embeddings. We show how our approach allows identifying discriminatory features between groups of data points using standard univariate or multivariate approaches.
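Both diagnostics are standard in chemometrics and straightforward to compute from a fitted PCA; a minimal NumPy/scikit-learn sketch follows (thresholds, scaling, and the link back to the embedding are omitted and would follow the paper's own recipes):

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 20)            # samples x features (placeholder data)
pca = PCA(n_components=5).fit(X)
T = pca.transform(X)                   # scores
P = pca.components_                    # loadings, shape (k, n_features)

# Q-residuals: squared reconstruction error outside the PCA model;
# the per-feature terms are the Q contributions of each variable.
E = X - pca.inverse_transform(T)
Q_contrib = E**2                       # (n_samples, n_features)
Q = Q_contrib.sum(axis=1)

# Hotelling's T2: squared Mahalanobis distance within the model;
# the contributions attribute it back to the original features
# (one common chemometrics definition among several).
Xc = X - pca.mean_
T2 = (T**2 / pca.explained_variance_).sum(axis=1)
T2_contrib = ((T / pca.explained_variance_) @ P) * Xc
```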
Changes in tumor volume and tumor characteristics over time are important biomarkers for cancer therapy. In this context, FDG-PET/CT scans are routinely used for staging and re-staging of cancer, as the radiolabeled fluorodeoxyglucose is taken up in regions of high metabolism. Unfortunately, these regions of high metabolism are not specific to tumors and can also represent physiological uptake by normal functioning organs, inflammation, or infection, making detailed and reliable tumor segmentation in these scans a demanding task. The autoPET challenge addresses this research gap by providing a public dataset of FDG-PET/CT scans from 900 patients to encourage further improvement in this field. Our contribution to this challenge is an ensemble of two state-of-the-art segmentation models, namely nnU-Net and Swin UNETR, augmented by a maximum intensity projection classifier that acts like a gating mechanism. If it predicts the presence of lesions, both segmentations are combined via a late-fusion approach. Our solution achieves a Dice score of 72.12% in cross-validation on patients diagnosed with lung cancer, melanoma, and lymphoma. Code: https://github.com/heiligerl/autopet_submission
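A schematic of the gating-plus-late-fusion logic described above; the projection axis, threshold, and averaging rule are our assumptions for illustration, and the classifier itself is left as a stub:

```python
import numpy as np

def fuse_segmentations(pet_volume, prob_nnunet, prob_swin, mip_classifier,
                       threshold=0.5):
    """Gate two segmentation probability maps with a MIP-based classifier.

    pet_volume:     (z, y, x) PET intensities
    prob_*:         (z, y, x) lesion probabilities from the two models
    mip_classifier: callable mapping a 2D MIP to P(lesion present)
    """
    mip = pet_volume.max(axis=0)              # maximum intensity projection
    if mip_classifier(mip) < threshold:
        return np.zeros_like(prob_nnunet, dtype=bool)   # gated: no lesions
    fused = (prob_nnunet + prob_swin) / 2     # late fusion by averaging
    return fused > threshold

# Stub classifier for illustration only.
mask = fuse_segmentations(np.random.rand(64, 128, 128),
                          np.random.rand(64, 128, 128),
                          np.random.rand(64, 128, 128),
                          mip_classifier=lambda mip: 0.9)
```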
Background: Deep learning-based autodelineation of head and neck lymph node levels (HN_LNL) is of high relevance to radiotherapy research and clinical treatment planning but remains understudied in the academic literature. Methods: An expert-delineated cohort of 35 planning CTs was used to train an nnU-Net 3D-fullres/2D-ensemble model for autosegmentation of 20 different HN_LNLs. Validation was performed on an independent test set (n=20). In a completely blinded evaluation, 3 clinical experts rated the quality of the deep learning autosegmentations in a head-to-head comparison with expert-created contours. For a subgroup of 10 cases, intraobserver variability was compared to deep learning autosegmentation performance. The effect of autocontour consistency with the CT slice plane orientation on geometric accuracy and expert rating was investigated. Results: The mean blinded expert rating of deep learning segmentations adjusted to the CT slice plane was significantly better than that of expert-created contours (81.0 vs. 79.6, p<0.001), whereas deep learning segmentations without slice-plane adjustment were rated significantly worse than expert-created contours (77.2 vs. 79.6, p<0.001). The geometric accuracy of deep learning segmentations was indistinguishable from intraobserver variability (mean Dice, 0.78 vs. 0.77, p=0.064), with significant differences in accuracy between levels (p<0.001). The clinical relevance of consistency with the CT slice plane orientation was not captured by geometric accuracy metrics (Dice, 0.78 vs. 0.78, p=0.572). Conclusion: We show that nnU-Net 3D-fullres/2D-ensemble models can be used for highly accurate autodelineation of HN_LNLs using only a limited training dataset, making them ideally suited for large-scale standardized autodelineation of HN_LNLs in the research setting. Geometric accuracy metrics are only an imperfect surrogate for blinded expert ratings.
Correlative light and electron microscopy is a powerful tool to study the internal structure of cells. It combines the mutual benefits of correlating light (LM) and electron (EM) microscopy information. However, the classical approach of overlaying LM onto EM images to assign functional to structural information is hampered by the large discrepancy in the structural detail visible in the LM images. This paper aims at investigating an optimized approach, which we call EM-guided deconvolution. It attempts to automatically assign fluorescence-labeled structures to the details visible in the EM image, to bridge the gap in both resolution and specificity between the two imaging modalities.
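The abstract gives the idea but not the algorithm. As a speculative reduction, the sketch below extends classical Richardson-Lucy deconvolution of the LM image with a multiplicative confidence map derived from the registered EM image; this is our illustrative stand-in, not the paper's method, and the PSF, prior, and normalization are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def em_guided_rl(lm_image, psf, em_prior, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution of a light-microscopy image, biased
    towards structures visible in a registered EM image (toy version).

    em_prior: non-negative map derived from the EM image, high where a
              fluorescent structure is plausible (normalization arbitrary).
    """
    psf_flip = psf[::-1, ::-1]
    u = np.full_like(lm_image, lm_image.mean())     # flat initial estimate
    for _ in range(n_iter):
        conv = fftconvolve(u, psf, mode="same")
        ratio = lm_image / (conv + eps)
        u = u * fftconvolve(ratio, psf_flip, mode="same")
        u = u * em_prior                            # EM guidance step
        u = u * lm_image.sum() / (u.sum() + eps)    # keep total flux
    return u

# Toy usage with a Gaussian PSF and a neutral (all-ones) EM prior.
g = np.exp(-np.linspace(-2, 2, 9) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
restored = em_guided_rl(np.random.rand(64, 64) + 1e-3, psf,
                        em_prior=np.ones((64, 64)))
```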