Fetal growth assessment from ultrasound is based on a few biometric measurements that are performed manually and assessed relative to the expected gestational age. Reliable biometry estimation depends on the precise detection of landmarks in standard ultrasound planes. Manual annotation can be a time-consuming and operator-dependent task, and may lead to high measurement variability. Existing methods for automatic fetal biometry rely on an initial automatic segmentation of fetal structures followed by geometric landmark detection. However, segmentation annotation is time-consuming and may be inaccurate, and landmark detection requires developing measurement-specific geometric methods. This paper describes BiometryNet, an end-to-end landmark regression framework for fetal biometry estimation that overcomes these limitations. It includes a novel Dynamic Orientation Determination (DOD) method for enforcing measurement-specific orientation consistency during network training. DOD reduces variability in network training and increases landmark localization accuracy, thereby yielding accurate and robust biometric measurements. To validate our method, we assembled a dataset of 3,398 ultrasound images from 1,829 subjects, acquired at three clinical sites with seven different ultrasound devices. Comparison and cross-validation of three different biometric measurements on two independent datasets show that BiometryNet is robust and produces accurate measurements whose errors are below the clinically permissible errors, outperforming other existing automated biometry estimation methods. Code is available at https://github.com/netanellavisdris/fetalbiometry.
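The orientation-consistency idea behind DOD can be illustrated with a toy loss: score a two-landmark prediction under both endpoint orderings and keep the consistent (cheaper) one, so the network is never penalized for an arbitrary endpoint order. This is a minimal numpy sketch of the general idea, not the paper's actual implementation; the function name and the squared-error loss are our own choices.

```python
import numpy as np

def dod_loss(pred, target):
    """Toy orientation-consistent loss for a two-landmark measurement.
    pred, target: (2, 2) arrays of (x, y) landmark coordinates.
    The prediction is scored under both endpoint orderings and the
    cheaper assignment is kept, making the loss swap-invariant."""
    direct = np.sum((pred - target) ** 2)
    swapped = np.sum((pred[::-1] - target) ** 2)
    return min(direct, swapped)

# A prediction matching the target up to endpoint order costs nothing.
target = np.array([[10.0, 20.0], [40.0, 60.0]])
pred = np.array([[40.0, 60.0], [10.0, 20.0]])  # endpoints swapped
print(dod_loss(pred, target))  # prints 0.0
```

With a plain ordered loss, the swapped prediction above would incur a large penalty despite marking both landmarks perfectly; resolving the ordering dynamically removes that spurious training signal.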
To develop an automated workflow for rectal cancer three-dimensional conformal radiotherapy treatment planning that combines deep learning (DL) aperture prediction and a forward-planning algorithm. We designed an algorithm to automate the clinical workflow for field-in-field planning. DL models were trained, validated, and tested on 555 patients to automatically generate aperture shapes for primary and boost fields. Network inputs were the digitally reconstructed radiograph, gross tumor volume (GTV), and nodal GTV. A physician scored each aperture for 20 patients on a 5-point scale (>3 acceptable). A planning algorithm was then developed to create a homogeneous dose using a combination of wedges and subfields. The algorithm iteratively identifies hot-spot volumes, creates subfields, and optimizes beam weights without user intervention. The algorithm was tested on 20 patients using clinical apertures with different settings, and the resulting plans (4 plans/patient) were scored by a physician. The end-to-end workflow was then tested and scored by a physician on 39 patients using DL-generated apertures and the planning algorithm. Dice scores for the predicted apertures were 0.95, 0.94, and 0.90 for the posterior, lateral, and boost fields, respectively. 100%, 95%, and 87.5% of the posterior, lateral, and boost apertures, respectively, were clinically acceptable. Wedged and non-wedged plans were clinically acceptable for 85% and 50% of patients, respectively. The hot-spot dose percentage of the final plans was reduced from 121% (±14%) to 109% (±5%) of the prescription dose. The integrated end-to-end workflow of automatically generated apertures and optimized field-in-field plans produced acceptable plans for 38/39 (97%) of patients. We have successfully automated the clinical workflow for generating radiotherapy plans at our institution.
For centuries, scientists have observed nature to understand the laws that govern the physical world. The traditional process of turning observations into physical understanding is slow: imperfect models are constructed and tested to explain relationships in the data. Powerful new algorithms can enable computers to learn physics by observing images and videos. Inspired by this idea, instead of training machine learning models on physical quantities, we used images, that is, pixel information. For this work and as a proof of concept, the physics of interest are wind-driven spatial patterns. Examples of these phenomena include features of wind-blown dunes and volcanic ash deposits, wildfire smoke, and air-pollution plumes. We used computer-model simulations of spatial deposition patterns to approximate images from a hypothetical imaging device whose outputs are red, green, and blue (RGB) color images with channel values ranging from 0 to 255. In this paper, we explore deep convolutional neural network-based autoencoders to exploit the relationships in wind-driven spatial patterns, which commonly occur in geosciences, and reduce their dimensionality. Reducing the data dimension with an encoder allows us to train a deep, fully connected neural network model linking a number of geographic and meteorological scalar inputs to the encoded space. Once this is achieved, the full spatial pattern is reconstructed using the decoder. We demonstrate this approach on images of spatial deposition from a pollution source, where the encoder compresses the dimensionality to 0.02% of its original size, and the full predictive model achieves an accuracy of 92% on test data.
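The encode-regress-decode pipeline described above can be sketched with stand-in components. All sizes here are hypothetical except the compression ratio: a 10-dimensional code for a 128×128 RGB image gives 10/49,152 ≈ 0.02%, matching the figure quoted in the abstract. Random weight matrices stand in for the trained convolutional encoder, the fully connected regressor, and the decoder; only the shapes and data flow are the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 128x128 RGB deposition image (49,152 values)
# compressed to a 10-dim code -- roughly the quoted 0.02% ratio.
img_dim, code_dim, n_scalars = 128 * 128 * 3, 10, 5

# Random-weight stand-ins for the trained encoder, the fully
# connected scalar-to-code regressor, and the decoder.
W_enc = rng.normal(size=(img_dim, code_dim)) * 0.01
W_reg = rng.normal(size=(n_scalars, code_dim))
W_dec = rng.normal(size=(code_dim, img_dim))

image = rng.random(img_dim)           # simulated deposition image
scalars = rng.random(n_scalars)       # geographic/meteorological inputs

code = np.tanh(image @ W_enc)         # encoder: image -> compact code
pred_code = np.tanh(scalars @ W_reg)  # FC net: scalars -> predicted code
recon = pred_code @ W_dec             # decoder: code -> full pattern

print(code.shape, pred_code.shape, recon.shape)
```

At inference time only the scalar inputs are needed: the regressor predicts a code and the decoder turns it back into a full spatial pattern, which is what makes the compressed latent space the linchpin of the approach.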
The study of signatures of aging in terms of genomic biomarkers can be uniquely helpful in understanding the mechanisms of aging and in developing models to accurately predict age. Prior studies have employed gene expression and DNA methylation data aiming at accurate prediction of age. In this line, we propose a new framework for human age estimation using information from human dermal fibroblast gene expression data. First, we propose a new spatial representation, as well as a data augmentation approach, for gene expression data. Next, to predict age, we design a neural network architecture and apply it to this new representation of the original and augmented data in an ensemble classification approach. Our experimental results suggest the superiority of the proposed framework over state-of-the-art age estimation methods using DNA methylation and gene expression data.
The idea of using deep autoencoders to encode seismic waveform features and then use them in different seismological applications is appealing. In this paper, we design tests to evaluate this idea of using autoencoders as feature extractors for different seismological applications, such as event discrimination (i.e., earthquake vs. noise waveforms, earthquake vs. explosion waveforms) and phase picking. These tests involve training an autoencoder, either undercomplete or overcomplete, on a large amount of seismic waveforms, and then using the trained encoder as a feature extractor with subsequent application layers (either a fully connected layer, or convolutional layers plus a fully connected layer) to make the decision. By comparing the performance of these newly designed models against baseline models trained from scratch, we conclude that the autoencoder feature-extractor approach can perform well under certain conditions, such as when the target problem requires features similar to those encoded by the autoencoder, when there is a relatively small amount of training data, and when certain model structures and training strategies are used. The model structure that worked best across all these tests was an overcomplete autoencoder with convolutional and fully connected application layers.
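The train-then-freeze recipe above can be sketched on synthetic two-class "waveforms". Everything here is a simplified stand-in: a linear (SVD) encoder plays the role of the trained autoencoder's encoder (it is the optimum a linear autoencoder converges to), a logistic-regression head plays the role of the fully connected application layer, and the burst signal, sizes, and thresholds are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for seismic traces: "noise" is white noise,
# "event" adds a damped sinusoid burst.
n_per, d, k = 200, 128, 8
t = np.arange(d)
burst = 2.0 * np.exp(-t / 30.0) * np.sin(2 * np.pi * t / 16.0)
X = np.vstack([rng.normal(size=(n_per, d)),           # noise class
               rng.normal(size=(n_per, d)) + burst])  # event class
y = np.r_[np.zeros(n_per), np.ones(n_per)]

# Frozen "encoder": top-k right singular vectors of the data, i.e.
# the subspace a trained linear autoencoder would converge to.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
encode = lambda A: (A - mean) @ Vt[:k].T

# Lightweight application head on the frozen encoded features
# (logistic regression trained by gradient descent).
Z = encode(X)
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    w -= 0.1 * Z.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

acc = np.mean(((Z @ w + b) > 0) == y)
print(f"discrimination accuracy on encoded features: {acc:.2f}")
```

Only the small head is trained on the labeled task; the encoder stays fixed, which is precisely the setting where the paper finds the approach pays off when labeled training data are scarce.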
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
We present the interpretable meta neural ordinary differential equation (iMODE) method to rapidly learn generalizable (i.e., not parameter-specific) dynamics from trajectories of multiple dynamical systems that vary in their physical parameters. The iMODE method learns meta-knowledge, the functional variations of the force field of dynamical system instances without knowing the physical parameters, by adopting a bi-level optimization framework: an outer level capturing the common force field form among studied dynamical system instances and an inner level adapting to individual system instances. A priori physical knowledge can be conveniently embedded in the neural network architecture as inductive bias, such as conservative force field and Euclidean symmetry. With the learned meta-knowledge, iMODE can model an unseen system within seconds, and inversely reveal knowledge on the physical parameters of a system, or as a Neural Gauge to "measure" the physical parameters of an unseen system with observed trajectories. We test the validity of the iMODE method on bistable, double pendulum, Van der Pol, Slinky, and reaction-diffusion systems.
While the brain connectivity network can inform the understanding and diagnosis of developmental dyslexia, its cause-effect relationships have not yet been sufficiently examined. Employing electroencephalography signals and a band-limited white-noise stimulus at 4.8 Hz (the prosodic-syllabic frequency), we measure the phase Granger causalities among channels to identify differences between dyslexic learners and controls, thereby proposing a method to calculate directional connectivity. As causal relationships run in both directions, we explore three scenarios, namely channels' activity as sources, as sinks, and in total. Our proposed method can be used for both classification and exploratory analysis. In all scenarios, we find confirmation of the established right-lateralized Theta sampling network anomaly, in line with the temporal sampling framework's assumption of oscillatory differences in the Theta and Gamma bands. Further, we show that this anomaly primarily occurs in the causal relationships of channels acting as sinks, where it is significantly more pronounced than when only total activity is observed. In the sink scenario, our classifier obtains 0.84 and 0.88 accuracy and 0.87 and 0.93 AUC for the Theta and Gamma bands, respectively.
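The Granger-causality computation underlying such an analysis can be sketched with ordinary least squares: past values of x are added to an autoregressive model of y, and an F-statistic measures how much they reduce the residual error. This is a plain time-domain bivariate sketch on synthetic data, not the paper's phase Granger causality; the lag order and coupling are arbitrary choices.

```python
import numpy as np

def lagged(v, lag):
    """Columns [v[t-1], ..., v[t-lag]] for t = lag .. len(v)-1."""
    n = len(v)
    return np.column_stack([v[lag - j: n - j] for j in range(1, lag + 1)])

def granger_f(x, y, lag=2):
    """F-statistic: do past values of x improve an AR model of y?"""
    Y = y[lag:]
    Xr = np.column_stack([np.ones(len(Y)), lagged(y, lag)])  # restricted
    Xf = np.column_stack([Xr, lagged(x, lag)])               # full
    br = np.linalg.lstsq(Xr, Y, rcond=None)[0]
    bf = np.linalg.lstsq(Xf, Y, rcond=None)[0]
    rss_r = np.sum((Y - Xr @ br) ** 2)
    rss_f = np.sum((Y - Xf @ bf) ** 2)
    df2 = len(Y) - Xf.shape[1]
    return ((rss_r - rss_f) / lag) / (rss_f / df2)

# Synthetic pair where x drives y with a one-step delay.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.9 * x[t - 1] + 0.1 * rng.normal()

print(granger_f(x, y), granger_f(y, x))  # F(x->y) is large; F(y->x) is not
```

Computing this statistic for every ordered channel pair is what yields a directed connectivity matrix, from which a channel's role as source (outgoing causality) or sink (incoming causality) can be aggregated.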
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
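The central claim, that the posterior collapses exactly when the latent variable is non-identifiable, can be illustrated with exact Bayes on a grid: if the likelihood p(x|z) does not vary with z, Bayes' rule returns a posterior equal to the prior no matter how inference is performed, so no inference trick can help. A toy numpy illustration follows; the Gaussian forms and the observation value are our own choices.

```python
import numpy as np

# Exact Bayes on a grid with prior z ~ N(0, 1).
z = np.linspace(-5.0, 5.0, 1001)
prior = np.exp(-z ** 2 / 2)
prior /= prior.sum()

x = 1.7  # an arbitrary observation

# Non-identifiable model: the likelihood ignores z entirely.
lik_collapsed = np.exp(-x ** 2 / 2) * np.ones_like(z)
# Identifiable model: the likelihood depends on z (x ~ N(z, 1)).
lik_identifiable = np.exp(-(x - z) ** 2 / 2)

post_c = prior * lik_collapsed
post_c /= post_c.sum()
post_i = prior * lik_identifiable
post_i /= post_i.sum()

print(np.max(np.abs(post_c - prior)))  # ~0: posterior equals prior (collapse)
print(np.max(np.abs(post_i - prior)))  # clearly nonzero: no collapse
```

This is the sense in which collapse is a property of the generative model rather than of neural networks or variational approximation: the collapsed case here uses exact inference, yet the posterior still carries no information about z.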
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/ .