Previous work has shown the potential of deep learning to predict renal obstruction using kidney ultrasound images. However, these image-based classifiers have been trained with the goal of single-visit inference in mind. We compare methods from video action recognition (i.e. convolutional pooling, LSTM, TSM) for adapting single-visit convolutional models to multi-visit inference. We demonstrate that incorporating images from a patient's past hospital visits provides only a small benefit for the prediction of obstructive hydronephrosis. Therefore, inclusion of prior ultrasounds is beneficial, but prediction based on the latest ultrasound alone is sufficient for patient risk stratification.
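As a rough illustration of the convolutional pooling variant compared above (the backbone, dimensions, and class names below are assumptions, not the authors' released code), a single-visit CNN can be extended to multi-visit input by mean-pooling per-visit embeddings before classification:

```python
# Hypothetical sketch: adapt a single-visit CNN classifier to multi-visit input
# by mean-pooling the per-visit embeddings (one temporal-aggregation strategy).
import torch
import torch.nn as nn
from torchvision import models

class MultiVisitPoolingClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)   # assumed backbone
        backbone.fc = nn.Identity()                # keep the 512-d embeddings
        self.backbone = backbone
        self.head = nn.Linear(512, num_classes)

    def forward(self, visits: torch.Tensor) -> torch.Tensor:
        # visits: (batch, num_visits, 3, H, W) -- one ultrasound image per visit
        b, v, c, h, w = visits.shape
        feats = self.backbone(visits.reshape(b * v, c, h, w)).reshape(b, v, -1)
        pooled = feats.mean(dim=1)                 # temporal (per-visit) pooling
        return self.head(pooled)

# Usage: logits = MultiVisitPoolingClassifier()(torch.randn(4, 3, 3, 224, 224))
```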
Convolutional neural networks (CNNs) have been extensively applied for image recognition problems, giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1% vs. 60.9%) and the UCF-101 dataset both with (88.6% vs. 88.0%) and without additional optical flow information (82.6% vs. 73.0%).
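A minimal sketch of the second method described above, an LSTM connected to the output of a frame-level CNN (the ResNet-18 encoder, hidden size, and last-step readout are illustrative assumptions, not the paper's exact configuration):

```python
# Illustrative CNN+LSTM video classifier: per-frame CNN features are passed
# through an LSTM and the last hidden state is used for classification.
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTMVideoClassifier(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()                 # expose 512-d frame embeddings
        self.cnn = cnn
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)              # sequence of hidden states
        return self.fc(out[:, -1])             # classify from the last time step
```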
The surgical operating room (OR) offers many opportunities for automation and optimization. Video from a variety of sources in the OR is increasingly available, and the medical community seeks to leverage this rich data to develop automated methods that improve interventional care, lower costs, and improve overall patient outcomes. However, existing datasets from OR room cameras are limited in size or modality, so it remains unclear which sensor modalities are best suited for tasks such as recognizing surgical actions from video. This study demonstrates that surgical action recognition performance can vary depending on the image modality used. We perform a methodical analysis of several commonly used sensor modalities and propose two fusion approaches that improve classification performance. The analyses are carried out on a set of multi-view RGB-D video recordings of 18 laparoscopic procedures.
Electronic health records (EHRs) aggregate diverse information at the patient level and hold a trajectory representing the evolution of the patient's health status over time. Although this information provides context and could be leveraged by physicians to monitor patients' health and make more accurate prognoses/diagnoses, patient records can contain information spanning long periods of time, which, combined with the rate at which medical data are generated, makes clinical decision-making more complex. Patient trajectory modelling can help by exploring existing information in a scalable manner and can enhance healthcare quality by fostering preventive medicine practices. We propose a solution for modelling patient trajectories that combines different types of information and accounts for the temporal aspect of clinical data. The solution relies on two different architectures: one supporting a flexible set of input features to convert patient admissions into dense representations; and a second exploring the extracted admission representations in a recurrent-based architecture, where patient trajectories are processed in sub-sequences using a sliding-window mechanism. The developed solution was evaluated on the publicly available MIMIC-III clinical database for two different clinical outcomes, unexpected patient readmission and disease progression. The results obtained demonstrate the potential of the first architecture for modelling and diagnosis prediction using single patient admissions. Although the information in the clinical text did not show the discriminative power observed in other existing works, this may be explained by the fine-tuning of the clinical models. Finally, we demonstrate the potential of the sequence-based architecture using a sliding-window mechanism to represent the input data, obtaining performance comparable to that of other existing solutions.
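To make the sliding-window idea concrete, here is an illustrative sketch (window size, embedding dimension, and module names are assumptions): dense admission representations are grouped into overlapping sub-sequences and processed by a recurrent model, yielding one prediction per window.

```python
# Hypothetical sketch: slide a window over admission embeddings and feed each
# sub-sequence to a GRU that predicts a clinical outcome.
import torch
import torch.nn as nn

def sliding_windows(admissions: torch.Tensor, window: int = 3) -> torch.Tensor:
    # admissions: (num_admissions, emb_dim) -> (num_windows, window, emb_dim)
    return admissions.unfold(0, window, 1).transpose(1, 2)

class TrajectoryGRU(nn.Module):
    def __init__(self, emb_dim: int = 128, hidden: int = 64, num_outcomes: int = 2):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_outcomes)

    def forward(self, windows: torch.Tensor) -> torch.Tensor:
        _, h_n = self.gru(windows)          # h_n: (1, num_windows, hidden)
        return self.head(h_n.squeeze(0))    # one prediction per window

# Example: 10 admissions with 128-d embeddings, windows of 3 consecutive admissions.
preds = TrajectoryGRU()(sliding_windows(torch.randn(10, 128)))
```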
Advances in machine learning and contactless sensors have enabled the understanding of complex human behaviors in healthcare settings. In particular, several deep learning systems have been introduced to enable comprehensive analysis of neurodevelopmental conditions such as autism spectrum disorder (ASD). This condition affects children in their early developmental stages, and diagnosis relies entirely on observing the child's behavior and detecting behavioral cues. However, the diagnostic process is time-consuming, as it requires long-term behavioral observation, and specialists are scarce. We demonstrate the effectiveness of a region-based computer vision system to help clinicians and parents analyze a child's behavior. For this purpose, we adopt and enhance a dataset for analyzing autism-related actions using videos of children captured in uncontrolled environments (e.g., videos collected with consumer-grade cameras in a variety of settings). The data are preprocessed by detecting the target child in the video to reduce the impact of background noise. Motivated by the effectiveness of temporal convolutional models, we propose models capable of extracting action features from video frames and classifying autism-related behaviors by analyzing the relationships between frames in a video. Through extensive evaluation of feature extraction and learning strategies, we demonstrate that the best performance is achieved with the Inflated 3D ConvNet and the Multi-Stage Temporal Convolutional Network, reaching a weighted F1-score of 0.83 for the classification of three autism-related actions, outperforming existing methods. We also propose a lightweight solution by employing an ESNet backbone within the same system, achieving a competitive 0.71 weighted F1-score and enabling potential deployment on embedded systems.
Lung cancer is the leading cause of cancer death worldwide, and lung adenocarcinoma is the most prevalent form of lung cancer. EGFR-positive lung adenocarcinomas have been shown to have high response rates to TKI therapy, making EGFR mutation an essential part of molecular testing for lung cancer. Despite current guidelines considering such testing essential, a large proportion of patients are not routinely tested, leaving millions without optimal treatment for their lung cancer. Sequencing is the gold standard for molecular testing of EGFR mutations, but results can take weeks to come back, which is not ideal in time-constrained scenarios. The development of an alternative screening tool capable of detecting EGFR mutations quickly and cheaply, while preserving tissue for sequencing, could help reduce the number of patients receiving sub-optimal treatment. We propose a multi-modal approach that integrates pathology images and clinical variables to predict EGFR mutation status, achieving an AUC of 84% on the largest clinical cohort to date. Such a computational model could be deployed at scale at little additional cost. Its clinical application could reduce the number of patients receiving sub-optimal treatment by 53.1% in China and by as much as 96.6% in the United States.
Multi-modal fusion approaches aim to integrate information from different data sources. Unlike natural datasets, such as in audio-visual applications, where samples consist of "paired" modalities, data in healthcare are often collected asynchronously. Hence, requiring all modalities to be present for a given sample is not realistic for clinical tasks and significantly limits the size of the dataset during training. In this paper, we propose MedFuse, a conceptually simple yet promising LSTM-based fusion module that can accommodate uni-modal as well as multi-modal input. We evaluate the fusion method and introduce new benchmark results for in-hospital mortality prediction and phenotype classification, using clinical time-series data from the MIMIC-IV dataset and corresponding chest X-ray images from MIMIC-CXR. Compared with more complex multi-modal fusion strategies, MedFuse leads by a considerable margin on the fully paired test set. It also remains robust on the partially paired test set, which contains samples with missing chest X-ray images. We release our code for reproducibility and to enable the evaluation of competing models in the future.
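The following sketch shows the gist of such an LSTM-based fusion module accommodating uni-modal and multi-modal input (the dimensions and the way a missing chest X-ray is handled are assumptions, not the released MedFuse code): modality embeddings form a short sequence, so a sample without an image simply yields a shorter sequence.

```python
# Hedged sketch of LSTM-based modality fusion: EHR and (optionally) image
# embeddings are projected to a shared size and fused by an LSTM.
import torch
import torch.nn as nn

class LSTMFusion(nn.Module):
    def __init__(self, ehr_dim=76, img_dim=512, fused_dim=256, num_classes=1):
        super().__init__()
        self.ehr_proj = nn.Linear(ehr_dim, fused_dim)
        self.img_proj = nn.Linear(img_dim, fused_dim)
        self.fuser = nn.LSTM(fused_dim, fused_dim, batch_first=True)
        self.head = nn.Linear(fused_dim, num_classes)

    def forward(self, ehr_emb, img_emb=None):
        tokens = [self.ehr_proj(ehr_emb)]
        if img_emb is not None:                 # uni-modal samples skip this token
            tokens.append(self.img_proj(img_emb))
        seq = torch.stack(tokens, dim=1)        # (batch, 1 or 2, fused_dim)
        _, (h_n, _) = self.fuser(seq)
        return self.head(h_n[-1])               # logits for the clinical task
```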
Seizure type identification is essential for the treatment and management of epileptic patients. However, it is a difficult process known to be time-consuming and labor-intensive. Automated diagnosis systems, with the advancement of machine learning algorithms, have the potential to accelerate the classification process, alert patients, and support physicians in making quick and accurate decisions. In this paper, we present a novel multi-path seizure-type classification deep learning network (MP-SeizNet), consisting of a convolutional neural network (CNN) and a bidirectional long short-term memory neural network (Bi-LSTM) with an attention mechanism. The objective of this study was to classify specific types of seizures, including complex partial, simple partial, absence, tonic, and tonic-clonic seizures, using only electroencephalogram (EEG) data. The EEG data is fed to our proposed model in two different representations. The CNN was fed with wavelet-based features extracted from the EEG signals, while the Bi-LSTM was fed with raw EEG signals, letting MP-SeizNet jointly learn from different representations of the seizure data for more accurate information learning. The proposed MP-SeizNet was evaluated using the largest available EEG epilepsy database, the Temple University Hospital EEG Seizure Corpus, TUSZ v1.5.2. We evaluated our proposed model across different patient data using three-fold cross-validation and across seizure data using five-fold cross-validation, achieving F1 scores of 87.6% and 98.1%, respectively.
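A simplified two-path sketch in the spirit of MP-SeizNet (the layer sizes and the omission of the attention mechanism are simplifications of mine, not the paper's architecture): a CNN path for wavelet-based features and a Bi-LSTM path for raw EEG, concatenated before classification.

```python
# Illustrative two-path seizure classifier: CNN over wavelet features,
# Bi-LSTM over raw EEG, fused by concatenation.
import torch
import torch.nn as nn

class TwoPathSeizureNet(nn.Module):
    def __init__(self, n_channels=20, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # -> (batch, 32, 1)
        )
        self.bilstm = nn.LSTM(n_channels, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(32 + 2 * 64, n_classes)

    def forward(self, wavelet, raw):
        # wavelet: (batch, n_channels, n_wavelet_feats); raw: (batch, time, n_channels)
        c = self.cnn(wavelet).squeeze(-1)     # (batch, 32)
        _, (h_n, _) = self.bilstm(raw)        # h_n: (2, batch, 64)
        r = torch.cat([h_n[0], h_n[1]], dim=1)
        return self.head(torch.cat([c, r], dim=1))
```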
The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.
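The inflation trick that I3D builds on can be sketched as follows (function and variable names are mine, not the official implementation): a pretrained 2D kernel is repeated along a new temporal axis and rescaled so that a temporally constant video produces the same activations as the source image model.

```python
# Hedged sketch of 2D-to-3D kernel inflation.
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, time_dim: int = 3) -> nn.Conv3d:
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(time_dim, *conv2d.kernel_size),
        stride=(1, *conv2d.stride), padding=(time_dim // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        # repeat the 2D kernel across time and divide by time_dim to preserve scale
        conv3d.weight.copy_(
            conv2d.weight.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) / time_dim
        )
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

# Example: inflate the first layer of an ImageNet-style network.
inflated = inflate_conv2d(nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3))
```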
Automatic action recognition from video and kinematic data is an important machine learning problem with applications ranging from robotics to smart health. Most existing work focuses on recognizing coarse actions, such as running, climbing, or cutting vegetables, which have relatively long durations. This is an important limitation for applications that require the recognition of subtle motions at high temporal resolution. For example, in stroke recovery, quantifying rehabilitation dose requires differentiating motions with sub-second durations. Our goal is to bridge this gap. To this end, we introduce StrokeRehab, a large-scale, multi-modal dataset that serves as a new action-recognition benchmark including subtle short-duration actions labeled at high temporal resolution. These short-duration actions are called functional primitives and consist of reach, transport, reposition, stabilize, and idle. The dataset consists of high-quality inertial measurement unit sensor and video data of 41 stroke-impaired patients performing activities of daily living such as feeding and tooth brushing. We show that state-of-the-art models based on segmentation produce noisy predictions when applied to these data, which often leads to over-counting of actions. To address this, we propose a novel approach for high-resolution action recognition, inspired by speech recognition techniques, which is based on a sequence-to-sequence model that directly predicts the sequence of actions. This approach outperforms current state-of-the-art methods on the StrokeRehab dataset, as well as on the standard benchmark datasets 50Salads, Breakfast, and Jigsaws.
Pneumonia, a respiratory infection brought on by bacteria or viruses, affects a large number of people, especially in developing and impoverished countries where high levels of pollution, unclean living conditions, and overcrowding are frequently observed, along with insufficient medical infrastructure. Pleural effusion, a condition in which fluids fill the lung and complicate breathing, is brought on by pneumonia. Early detection of pneumonia is essential for ensuring curative care and boosting survival rates. The approach most usually used to diagnose pneumonia is chest X-ray imaging. The purpose of this work is to develop a method for the automatic diagnosis of bacterial and viral pneumonia in digital X-ray images. This article first presents the authors' technique, and then gives a comprehensive report on recent developments in the field of reliable diagnosis of pneumonia. In this study, we tuned state-of-the-art deep convolutional neural networks to classify pneumonia from chest X-ray images and tested their performance. The deep learning architectures are compared empirically: VGG19, ResNet152V2, ResNeXt101, SEResNet152, MobileNetV2, and DenseNet with 201 layers are among the architectures tested. The experiment data consists of two groups, sick and healthy X-ray images. To enable appropriate action against the disease as soon as possible, rapid identification models are preferred. DenseNet201 has shown no overfitting or performance degradation in our experiments, and its accuracy tends to increase as the number of epochs increases. Further, DenseNet201 achieves state-of-the-art performance with a significantly smaller number of parameters and within a reasonable computing time. This architecture outperforms the competition in terms of testing accuracy, scoring 95%. Each architecture was trained using Keras with Theano as the backend.
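As a hedged illustration of the transfer-learning setup above (the paper used Keras with a Theano backend; this analogous PyTorch snippet and its details are assumptions of mine): a DenseNet-201 backbone is reused with a new two-class head for healthy vs. pneumonia chest X-rays.

```python
# Illustrative DenseNet-201 fine-tuning sketch for two-class X-ray classification.
import torch.nn as nn
from torchvision import models

model = models.densenet201(weights=None)          # or load ImageNet weights
model.classifier = nn.Linear(model.classifier.in_features, 2)  # sick vs. healthy
# Train with a standard cross-entropy loss, e.g.:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# loss = nn.CrossEntropyLoss()(model(batch_images), batch_labels)
```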
Post-traumatic stress disorder (PTSD) is a chronic, debilitating mental condition that develops in response to catastrophic life events, such as military combat, sexual assault, and natural disasters. PTSD is characterized by flashbacks of past traumatic events, intrusive thoughts, nightmares, hypervigilance, and sleep disturbances, all of which affect a person's life and lead to considerable social, occupational, and interpersonal dysfunction. PTSD is diagnosed by medical professionals using self-assessment questionnaires of PTSD symptoms as defined in the Diagnostic and Statistical Manual of Mental Disorders (DSM). In this paper, and for the first time, we collect, annotate, and prepare for public distribution a new video database for automatic PTSD diagnosis, called the PTSD in the Wild dataset. The database exhibits "natural" and large variability in acquisition conditions, facial expressions, illumination, focus, resolution, age, gender, race, occlusion, and background. In addition to describing the details of the dataset collection, we provide a benchmark of computer vision and machine learning based approaches for evaluating PTSD on the PTSD in the Wild dataset. In addition, we propose and evaluate a deep learning based approach for PTSD detection. The proposed approach shows very promising results. Interested researchers can download a copy of the PTSD-in-the-Wild dataset from: http://www.lissi.fr/ptsd-dataset/
Wind power forecasting helps with the planning for the power systems by contributing to having a higher level of certainty in decision-making. Due to the randomness inherent to meteorological events (e.g., wind speeds), making highly accurate long-term predictions for wind power can be extremely difficult. One approach to remedy this challenge is to utilize weather information from multiple points across a geographical grid to obtain a holistic view of the wind patterns, along with temporal information from the previous power outputs of the wind farms. Our proposed CNN-RNN architecture combines convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to extract spatial and temporal information from multi-dimensional input data to make day-ahead predictions. In this regard, our method incorporates an ultra-wide learning view, combining data from multiple numerical weather prediction models, wind farms, and geographical locations. Additionally, we experiment with global forecasting approaches to understand the impact of training the same model over the datasets obtained from multiple different wind farms, and we employ a method where spatial information extracted from convolutional layers is passed to a tree ensemble (e.g., Light Gradient Boosting Machine (LGBM)) instead of fully connected layers. The results show that our proposed CNN-RNN architecture outperforms other models such as LGBM, Extra Tree regressor and linear regression when trained globally, but fails to replicate such performance when trained individually on each farm. We also observe that passing the spatial information from CNN to LGBM improves its performance, providing further evidence of CNN's spatial feature extraction capabilities.
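A minimal sketch of passing CNN-extracted spatial features to a tree ensemble instead of fully connected layers (grid size, channel counts, and hyper-parameters are placeholders rather than the paper's settings):

```python
# Illustrative pipeline: spatial features from a small CNN over a weather grid
# are flattened and handed to a LightGBM regressor for day-ahead prediction.
import numpy as np
import torch
import torch.nn as nn
import lightgbm as lgb

cnn = nn.Sequential(                       # toy spatial encoder over a weather grid
    nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
)

def extract_features(grids: np.ndarray) -> np.ndarray:
    with torch.no_grad():
        return cnn(torch.from_numpy(grids).float()).numpy()

# grids: (num_samples, 4 weather variables, H, W); power: day-ahead targets
grids, power = np.random.rand(200, 4, 8, 8), np.random.rand(200)
booster = lgb.LGBMRegressor(n_estimators=100).fit(extract_features(grids), power)
preds = booster.predict(extract_features(grids[:5]))
```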
Intensive Care Units usually carry patients with a serious risk of mortality. Recent research has shown the ability of Machine Learning to indicate the patients' mortality risk and point physicians toward individuals with a heightened need for care. Nevertheless, healthcare data is often subject to privacy regulations and can therefore not be easily shared in order to build Centralized Machine Learning models that use the combined data of multiple hospitals. Federated Learning is a Machine Learning framework designed for data privacy that can be used to circumvent this problem. In this study, we evaluate the ability of deep Federated Learning to predict the risk of Intensive Care Unit mortality at an early stage. We compare the predictive performance of Federated, Centralized, and Local Machine Learning in terms of AUPRC, F1-score, and AUROC. Our results show that Federated Learning performs equally well as the centralized approach and is substantially better than the local approach, thus providing a viable solution for early Intensive Care Unit mortality prediction. In addition, we show that the prediction performance is higher when the patient history window is closer to discharge or death. Finally, we show that using the F1-score as an early stopping metric can stabilize and increase the performance of our approach for the task at hand.
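The aggregation step underlying this kind of federated setup can be sketched as follows (the model and per-site training are placeholders; the paper's exact protocol may differ): each hospital trains locally, and only model weights, never patient data, are averaged.

```python
# Hedged sketch of one federated averaging (FedAvg-style) aggregation round.
import copy
import torch
import torch.nn as nn

def federated_average(local_models: list[nn.Module]) -> nn.Module:
    global_model = copy.deepcopy(local_models[0])
    with torch.no_grad():
        for name, param in global_model.named_parameters():
            stacked = torch.stack(
                [dict(m.named_parameters())[name] for m in local_models]
            )
            param.copy_(stacked.mean(dim=0))   # average weights across sites
    return global_model

# Example: three "hospitals" each hold a locally trained mortality classifier.
hospitals = [nn.Linear(40, 1) for _ in range(3)]
global_model = federated_average(hospitals)
```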
Burnout is a significant public health concern affecting nearly half of the healthcare workforce. This paper presents the first end-to-end deep learning framework for predicting physician burnout based on electronic health record (EHR) activity logs, the digital traces of physician work activities that are available in any EHR system. In contrast to prior approaches that rely solely on surveys for burnout measurement, our framework directly learns deep representations of physician behavior from large-scale clinician activity logs to predict burnout. We propose HiPAL, Hierarchical burnout Prediction based on Activity Logs, featuring a pre-trained time-dependent activity embedding mechanism tailored for activity logs and a hierarchical predictive model that mirrors the natural hierarchical structure of clinician activity logs and captures the evolution of physicians' burnout risk at both short-term and long-term levels. To utilize the large amount of unlabeled activity logs, we propose a semi-supervised framework that learns to transfer knowledge extracted from unlabeled clinician activities to the HiPAL-based prediction model. Experiments on 15 million clinician activity logs collected from the EHR demonstrate the advantages of our proposed framework over state-of-the-art approaches in predictive performance for physician burnout and in training efficiency.
Predicting the health risks of patients using Electronic Health Records (EHR) has attracted considerable attention in recent years, especially with the development of deep learning techniques. Health risk refers to the probability of the occurrence of a specific health outcome for a specific patient. The predicted risks can be used to support decision-making by healthcare professionals. EHRs are structured patient journey data. Each patient journey contains a chronological set of clinical events, and within each clinical event, there is a set of clinical/medical activities. Due to variations of patient conditions and treatment needs, EHR patient journey data has an inherently high degree of missingness that contains important information affecting relationships among variables, including time. Existing deep learning-based models generate imputed values for missing values when learning the relationships. However, imputed data in EHR patient journey data may distort the clinical meaning of the original EHR patient journey data, resulting in classification bias. This paper proposes a novel end-to-end approach to modeling EHR patient journey data with Integrated Convolutional and Recurrent Neural Networks. Our model can capture both long- and short-term temporal patterns within each patient journey and effectively handle the high degree of missingness in EHR data without any imputation data generation. Extensive experimental results using the proposed model on two real-world datasets demonstrate robust performance as well as superior prediction accuracy compared to existing state-of-the-art imputation-based prediction methods.
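An illustrative sketch of handling missingness without imputation (the masking scheme and feature sizes are assumptions, not the paper's architecture): observed values are paired with a missingness mask and missing entries are zeroed rather than filled in, so no imputed data is ever generated.

```python
# Hypothetical mask-aware recurrent model: the mask itself is an input feature.
import torch
import torch.nn as nn

class MaskAwareGRU(nn.Module):
    def __init__(self, n_features=30, hidden=64, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(2 * n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, values: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # values/mask: (batch, visits, n_features); mask is 1 where observed, 0 where missing
        x = torch.cat([values * mask, mask], dim=-1)   # no imputed values are generated
        _, h_n = self.gru(x)
        return self.head(h_n[-1])
```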
In recent years, the size and frequency of wildfires in the western United States have increased significantly. On high-fire-risk days, a small fire ignition can rapidly grow and get out of control. Early detection of fire ignitions from initial smoke can assist the response to such fires before they become difficult to manage. Past deep learning approaches for wildfire smoke detection have suffered from small or unreliable datasets that make it difficult to extrapolate performance to real-world scenarios. In this work, we present the Fire Ignition Library (FIgLib), a publicly available dataset of nearly 25,000 labeled wildfire smoke images as seen from fixed-view cameras deployed in Southern California. We also introduce SmokeyNet, a novel deep learning architecture using spatiotemporal information from camera imagery for real-time wildfire smoke detection. When trained on the FIgLib dataset, SmokeyNet outperforms comparable baselines and rivals human performance. We hope that the availability of the FIgLib dataset and the SmokeyNet architecture will inspire further research into deep learning methods for wildfire smoke detection, leading to automated notification systems that reduce the time to wildfire response.
Building health prediction models based on electronic health records (EHR) has become an active research area. EHR patient journey data consists of a patient's regularly recorded clinical events/visits. Most existing studies focus on modeling long-term dependencies between visits without explicitly considering short-term correlations between consecutive visits, where irregular time intervals, incorporated as auxiliary information, are fed into health prediction models to capture latent progressive patterns of patient journeys. We propose a novel deep neural network with four modules to account for the contributions of various variables to health prediction: i) the stacked attention module strengthens the deep semantics of clinical events within each patient journey and generates visit embeddings, ii) the short-term temporal attention module models short-term correlations between consecutive visit embeddings while capturing the influence of time intervals within these visit embeddings, iii) the long-term temporal attention module models long-term dependencies while capturing the influence of time intervals within these visit embeddings, and iv) finally, the coupled attention module adaptively combines the outputs of the short-term and long-term temporal attention modules to make health predictions. Experimental results on MIMIC-III demonstrate the superior prediction accuracy of our model compared with existing state-of-the-art methods, as well as the interpretability and robustness of the approach. In addition, we find that modeling short-term correlations contributes to the generation of local priors and thereby improves the predictive modeling of patient journeys.
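As a toy sketch of the short-term temporal attention idea in module ii) (the decay form and dimensions are assumptions of mine, not the paper's equations): attention over a few consecutive visit embeddings, down-weighted by the elapsed time between visits.

```python
# Illustrative time-aware attention over consecutive visit embeddings.
import torch
import torch.nn as nn

class ShortTermTemporalAttention(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, visits: torch.Tensor, intervals: torch.Tensor) -> torch.Tensor:
        # visits: (batch, n_visits, dim); intervals: (batch, n_visits) days since previous visit
        scores = self.query(visits[:, -1:]) @ self.key(visits).transpose(1, 2)  # (batch, 1, n)
        scores = scores / visits.size(-1) ** 0.5 - intervals.unsqueeze(1)        # time decay
        weights = torch.softmax(scores, dim=-1)
        return (weights @ visits).squeeze(1)   # context vector for the latest visit
```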
Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters; (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy; finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results. Our code and models are available at http://www.robots.ox.ac.uk/~vgg/software/two_stream_action
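Finding (i) can be sketched as follows (the channel size and the 1x1-convolution choice are assumptions for illustration): the spatial- and temporal-stream feature maps are concatenated along channels and reduced with a learned projection, instead of averaging softmax scores from the two streams.

```python
# Illustrative conv-layer fusion of two-stream feature maps.
import torch
import torch.nn as nn

class ConvFusion(nn.Module):
    def __init__(self, channels: int = 512):
        super().__init__()
        # learned fusion replaces averaging the two streams' softmax scores
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, spatial_map: torch.Tensor, temporal_map: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([spatial_map, temporal_map], dim=1)  # (B, 2C, H, W)
        return self.project(fused)                             # (B, C, H, W)

fused = ConvFusion()(torch.randn(2, 512, 14, 14), torch.randn(2, 512, 14, 14))
```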
In videos, human actions are three-dimensional (3D) signals, and such videos capture spatiotemporal knowledge of human behavior. Promising capabilities have been investigated using 3D convolutional neural networks (CNNs). However, 3D CNNs have not yet achieved the high performance of their well-established two-dimensional (2D) counterparts on still images. The memory cost of 3D convolutions and the training difficulty of spatiotemporal fusion have prevented 3D CNNs from achieving outstanding results. In this paper, we implement a hybrid deep learning architecture that combines STIP and 3D CNN features to effectively enhance the performance on 3D videos. After implementation, more detailed and deeper feature maps are trained in each spatiotemporal fusion stage, and the trained model further enhances the results after the model's complex evaluation. A video classification model is used in this implementation. An intelligent 3D network protocol for multimedia data classification using deep learning is introduced to further learn spatiotemporal associations in human actions. The performance of the proposed hybrid technique is evaluated on the well-known UCF101 dataset, where it substantially outperforms the original 3D CNN. The results are compared with state-of-the-art frameworks from the literature for action recognition on UCF101, with an accuracy of 95%.