Late-life depression (LLD) is a highly prevalent mood disorder occurring in older adults and is frequently accompanied by cognitive impairment (CI). Studies have shown that LLD may increase the risk of Alzheimer's disease (AD). However, the heterogeneous presentation of geriatric depression suggests that multiple biological mechanisms may underlie it. Current biological research on LLD progression incorporates machine learning that combines neuroimaging data with clinical observations. Few studies, however, have examined incident cognitive diagnostic outcomes in LLD based on structural MRI (sMRI). In this paper, we describe the development of a hybrid representation learning (HRL) framework for predicting cognitive diagnosis over 5 years based on T1-weighted sMRI data. Specifically, we first extract prediction-oriented MRI features via a deep neural network, and then integrate them with handcrafted MRI features via a Transformer encoder for cognitive diagnosis prediction. Two tasks are investigated in this work: (1) distinguishing cognitively normal subjects with LLD from never-depressed older healthy subjects, and (2) distinguishing LLD subjects who developed CI (or even AD) from those who remained cognitively normal over five years. To the best of our knowledge, this is among the first attempts to study the complex heterogeneous progression of LLD based on task-oriented and handcrafted MRI features. We validate the proposed HRL on 294 subjects with T1-weighted MRIs from two clinically harmonized studies. Experimental results suggest that the HRL outperforms several classical machine learning and state-of-the-art deep learning methods in LLD identification and prediction tasks.
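As a loose illustration of the hybrid idea described above (task-oriented deep features fused with handcrafted features by a Transformer-style encoder), the numpy sketch below treats the two feature vectors as a two-token sequence and mixes them with a single self-attention layer. The single-layer design and all names are illustrative assumptions, not the authors' HRL implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_features(deep_feat, handcrafted_feat, Wq, Wk, Wv):
    """Treat the two feature vectors as a 2-token sequence and mix them
    with one self-attention layer, a stand-in for a Transformer encoder."""
    tokens = np.stack([deep_feat, handcrafted_feat])   # (2, d)
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))     # (2, 2) attention map
    return (attn @ v).mean(axis=0)                     # pooled fused vector
```

A downstream classifier would then operate on the pooled fused vector.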
Autonomous robotic surgery has advanced significantly based on analysis of visual and temporal cues in surgical workflow, but relational cues from domain knowledge remain underexplored. Complex relations in surgical annotations can be divided into intra- and inter-relations, both of which are valuable for autonomous systems to comprehend surgical workflows. Intra- and inter-relations describe the relevance of various categories within a particular annotation type and the relevance of different annotation types, respectively. This paper aims to systematically investigate the importance of relational cues in surgery. First, we contribute the RLLS12M dataset, a large-scale collection of robotic left lateral sectionectomy (RLLS), by curating 50 videos of 50 patients operated on by 5 surgeons and annotating a hierarchical workflow, which consists of 3 inter- and 6 intra-relations, 6 steps, 15 tasks, and 38 activities represented as triplets of 11 instruments, 8 actions, and 16 objects, totaling 2,113,510 video frames and 12,681,060 annotation entities. Correspondingly, we propose a multi-relation purification hybrid network (MURPHY), which aptly incorporates novel relation modules to augment the feature representation by purifying relational features using the intra- and inter-relations embodied in annotations. The intra-relation module leverages an R-GCN to implant visual features in different graph relations, which are aggregated using a targeted relation purification with affinity information measuring label consistency and feature similarity. The inter-relation module is motivated by attention mechanisms to regularize the influence of relational features based on the hierarchy of annotation types from the domain knowledge. Extensive experimental results on the curated RLLS dataset confirm the effectiveness of our approach, demonstrating that relations matter in surgical workflow analysis.
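The affinity-weighted purification step, combining label consistency and feature similarity, can be sketched in a few lines of numpy. This is a hypothetical reduction of the idea (a single node aggregating its graph neighbours), not MURPHY's actual relation module; `alpha` and the softmax weighting are assumptions.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def purify(target_feat, target_label, neigh_feats, neigh_labels, alpha=0.5):
    """Aggregate neighbour features weighted by an affinity score that mixes
    label consistency (same class -> 1) and cosine feature similarity."""
    aff = np.array([
        alpha * (nl == target_label) + (1 - alpha) * cosine(target_feat, nf)
        for nf, nl in zip(neigh_feats, neigh_labels)
    ])
    w = np.exp(aff) / np.exp(aff).sum()            # normalise affinities
    return target_feat + (w[:, None] * neigh_feats).sum(axis=0)
```

Neighbours sharing the target's label and appearance thus contribute more to the purified feature.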
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
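Patch-based training, the most common answer to oversized samples above, reduces to a sliding-window extraction step over each image. The minimal numpy sketch below is generic and not tied to any specific challenge pipeline.

```python
import numpy as np

def extract_patches(img, patch, stride):
    """Slide a square window over a 2-D image and stack the patches,
    so each patch (rather than the whole image) fits in memory."""
    H, W = img.shape
    out = []
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            out.append(img[r:r + patch, c:c + patch])
    return np.stack(out)
```

At inference time, per-patch predictions are typically stitched back (often with overlap averaging) to recover a full-image result.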
Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The proposed pre-training models of different modalities are showing a rising trend of homogeneity in their model structures, which brings the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into 5 components: embedding, encoder, target embedding, decoder, and target. As almost all common modules are provided in each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
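The component-registry pattern behind such a modular design can be sketched in plain Python. The registries and module names below are hypothetical placeholders covering three of the five components, not the actual TencentPretrain API.

```python
# Hypothetical registries; one dict per component, mapping a module
# name to a callable stage. Real toolkits register classes, not lambdas.
EMBEDDINGS = {"word": lambda x: f"emb({x})"}
ENCODERS   = {"transformer": lambda x: f"enc({x})"}
TARGETS    = {"mlm": lambda x: f"mlm({x})"}

def build_model(embedding, encoder, target):
    """Compose a pipeline by picking one module from each component registry."""
    stages = (EMBEDDINGS[embedding], ENCODERS[encoder], TARGETS[target])
    def model(x):
        for stage in stages:
            x = stage(x)
        return x
    return model
```

Swapping one registry entry (say, a different encoder) yields a new model without touching the other components, which is what makes reproducing or remixing pre-training models cheap.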
Non-Pharmaceutical Interventions (NPIs), such as social gathering restrictions, have proven effective in slowing the transmission of COVID-19 by reducing person-to-person contact. To support policy-makers, multiple studies have first modeled human mobility via macro indicators (e.g., average daily travel distance) and then studied the effectiveness of NPIs. In this work, we focus on mobility modeling and, from a micro perspective, aim to predict locations that will be visited by COVID-19 cases. Since NPIs generally cause economic and societal losses, such micro-perspective predictions benefit governments when designing and evaluating them. However, in real-world situations, strict privacy data protection regulations result in severe data sparsity problems (i.e., limited case and location information). To address these challenges, we formulate micro-perspective mobility modeling as computing the relevance score between a diffusion and a location, conditional on a geometric graph. We propose a model named Deep Graph Diffusion Infomax (DGDI), which jointly models variables including a geometric graph, a set of diffusions, and a set of locations. To facilitate the research of COVID-19 prediction, we present two benchmarks that contain geometric graphs and location histories of COVID-19 cases. Extensive experiments on the two benchmarks show that DGDI significantly outperforms other competing methods.
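Scoring a diffusion against candidate locations could, in the simplest case, be a dot product between learned embeddings followed by a softmax. This is only a toy stand-in for the relevance score named above; DGDI's actual conditioning on the geometric graph is not modeled here.

```python
import numpy as np

def relevance(diffusion_emb, location_embs):
    """Score each candidate location against a diffusion embedding and
    return a probability distribution over locations (softmax of dots)."""
    logits = location_embs @ diffusion_emb
    e = np.exp(logits - logits.max())     # stabilised softmax
    return e / e.sum()
```

The highest-probability locations would be the predicted next visits for the cases in that diffusion.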
State-of-the-art industrial inference engines (e.g., FasterTransformer and TurboTransformers) have verified that half-precision floating point (FP16) and 8-bit integer (INT8) quantization can greatly improve model inference speed. However, existing FP16 or INT8 quantization methods are too complex, and improper usage will significantly harm performance. In this paper, we develop a toolkit for users to easily quantize their models for inference, in which Self-Adaptive Mixed-Precision (SAMP) is proposed to automatically control the quantization ratio through a mixed-precision architecture to balance efficiency and performance. Experimental results show that our SAMP toolkit achieves higher speedup than PyTorch and FasterTransformer while ensuring the required performance. In addition, SAMP is based on a modular design that decouples the tokenizer, embedding, encoder, and target layers, which allows users to handle various downstream tasks, and it can be seamlessly integrated into PyTorch.
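A toy illustration of the INT8 side of mixed-precision inference: symmetric per-tensor quantization plus a naive per-tensor precision choice driven by round-trip error. This is a generic sketch under common conventions, not SAMP's actual quantization policy.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: int8 values plus a scale."""
    scale = max(np.abs(x).max() / 127.0, 1e-12)   # guard against all-zero tensors
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

def choose_precision(x, tol):
    """Fall back to FP16 when the INT8 round-trip error exceeds a tolerance."""
    q, scale = quantize_int8(x)
    err = np.max(np.abs(dequantize(q, scale) - x))
    return "int8" if err <= tol else "fp16"
```

An adaptive scheme in this spirit would apply such a test per layer (or per block) so that error-sensitive layers stay in higher precision.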
Recent years have witnessed a trend of applying context frames to boost the performance of object detection as video object detection. Existing methods usually aggregate features in a one-shot manner to enhance the target-frame features. These methods, however, usually lack spatial information from neighboring frames and suffer from insufficient feature aggregation. To address these issues, we take a progressive way to introduce both temporal information and spatial information for an integrated enhancement. The temporal information is introduced by a temporal feature aggregation model (TFAM), which conducts an attention mechanism between the context frames and the target frame (i.e., the frame to be detected). Meanwhile, we adopt a spatial transition awareness model (STAM) to convey the location transition information between each context frame and the target frame. Built upon the transformer-based detector DETR, our PTSEFormer also follows an end-to-end fashion to avoid heavy post-processing procedures, while achieving 88.1% mAP on the ImageNet VID dataset. Code is available at https://github.com/hon-wong/ptseformer.
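The attention between context frames and the target frame can be caricatured as cross-attention with the target as the query. This is a rough, single-vector-per-frame sketch of the aggregation idea, not the TFAM module itself.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_aggregate(target_feat, context_feats):
    """Cross-attention with the target frame as the query and context
    frames as keys/values; similar context frames contribute more."""
    scores = context_feats @ target_feat / np.sqrt(target_feat.size)
    w = softmax(scores)
    return target_feat + w @ context_feats   # residual-style enhancement
```

A progressive design would then repeat such enhancement while also folding in spatial transition cues between frames.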
Phase recognition plays an essential role in surgical workflow analysis for computer-assisted intervention. Transformers, originally proposed for sequential data modeling in natural language processing, have been successfully applied to surgical phase recognition. Existing transformer-based works mainly focus on modeling attention dependency, without introducing auto-regression. In this work, an auto-regressive surgical transformer (referred to as ARST) is first proposed for online surgical phase recognition from laparoscopic videos, modeling the correlation between phases implicitly via a conditional probability distribution. To reduce inference bias and enhance phase consistency, we further develop a consistency-constraint inference strategy based on auto-regression. We conduct comprehensive validation on the well-known public dataset Cholec80. Experimental results show that our method outperforms state-of-the-art methods both quantitatively and qualitatively, and achieves an inference rate of 66 frames per second (fps).
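A minimal sketch of conditioning each frame's phase prediction on the previously predicted phase: combine the visual logits with a log-transition prior and decode greedily online. The transition matrix and greedy rule are illustrative assumptions, not ARST's formulation.

```python
import numpy as np

def autoregressive_decode(frame_logits, transition):
    """Greedy online decoding: each frame's score combines visual logits
    with a log-transition prior conditioned on the previous prediction,
    which suppresses implausible one-frame phase flickers."""
    phases, prev = [], None
    for logits in frame_logits:
        scores = logits.copy()
        if prev is not None:
            scores += np.log(transition[prev] + 1e-8)
        prev = int(np.argmax(scores))
        phases.append(prev)
    return phases
```

With a transition matrix that strongly favours staying in the current phase, weak contradictory evidence in a single frame is ignored, while strong evidence still triggers a phase change.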
Pretraining molecular representation models without labels is fundamental to various applications. Conventional methods mainly process 2D molecular graphs and focus solely on 2D tasks, making their pretrained models unable to characterize 3D geometry and therefore defective for downstream 3D tasks. In this work, we tackle 3D molecular pretraining in a complete and novel sense. In particular, we first propose to adopt an energy-based model as the backbone for pretraining, which enjoys the merit of fulfilling the symmetry of 3D space. Then we develop a node-level pretraining loss for force prediction, where we further exploit the Riemann-Gaussian distribution to ensure the loss to be E(3)-invariant, enabling more robustness. Moreover, a graph-level noise scale prediction task is also leveraged to further promote the eventual performance. We evaluate the model pretrained on the large-scale 3D dataset GEOM-QM9 on two challenging 3D benchmarks: MD17 and QM9. Experimental results support the better efficacy of our method against current state-of-the-art pretraining approaches, and verify the validity of our design.
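The E(3)-invariance property at stake can be checked on a toy descriptor: any quantity built from pairwise distances is unchanged by rotations, reflections, and translations. The sketch below only demonstrates that property; it is unrelated to the authors' actual Riemann-Gaussian loss.

```python
import numpy as np

def pairwise_dist_features(coords):
    """E(3)-invariant descriptor: the sorted vector of pairwise distances."""
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    return np.sort(d[np.triu_indices(len(coords), k=1)])

def random_rotation(rng):
    # QR of a Gaussian matrix yields a (possibly improper) orthogonal matrix,
    # i.e. a random rotation or roto-reflection.
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return q
```

Building a loss from such invariant quantities guarantees the training signal does not depend on the molecule's arbitrary pose in space.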
Accurate automated analysis of electroencephalography (EEG) would largely help clinicians effectively monitor and diagnose patients with various brain diseases. Compared to supervised learning with labeled disease EEG data, which can train a model to analyze specific diseases but would fail to monitor previously unseen statuses, anomaly detection based only on normal EEGs can detect any potential anomaly in new EEGs. Different from existing anomaly detection strategies, which do not consider any property of unavailable abnormal data during model development, a task-oriented self-supervised learning approach is proposed here, which makes use of available normal EEGs and expert knowledge about abnormal EEGs to train a more effective feature extractor for the subsequent development of an anomaly detector. In addition, a specific two-branch convolutional neural network with larger kernels is designed as the feature extractor, so that it can more easily extract both larger-scale and small-scale features which often appear in unavailable abnormal EEGs. As demonstrated on three EEG datasets, the effectively designed and trained feature extractor can extract better feature representations from normal EEGs for the development of anomaly detectors and for subsequent anomaly detection in new EEGs. The code is available at https://github.com/irining/eeg-ad.
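The two-branch, two-kernel-scale idea can be caricatured in 1-D numpy: run the same signal through a small-kernel and a large-kernel branch and pool each. This is a toy sketch of the design motivation, not the authors' network.

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """Plain valid-mode 1-D convolution (cross-correlation form)."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

def two_branch_features(signal, small_kernel, large_kernel):
    """Concatenate pooled responses from a small-kernel branch (fine,
    small-scale patterns) and a large-kernel branch (slow, large-scale
    patterns), echoing the two-branch extractor idea."""
    a = conv1d_valid(signal, small_kernel)
    b = conv1d_valid(signal, large_kernel)
    return np.array([a.mean(), a.max(), b.mean(), b.max()])
```

The large-kernel branch responds to slow waveform changes that a small receptive field would miss, which is the stated motivation for the design.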