Over the years, machine learning models have been successfully employed on neuroimaging data to accurately predict brain age. Deviations from the healthy brain aging pattern are associated with accelerated brain aging and brain abnormalities. Hence, efficient and accurate diagnostic techniques are required for eliciting accurate brain age estimations. Several contributions have been reported in the past for this purpose, resorting to different data-driven modeling methods. Recently, deep neural networks (also referred to as deep learning) have become prevalent in manifold neuroimaging studies, including brain age estimation. In this review, we offer a comprehensive analysis of the literature on the adoption of deep learning for brain age estimation with neuroimaging data. We detail and analyze the different deep learning architectures used for this application, dwelling on the research works published to date that quantitatively explore their application. We also examine the different brain age estimation frameworks, comparatively exposing their advantages and weaknesses. Finally, the review concludes with an outlook towards future directions that prospective studies should follow. The ultimate goal of this paper is to establish a common and informed reference for newcomers and experienced researchers wishing to approach brain age estimation using deep learning models.
This paper proposes a novel pixel interval down-sampling network (PID-Net) for dense tiny object (yeast cell) counting tasks with higher accuracy. PID-Net is an end-to-end convolutional neural network (CNN) model with an encoder-decoder architecture. The pixel interval down-sampling operations are concatenated with max-pooling operations to combine sparse and dense features. This addresses the limitation of contour conglutination of dense objects while counting. The evaluation uses classical segmentation metrics (Dice, Jaccard, and Hausdorff distance) as well as counting metrics. Experimental results show that the proposed PID-Net has the best performance and potential for dense tiny object counting tasks, achieving 96.97% counting accuracy on a dataset of 2448 yeast cell images. Compared with state-of-the-art approaches such as Attention U-Net, Swin U-Net, and Trans U-Net, the proposed PID-Net can segment dense tiny objects with clearer boundaries and fewer incorrect debris, which indicates the great potential of PID-Net in accurate counting tasks.
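The pixel-interval operation can be illustrated with a minimal NumPy sketch (our own illustration, not the authors' code; the actual PID-Net applies this to multi-channel feature maps inside the encoder):

```python
import numpy as np

def pixel_interval_downsample(x, stride=2):
    """Split a 2D feature map into stride*stride interleaved sub-maps.

    Unlike max-pooling, no pixel is discarded: every input pixel lands in
    exactly one sub-map, which helps preserve thin contours between
    touching (conglutinated) objects.
    """
    h, w = x.shape
    assert h % stride == 0 and w % stride == 0
    # One sub-map per (row offset, column offset) pair.
    return np.stack([x[i::stride, j::stride]
                     for i in range(stride)
                     for j in range(stride)])

x = np.arange(16).reshape(4, 4)
subs = pixel_interval_downsample(x)
print(subs.shape)  # (4, 2, 2)
```

In the network, these sub-maps are concatenated with the max-pooled features so that both sparse and dense information flow to the decoder.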
It is valuable to achieve domain adaptation that transfers knowledge learned from a labeled CT dataset to a target unlabeled MR dataset for abdominal multi-organ segmentation. Meanwhile, it is highly desirable to avoid the high annotation cost of the target dataset and to protect the privacy of the source dataset. Therefore, we propose an effective source-free unsupervised domain adaptation method for cross-modality abdominal multi-organ segmentation without access to the source dataset. The proposed framework proceeds in two stages. In the first stage, a feature-map statistics loss is used to align the distributions of the source and target features in the top segmentation network, and an entropy minimization loss is used to encourage high-confidence segmentations. Pseudo-labels output by the top segmentation network are used to guide a style compensation network to generate source-like images. Pseudo-labels output by the middle segmentation network are used to supervise the learning of the desired model (the bottom segmentation network). In the second stage, circular learning and pixel-adaptive mask refinement are used to further improve the performance of the desired model. With this approach, we achieve satisfactory performance in the segmentation of the liver, right kidney, left kidney, and spleen, with Dice similarity coefficients of 0.884, 0.891, 0.864, and 0.911, respectively. In addition, the proposed method can be easily extended to the situation where target annotated data exist. The performance increases from 0.888 to 0.922 in average Dice similarity coefficient, close to supervised learning (0.929), with only one labeled MR volume.
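The two first-stage losses can be sketched compactly; the following is a hypothetical NumPy rendering of the general idea (the paper's exact formulation, weighting, and choice of statistics may differ):

```python
import numpy as np

def entropy_minimization_loss(probs, eps=1e-8):
    """Encourage confident (low-entropy) class posteriors.

    probs: (N, C) softmax outputs flattened over pixels.
    """
    return float(-np.mean(np.sum(probs * np.log(probs + eps), axis=1)))

def feature_stat_loss(feat, src_mean, src_var):
    """Align target batch statistics with stored source-domain statistics,
    so no source images are needed (source-free adaptation).

    feat: (N, D) target features; src_mean/src_var: (D,) source statistics.
    """
    return float(np.abs(feat.mean(axis=0) - src_mean).sum()
                 + np.abs(feat.var(axis=0) - src_var).sum())
```

Minimizing the first loss pushes predictions away from the uniform distribution; the second vanishes when the target batch statistics match the stored source statistics.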
Schizophrenia (SZ) is a mental disorder in which, due to the secretion of particular chemicals in the brain, the function of some brain regions falls out of balance, leading to a lack of coordination between thoughts, actions, and emotions. This study provides various intelligent deep learning (DL)-based methods for automated SZ diagnosis via electroencephalography (EEG) signals. The obtained results are compared with those of conventional intelligent methods. To implement the proposed methods, the dataset of the Institute of Psychiatry and Neurology in Warsaw, Poland, has been used. First, EEG signals were divided into 25-second time frames and then normalized by z-score or L2 norm. In the classification step, two different approaches are considered for SZ diagnosis via EEG signals. In this step, the classification of EEG signals is first carried out by traditional machine learning methods, e.g., support vector machine, k-nearest neighbors, decision tree, naïve Bayes, random forest, extremely randomized trees, and bagging. Then, various proposed DL models are applied, namely long short-term memories (LSTMs), one-dimensional convolutional networks (1D-CNNs), and 1D-CNN-LSTMs. In this step, the DL models are implemented and compared with different activation functions. Among the proposed DL models, the CNN-LSTM architecture has the best performance. In this architecture, the ReLU activation function is used together with the combined z-score and L2 normalization. The proposed CNN-LSTM model achieves an accuracy of 99.25%, better than the results of most former studies in this field. It is worth mentioning that, to perform all simulations, the k-fold cross-validation method with k = 5 has been used.
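The pre-processing step (25-second frames normalized by z-score or L2 norm) is straightforward to sketch; the sampling rate below is our own assumption for illustration:

```python
import numpy as np

def zscore(frame):
    # Zero mean, unit variance per frame.
    return (frame - frame.mean()) / frame.std()

def l2_normalize(frame):
    # Unit Euclidean norm per frame.
    return frame / np.linalg.norm(frame)

fs = 250                              # assumed sampling rate (Hz)
frame = np.random.randn(25 * fs)      # one 25-second single-channel frame
z, l2 = zscore(frame), l2_normalize(frame)
```

The paper's combined variant applies both normalizations; either way, each frame is scaled independently before being fed to the classifiers.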
The outbreak of COVID-19 (coronavirus disease 2019) has changed the world. According to the World Health Organization (WHO), there have been more than 100 million confirmed COVID-19 cases, including more than 2.4 million deaths. Early detection of the disease is extremely important, and the use of medical imaging such as chest X-ray (CXR) and chest computed tomography (CCT) has proved to be an excellent solution. However, this process requires clinicians to perform a manual and time-consuming task, which is not ideal when trying to speed up diagnosis. In this work, we propose an ensemble classifier based on probabilistic support vector machines (SVMs) to identify pneumonia patterns while providing information about the reliability of the classification. Specifically, each CCT scan is divided into cubic patches, and the features contained in each of them are extracted by applying kernel PCA. The use of base classifiers within an ensemble allows our system to identify pneumonia patterns regardless of their size or location. The decisions of the individual patches are then combined into a global one according to the reliability of each individual classification: the lower the uncertainty, the higher the contribution. Performance is evaluated in a real scenario, yielding an accuracy of 97.86%. The large performance obtained and the simplicity of the system (using deep learning on CCT images would incur a huge computational cost) evidence the applicability of our proposal in a real-world environment.
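The uncertainty-weighted aggregation rule ("the lower the uncertainty, the higher the contribution") might look like the following sketch (our own illustration; the paper's exact weighting scheme may differ):

```python
import numpy as np

def combine_patch_decisions(patch_probs):
    """Combine per-patch P(pneumonia) estimates into a scan-level score.

    Each patch is weighted by its confidence: 1 minus the binary entropy
    of the patch posterior, so maximally uncertain patches contribute
    nothing.
    """
    p = np.clip(np.asarray(patch_probs, dtype=float), 1e-8, 1 - 1e-8)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    weights = 1.0 - entropy           # confident patches weigh more
    if weights.sum() == 0:            # all patches maximally uncertain
        return float(p.mean())
    return float(np.average(p, weights=weights))
```

With one highly confident positive patch and one uninformative one, the confident patch dominates the scan-level decision.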
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
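Finding (1), token-relation distillation, can be sketched as matching teacher and student token-similarity maps; this is a hypothetical minimal rendering of the idea, not the released TinyMIM code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_relation_kl(teacher_tokens, student_tokens, tau=1.0, eps=1e-8):
    """Distill token-to-token relations instead of raw features.

    Each (L, D) token matrix is turned into an (L, L) similarity map,
    and the student map is pulled towards the teacher map via KL.
    """
    def relation(tokens):
        sim = tokens @ tokens.T / np.sqrt(tokens.shape[1])
        return softmax(sim / tau)
    p, q = relation(teacher_tokens), relation(student_tokens)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q + eps)), axis=-1)))
```

Because the loss lives on the L x L relation map rather than on features, the teacher and student need not share an embedding dimension.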
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
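NAIVEATTACK's trigger injection can be illustrated with a toy sketch (a hypothetical white-square trigger of our own choosing; the paper evaluates its own trigger designs, and DOORPING additionally re-optimizes the trigger throughout distillation):

```python
import numpy as np

def add_trigger(images, size=4, value=1.0):
    """Stamp a size x size square into the bottom-right corner of each
    image in the raw training set, before distillation starts.

    images: (N, H, W) array of grayscale images.
    """
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = value
    return poisoned

clean = np.zeros((2, 32, 32))
poisoned = add_trigger(clean)
```

The key point of the attack is *where* this happens: the trigger is planted in the data fed to the distillation procedure, so the backdoor survives into the small synthetic dataset and into any model trained on it.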
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression task, mirroring the easy-to-hard character of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the results indicate that PMT-IQA outperforms the comparison approaches and that both the MS and PMT modules improve the model's performance.
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that, by doing so, one of the largest state-of-the-art models (GPT-3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, with little precedent in the literature, even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT-3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.