Automatic diabetic retinopathy (DR) grading based on fundus photography has been widely explored to support routine screening and early treatment. Existing research generally focuses on single-field fundus images, which have a limited field of view for precise eye examinations. In clinical applications, ophthalmologists adopt two-field fundus photography as the dominant tool, where the information from the two fields (i.e., macula-centric and optic-disc-centric) is highly correlated and complementary and supports comprehensive decisions. However, automatic DR grading based on two-field fundus photography remains a challenging task due to the lack of publicly available datasets and effective fusion strategies. In this work, we first construct a new benchmark dataset (DRTiD) for DR grading, consisting of 3,100 two-field fundus images. To the best of our knowledge, it is the largest public DR dataset with diverse and high-quality two-field images. Then, we propose a novel DR grading approach, namely the Cross-Field Transformer (CrossFiT), to capture the correspondence between the two fields as well as the long-range spatial correlations within each field. Considering the inherent two-field geometric constraints, we define aligned position embeddings to preserve relatively consistent positions in the fundus. Besides, we perform masked cross-field attention during interaction to filter out noisy relations between fields. Extensive experiments on our DRTiD dataset and the public DeepDRiD dataset demonstrate the effectiveness of our CrossFiT network. The new dataset and the source code of CrossFiT will be publicly available at https://github.com/FDU-VTS/DRTiD.
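As a concrete illustration of the idea, the sketch below shows one plausible form of masked cross-field attention in PyTorch, where tokens from one fundus field query the other and the weakest cross-field relations are masked out before softmax. The function name, the shared key/value projection, and the top-k masking rule are assumptions made for illustration, not the authors' exact design.

```python
import torch

def masked_cross_field_attention(q_field, kv_field, num_heads=8, keep_ratio=0.5):
    """Cross-attention from one fundus field to the other, keeping only the
    strongest relations per query (a guess at the paper's masking rule).

    q_field:  (B, Nq, C) tokens from, e.g., the macula-centric image
    kv_field: (B, Nk, C) tokens from the optic-disc-centric image
    """
    B, Nq, C = q_field.shape
    Nk = kv_field.shape[1]
    d = C // num_heads
    q = q_field.reshape(B, Nq, num_heads, d).transpose(1, 2)   # (B, H, Nq, d)
    k = kv_field.reshape(B, Nk, num_heads, d).transpose(1, 2)  # (B, H, Nk, d)
    v = k.clone()  # for brevity, value shares the key projection here

    attn = (q @ k.transpose(-2, -1)) / d ** 0.5                # (B, H, Nq, Nk)
    # Mask out the weakest cross-field relations before softmax.
    k_keep = max(1, int(Nk * keep_ratio))
    thresh = attn.topk(k_keep, dim=-1).values[..., -1:]        # per-query cutoff
    attn = attn.masked_fill(attn < thresh, float('-inf'))
    attn = attn.softmax(dim=-1)
    return (attn @ v).transpose(1, 2).reshape(B, Nq, C)
```

For instance, `masked_cross_field_attention(torch.randn(2, 196, 256), torch.randn(2, 196, 256))` fuses two 14x14 patch grids while ignoring the half of the cross-field relations with the lowest scores.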
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning for color fundus photography (CFP) analysis is also evolving. Although there are some open-source, labeled CFP datasets in the ophthalmology community, large-scale datasets for screening carry only disease-category labels, and datasets with annotations of fundus structures are usually small. In addition, labeling standards are not uniform across datasets, and there is often no clear information on the acquisition device. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis from an original challenge -- the Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2,000 color fundus images with annotations for glaucoma classification, optic disc/cup segmentation, and fovea localization. Meanwhile, the REFUGE2 challenge sets three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Based on the characteristics of the multi-device, multi-quality data, several methods with strong generalization ability were contributed in the challenge to make predictions more robust. This shows that REFUGE2 brings attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.
Accurate segmentation of organs or lesions in medical images is crucial for reliable disease diagnosis and organ morphometry. In recent years, convolutional encoder-decoder solutions have achieved substantial progress in automatic medical image segmentation. Due to the inherent bias of convolution operations, prior models mainly focus on local visual cues formed by neighboring pixels but fail to fully model long-range contextual dependencies. In this paper, we propose a novel transformer-based attention-guided network called TransAttUnet, in which multi-level guided attention and multi-scale skip connections are jointly designed to enhance the performance of the semantic segmentation architecture. Inspired by the Transformer, a self-aware attention (SAA) module with both Transformer Self Attention (TSA) and Global Spatial Attention (GSA) is incorporated into TransAttUnet to effectively learn the non-local interactions among encoder features. Moreover, we use additional multi-scale skip connections between decoder blocks to aggregate upsampled features with different semantic scales. In this way, the representation of multi-scale contextual information is strengthened to generate discriminative features. Benefiting from these complementary components, the proposed TransAttUnet can effectively alleviate the loss of fine detail caused by stacked convolution layers and consecutive sampling operations, ultimately improving the segmentation quality of medical images. Extensive experiments on multiple medical image segmentation datasets from different imaging modalities demonstrate that the proposed method consistently outperforms state-of-the-art baselines. Our code and pre-trained models are available at: https://github.com/yishuliu/transattunet.
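The sketch below illustrates how a self-aware attention block of this kind might combine transformer self-attention over flattened feature-map pixels with a sigmoid-gated global spatial attention map; the module names follow the abstract, but the exact formulation in TransAttUnet may differ.

```python
import torch
import torch.nn as nn

class SelfAwareAttention(nn.Module):
    """Rough sketch of a self-aware attention (SAA) block combining transformer
    self-attention (TSA) over flattened pixels with a global spatial attention
    (GSA) gating map; residual fusion of the three terms is an assumption."""

    def __init__(self, channels, num_heads=4):  # channels divisible by num_heads
        super().__init__()
        self.tsa = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.gsa = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                 nn.Sigmoid())  # per-pixel gating map

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tsa_out, _ = self.tsa(tokens, tokens, tokens)
        tsa_out = tsa_out.transpose(1, 2).reshape(B, C, H, W)
        gsa_out = x * self.gsa(x)               # spatially re-weighted features
        return x + tsa_out + gsa_out            # residual fusion
```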
Osteoporosis is a common chronic metabolic bone disease that is often under-diagnosed and under-treated due to limited access to bone mineral density (BMD) examinations, e.g., via dual-energy X-ray absorptiometry (DXA). In this paper, we propose a method to predict BMD from chest X-rays (CXR), one of the most common and lowest-cost medical imaging examinations. Our method first automatically detects regions of interest (ROIs) of local and global bone structures from the CXR. Then, a multi-ROI deep model with a transformer encoder is developed to exploit both local and global information in the chest X-ray image for accurate BMD estimation. Our method is evaluated on 13,719 CXR patient cases with ground-truth BMD scores measured by gold-standard DXA. The model-predicted BMD has a strong correlation with the ground truth (Pearson correlation coefficient 0.889 on lumbar 1). When applied to osteoporosis screening, it achieves high classification performance (AUC 0.963 on lumbar 1). As the first effort in the field to use CXR scans to predict BMD, the proposed algorithm holds strong potential for early osteoporosis screening and public health promotion.
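A minimal sketch of such a multi-ROI design is given below, assuming each detected ROI is encoded by a shared backbone, the ROI embeddings are fused as tokens by a transformer encoder, and a pooled representation regresses the BMD score; all layer sizes and the toy backbone are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiROIBMDRegressor(nn.Module):
    """Hypothetical multi-ROI model: a shared encoder embeds each ROI crop,
    a transformer encoder lets ROIs exchange local/global bone information,
    and a pooled token regresses the scalar BMD estimate."""

    def __init__(self, embed_dim=256):
        super().__init__()
        self.roi_encoder = nn.Sequential(       # stand-in for a CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, 1)     # scalar BMD estimate

    def forward(self, rois):                    # rois: (B, R, 3, h, w)
        B, R = rois.shape[:2]
        tokens = self.roi_encoder(rois.flatten(0, 1)).reshape(B, R, -1)
        fused = self.encoder(tokens)            # ROIs exchange information
        return self.head(fused.mean(dim=1)).squeeze(-1)
```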
Diabetic retinopathy (DR) is a leading cause of vision loss in the world, and early DR detection is necessary to prevent vision loss and support appropriate treatment. In this work, we leverage interactive machine learning and introduce a joint learning framework, termed DRG-Net, to effectively learn both disease grading and multi-lesion segmentation. Our DRG-Net consists of two modules: (i) DRG-AI-System, which classifies DR grading, localizes lesion areas, and provides visual explanations; and (ii) DRG-Expert-Interaction, which receives feedback from expert users and improves the DRG-AI-System. To deal with sparse data, we utilize transfer learning mechanisms to extract invariant feature representations using the Wasserstein distance and adversarial-learning-based entropy minimization. Besides, we propose a novel attention strategy at both low- and high-level features to automatically select the most significant lesion information and provide explainable properties. In terms of human interaction, we further develop DRG-Net as a tool that enables expert users to correct the system's predictions, which may then be used to update the system as a whole. Moreover, thanks to the attention mechanism and the loss-function constraints between lesion features and classification features, our approach remains robust given a certain level of noise in user feedback. We have benchmarked DRG-Net on the two largest DR datasets, i.e., IDRiD and FGADR, and compared it to various state-of-the-art deep learning networks. In addition to outperforming other SOTA approaches, DRG-Net is effectively updated using user feedback, even in a weakly supervised manner.
Camouflaged objects are seamlessly blended into their surroundings, which poses a challenging detection task in computer vision. Optimizing a convolutional neural network (CNN) for camouflaged object detection (COD) tends to activate local discriminative regions while ignoring the complete object extent, causing a partial activation issue that inevitably leads to missing or redundant object regions. In this paper, we argue that partial activation is caused by the intrinsic characteristics of CNNs, where convolution operations produce local receptive fields and have difficulty capturing long-range feature dependencies among image regions. To obtain feature maps that activate the full object extent while keeping the segmentation results from being overwhelmed by noisy features, a novel framework termed the Cross-Model Detail Querying network (DQnet) is proposed. It reasons about the relations between long-range-aware representations and multi-scale local details so that the enhanced representation fully highlights object regions and suppresses noise in non-object regions. Specifically, a vanilla ViT pretrained with self-supervised learning (SSL) is employed to model long-range dependencies among image regions, and a ResNet is employed to learn fine-grained spatial local details at multiple scales. Then, to effectively retrieve object-related details, a Relation-Based Querying (RBQ) module is proposed to explore window-based interactions between the global representations and the multi-scale local details. Extensive experiments are conducted on the widely used COD datasets and show that our DQnet outperforms the current state of the art.
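The sketch below shows one plausible reading of the window-based interaction in RBQ: both feature maps are partitioned into non-overlapping windows, and within each window the ViT's global tokens query the CNN's detail tokens via cross-attention. The window size and the residual form are assumptions, not DQnet's verified design.

```python
import torch
import torch.nn as nn

def window_partition(x, ws):
    """Split (B, C, H, W) features into non-overlapping ws x ws windows,
    returning (B * num_windows, ws * ws, C) token groups."""
    B, C, H, W = x.shape
    x = x.reshape(B, C, H // ws, ws, W // ws, ws)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, C)

class RelationBasedQuerying(nn.Module):
    """Sketch of a relation-based querying step: ViT global tokens query
    same-window CNN detail tokens through cross-attention."""

    def __init__(self, channels, num_heads=4, window_size=8):
        super().__init__()
        self.ws = window_size
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, vit_feat, cnn_feat):      # both (B, C, H, W), same size
        q = window_partition(vit_feat, self.ws)
        kv = window_partition(cnn_feat, self.ws)
        out, _ = self.attn(q, kv, kv)           # retrieve object-related details
        B, C, H, W = vit_feat.shape
        out = out.reshape(B, H // self.ws, W // self.ws, self.ws, self.ws, C)
        out = out.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)
        return vit_feat + out                   # enhanced global representation
```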
Transformers, a new generation of neural architectures, have shown remarkable success in natural language processing and computer vision. However, existing vision transformers struggle to learn with limited medical data and fail to generalize across diverse medical image tasks. To address these challenges, we present MedFormer, a data-scalable transformer for generalizable medical image segmentation. The key designs incorporate a desirable inductive bias, hierarchical modeling with linear-complexity attention computed in a spatially and semantically global manner, and multi-scale feature fusion. MedFormer can learn on tiny- to large-scale data without pre-training. Extensive experiments demonstrate the potential of MedFormer as a general segmentation backbone: it outperforms CNNs and vision transformers on three public datasets covering multiple modalities (e.g., CT and MRI) and diverse medical targets (e.g., healthy organs, diseased tissues, and tumors). We make our models and evaluation pipeline publicly available, offering solid baselines and unbiased comparisons to promote a wide range of downstream clinical applications.
Shortcut learning is common in deep learning models, but it leads to degenerate feature representations and consequently jeopardizes the model's generalizability and interpretability. However, shortcut learning in the widely used vision transformer framework remains largely unexplored. Meanwhile, introducing domain-specific knowledge is a major approach to rectifying shortcuts, which tend to latch on to background-related factors. For example, in the medical imaging field, radiologists' eye-gaze data is an effective form of human visual prior knowledge with great potential to guide deep learning models to focus on meaningful foreground regions. However, obtaining eye-gaze data is time-consuming, labor-intensive, and sometimes even impractical. In this work, we propose a novel and effective saliency-guided vision transformer (SGT) model to rectify shortcut learning in ViT without eye-gaze data. Specifically, a computational visual saliency model is adopted to predict saliency maps for input image samples. The saliency maps are then used to distill the most informative image patches. In the proposed SGT, the self-attention among image patches focuses only on the distilled informative ones. Considering that this distillation operation may cause global information loss, we further introduce a residual connection in the last encoder layer that captures the self-attention across all image patches. Experimental results on four independent public datasets show that our SGT framework can effectively learn and leverage human prior knowledge without eye-gaze data and achieves better performance than baselines. Meanwhile, it successfully rectifies harmful shortcut learning and significantly improves the interpretability of the ViT model, demonstrating the promise of transferring human prior knowledge to rectify shortcut learning.
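A minimal sketch of the distillation step is given below: per-patch saliency scores select the tokens that subsequent self-attention layers operate on, with the returned indices allowing a final residual layer to attend over all patches as described above. The keep ratio and the top-k selection rule are assumptions.

```python
import torch

def distill_patches_by_saliency(tokens, saliency, keep_ratio=0.5):
    """Keep only the most salient image patches before self-attention, a
    sketch of SGT's distillation step (the selection rule is an assumption).

    tokens:   (B, N, C) patch embeddings
    saliency: (B, N) per-patch saliency scores from a visual saliency model
    """
    B, N, C = tokens.shape
    k = max(1, int(N * keep_ratio))
    idx = saliency.topk(k, dim=1).indices                  # (B, k)
    kept = tokens.gather(1, idx.unsqueeze(-1).expand(B, k, C))
    return kept, idx   # idx lets a later layer scatter results back
```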
The Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, several pioneering works have recently adapted transformer-style architectures to the computer vision (CV) field and demonstrated their effectiveness on various CV tasks, relying on modeling capability competitive with modern convolutional neural networks. In this paper, we present a comprehensive review of three hundred different vision transformers for three fundamental CV tasks (classification, detection, and segmentation) and propose a taxonomy that organizes these methods according to their motivation, structure, and usage scenario. Because of differences in training settings and task orientation, we also evaluate these methods on different configurations for easy and intuitive comparison rather than relying only on various benchmarks. Furthermore, we reveal a series of essential but potentially overlooked factors that enable transformers to stand out from numerous architectures, e.g., slack high-level semantic embeddings that bridge the gap between visual and sequential transformers. Finally, three promising directions for future research are suggested for further investigation.
Early detection of retinal diseases is one of the most important means of preventing partial or permanent blindness in patients. In this study, a novel multi-label classification system is proposed for detecting multiple retinal diseases in fundus images collected from a variety of sources. First, a new multi-label retinal disease dataset, the MuReD dataset, is constructed from a number of publicly available datasets. Next, a sequence of post-processing steps is applied to ensure the quality of the image data and the range of diseases present in the dataset. For the first time in fundus multi-label disease classification, a transformer-based model, optimized through extensive experimentation, is used for image analysis and decision making. Numerous experiments were conducted to optimize the configuration of the proposed system. The results show that the approach performs 7.9% and 8.1% better than state-of-the-art works on the same task in terms of disease detection and disease classification, respectively. The obtained results further support the potential application of transformer-based architectures in the field of medical imaging.
Ultra-high resolution image segmentation has attracted increasing interest in recent years due to its real-world applications. In this paper, we innovate on the widely used high-resolution image segmentation pipeline, in which an ultra-high resolution image is partitioned into regular patches for local segmentation and the local results are then merged into a high-resolution semantic mask. In particular, we introduce a novel locality-aware context fusion based segmentation model to process local patches, where the relevance between a local patch and its various contexts is exploited jointly and complementarily to handle semantic regions with large variations. Additionally, we present an alternating local enhancement module that restricts the negative impact of redundant information introduced by the contexts and is thus able to refine the locality-aware features into more accurate results. Furthermore, in comprehensive experiments, we demonstrate that our model outperforms other state-of-the-art methods on public benchmarks. Our released code is available at: https://github.com/liqiokkk/FCtL.
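For reference, the baseline pipeline that this work builds on can be sketched as below: regular patch partitioning, local segmentation, and merging into a full-resolution mask. The locality-aware context fusion itself would replace the plain per-patch model call; `model` here is a hypothetical callable returning a label map of the same spatial size as its input tile.

```python
import numpy as np

def segment_ultra_high_res(image, model, patch=1024):
    """Baseline patch-wise pipeline: split the image into regular patches,
    segment each tile locally, and merge the local masks back into a
    full-resolution semantic mask (context fusion omitted in this sketch).

    image: (H, W, 3) uint8 array; model(tile) -> per-pixel label map
    """
    H, W = image.shape[:2]
    mask = np.zeros((H, W), dtype=np.int64)
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            tile = image[y:y + patch, x:x + patch]   # border tiles are smaller
            mask[y:y + tile.shape[0], x:x + tile.shape[1]] = model(tile)
    return mask
```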
Transformer trackers have achieved impressive advancements recently, where the attention mechanism plays an important role. However, the independent correlation computation in the attention mechanism can result in noisy and ambiguous attention weights, which inhibits further performance improvement. To address this issue, we propose an attention-in-attention (AiA) module, which enhances appropriate correlations and suppresses erroneous ones by seeking consensus among all correlation vectors. Our AiA module can be readily applied to both self-attention blocks and cross-attention blocks to facilitate feature aggregation and information propagation for visual tracking. Moreover, we propose a streamlined transformer tracking framework, dubbed AiATrack, that makes full use of temporal references by introducing efficient feature reuse and target-background embeddings. Experiments show that our tracker achieves state-of-the-art performance on six tracking benchmarks while running at real-time speed.
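A loose sketch of the attention-in-attention idea follows: the raw correlation map is refined by an inner attention that lets each query's correlation vector aggregate consensus from the others before the final softmax. The inner dimensionality and the residual form are guesses from the abstract, not the paper's exact module.

```python
import torch
import torch.nn as nn

class AttentionInAttention(nn.Module):
    """Sketch of an AiA-style refinement: correlation maps are themselves
    refined by an inner attention that seeks consensus among correlation
    vectors before softmax normalization."""

    def __init__(self, num_keys, inner_dim=64):
        super().__init__()
        self.q = nn.Linear(num_keys, inner_dim)
        self.k = nn.Linear(num_keys, inner_dim)

    def forward(self, corr):                    # corr: (B, Nq, Nk) raw scores
        q, k = self.q(corr), self.k(corr)       # embed each correlation vector
        consensus = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
        consensus = consensus.softmax(dim=-1) @ corr   # (B, Nq, Nk)
        return (corr + consensus).softmax(dim=-1)      # refined attention
```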
Transformers have dominated the field of natural language processing and have recently made an impact in computer vision. In the field of medical image analysis, transformers have also been successfully applied to full-stack clinical applications, including image synthesis/reconstruction, registration, segmentation, detection, and diagnosis. Our paper aims to promote awareness and application of transformers in the field of medical image analysis. Specifically, we first outline the core concepts of the attention mechanism built into transformers and other basic components. Second, we review various transformer architectures tailored for medical image applications and discuss their limitations. Within this review, we investigate the key challenges surrounding the use of transformers in different learning paradigms, improving model efficiency, and coupling them with other techniques. We hope this review can give readers a comprehensive picture of transformers in the field of medical image analysis.
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address this issue. Specifically, MSTAT consists of three stages that encode the attribute-associated, identity-associated, and attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized with newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from videos, and show that MSTAT achieves state-of-the-art accuracies on various standard benchmarks.
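Temporal patch shuffling itself is straightforward to sketch: each spatial patch's features are independently permuted along the temporal axis, so the model must rely on appearance rather than frame order. The per-patch independent permutation below is an assumption about the exact scheme.

```python
import torch

def temporal_patch_shuffle(clip_tokens):
    """Randomly permute each spatial patch's features along the temporal axis,
    a simple augmentation sketch of MSTAT's temporal patch shuffling.

    clip_tokens: (B, T, N, C) patch embeddings for a T-frame clip
    """
    B, T, N, C = clip_tokens.shape
    shuffled = clip_tokens.clone()
    for n in range(N):                          # independent order per patch
        perm = torch.randperm(T)
        shuffled[:, :, n] = clip_tokens[:, perm, n]
    return shuffled
```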
Segmenting dental plaque from images of medical-reagent-stained teeth provides valuable information for diagnosis and for determining follow-up treatment plans. However, accurate dental plaque segmentation is a challenging task: it requires identifying teeth and dental plaque subject to semantically ambiguous regions (i.e., confusing boundaries in the border area between teeth and dental plaque) and complex variations of instance shapes, which are not fully addressed by existing methods. Therefore, we propose a semantic decomposition network (SDNet) that introduces two single-task branches to separately address the segmentation of teeth and dental plaque and designs additional constraints to learn category-specific features for each branch, thereby facilitating semantic decomposition and improving dental plaque segmentation performance. Specifically, SDNet learns the two separate segmentation branches for teeth and dental plaque in a divide-and-conquer manner to disentangle the entangled relation between them, and each category-specific branch tends to yield accurate segmentation. To help the two branches better focus on category-specific features, two constraint modules are further proposed: 1) a contrastive constraint module learns discriminative feature representations by maximizing the distance between the representations of different categories, reducing the negative impact of semantically ambiguous regions on feature extraction; 2) a structural constraint module (SCM) provides complete structural information for dental plaque of various shapes by supervising with boundary-aware geometric constraints. In addition, we construct a large-scale open-source Stained Dental Plaque Segmentation dataset (SDPSeg), which provides high-quality annotations for teeth and dental plaque. Experimental results on the SDPSeg dataset show that SDNet achieves state-of-the-art performance.
Pancreatic cancer is one of the most malignant cancers in the world; it deteriorates rapidly and has a very high mortality rate. The rapid on-site evaluation (ROSE) technique innovates the workflow by having on-site pathologists immediately analyze fast-stained cytopathological images, enabling faster diagnosis in this time-pressured process. However, the wider expansion of ROSE diagnosis has been hindered by the lack of experienced pathologists. To overcome this problem, we propose a hybrid high-performance deep learning model to enable an automated workflow, thereby freeing up the precious time occupied by pathologists. By introducing transformer blocks into this field with our specific multi-stage hybrid design, the spatial features generated by the convolutional neural network (CNN) significantly enhance the transformer's global modeling. Using the multi-stage spatial features as global attention guidance, this design combines the robustness of the CNN's inductive bias with the sophisticated global modeling capability of the transformer. A dataset of 4,240 ROSE images was collected to evaluate the method in this unexplored field. The proposed multi-stage hybrid transformer (MSHT) achieves 95.68% classification accuracy, distinctly higher than state-of-the-art models. Facing the need for interpretability, MSHT delineates its attention regions more accurately than its counterparts. The results demonstrate that MSHT can distinguish cancer samples accurately at an unprecedented image scale, laying the foundation for deploying automatic decision systems and expanding ROSE in clinical practice. Code and records are available at: https://github.com/sagizty/multi-stage-hybrid-transformer.
Ophthalmologists have long used fundus images to screen for and diagnose eye diseases. However, different devices and ophthalmologists introduce large variations in the quality of fundus images. Low-quality (LQ) degraded fundus images easily lead to uncertainty in clinical screening and generally increase the risk of misdiagnosis. Therefore, real fundus image restoration is worth studying. Unfortunately, no real clinical benchmark has so far been explored for this task. In this paper, we investigate the real clinical fundus image restoration problem. First, we establish a clinical dataset, Real Fundus (RF), comprising 120 low- and high-quality (HQ) image pairs. Then, we propose a novel transformer-based generative adversarial network (RFormer) to restore the real degradation of clinical fundus images. The key component of our network is the window-based self-attention block (WSAB), which captures non-local self-similarity and long-range dependencies. To produce more visually pleasing results, a transformer-based discriminator is introduced. Extensive experiments on our clinical benchmark show that the proposed RFormer significantly outperforms state-of-the-art (SOTA) methods. Furthermore, experiments on downstream tasks such as vessel segmentation and optic disc/cup detection show that our proposed RFormer benefits clinical fundus image analysis and applications. The dataset, code, and models will be released.
By studying the progression of retinal biostructures, it is feasible to identify the presence and severity of eye diseases. Fundus examination is a diagnostic procedure that inspects the biostructures and anomalies of the eye. Ophthalmic diseases such as glaucoma, diabetic retinopathy, and cataract are the main causes of visual impairment around the world. The Ocular Disease Intelligent Recognition (ODIR-5K) dataset is a benchmark fundus image dataset used by researchers for multi-label multi-disease classification. This work presents a discriminative kernel convolution network (DKCNet), which explores features in discriminative regions without adding extra computational cost. DKCNet consists of an attention block followed by a squeeze-and-excitation (SE) block. The attention block takes features from the backbone network and generates discriminative feature attention maps. The SE block takes the discriminative feature maps and improves channel interdependence. Better performance of DKCNet is observed with an InceptionResNet backbone for multi-label classification of ODIR-5K fundus images, with 96.08 AUC, 94.28 F1-score, and 0.81 kappa score. The proposed method splits the common target label into eye-pair labels based on diagnostic keywords, and oversampling and undersampling are performed on these labels to resolve class imbalance. To check the bias of the proposed model toward the training data, the model trained on the ODIR dataset is tested on three publicly available benchmark datasets. It is found to give good performance on completely unseen fundus images as well.
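The attention-then-SE pairing described above can be sketched as follows; the SE block is the standard squeeze-and-excitation design, while the 1x1-conv spatial attention map is an assumed, simplified stand-in for DKCNet's attention block.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation: global-average-pool the spatial map,
    then re-weight channels; DKCNet places this after its attention block."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze -> excite
        return x * w[:, :, None, None]

class DKCStyleHead(nn.Module):
    """Hypothetical pairing as described in the abstract: a spatial attention
    map over backbone features, followed by the SE block."""

    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.se = SEBlock(channels)

    def forward(self, feat):                    # feat: backbone features
        return self.se(feat * self.spatial(feat))
```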
Estimating the 2D human pose in each view is typically the first step in calibrated multi-view 3D pose estimation. However, the performance of 2D pose detectors suffers in challenging situations such as occlusions and oblique viewing angles. To address these challenges, previous works derive point-to-point correspondences between different views from epipolar geometry and utilize the correspondences to merge prediction heatmaps or feature representations. Instead of post-prediction merging/calibration, we introduce a transformer framework for multi-view 3D pose estimation, aiming to directly improve individual 2D predictions by integrating information from different views. Inspired by previous multi-modal transformers, we design a unified transformer architecture, named TransFusion, to fuse cues from both the current view and neighboring views. Moreover, we propose the concept of the epipolar field to encode 3D positional information into the transformer model. The 3D positional encoding guided by the epipolar field provides an efficient way of encoding correspondences between pixels of different views. Experiments on Human3.6M and Ski-Pose show that our method is more efficient and achieves consistent improvements compared with other fusion methods. Specifically, we achieve 25.8 mm MPJPE on Human3.6M with only 5M parameters at 256 x 256 resolution.
With the rapid advances of image editing techniques in recent years, image manipulation detection has attracted considerable attention due to the increasing security risks posed by tampered images. To address these challenges, a novel multi-scale multi-grained deep network (MSMG-Net) is proposed to automatically identify manipulated regions. In our MSMG-Net, a parallel multi-scale feature extraction structure is used to extract multi-scale features. Multi-grained feature learning is then utilized to perceive the object-level semantic relations of the multi-scale features by introducing shunted self-attention. To fuse multi-scale, multi-grained features, a global and local feature fusion block is designed for manipulated region segmentation in a bottom-up manner, and a multi-level feature aggregation block is designed for edge artifact detection in a top-down manner. Thus, MSMG-Net can effectively perceive object-level semantics and encode edge artifacts. Experimental results on five benchmark datasets demonstrate the superior performance of the proposed method, which outperforms state-of-the-art manipulation detection and localization methods. Extensive ablation experiments and feature visualizations demonstrate that multi-scale multi-grained learning yields effective visual representations of manipulated regions. In addition, MSMG-Net shows better robustness when various post-processing methods further manipulate the images.
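Shunted self-attention, which the multi-grained learning relies on, can be sketched as below: different head groups attend to keys and values pooled at different rates, so a single layer mixes fine and coarse granularity. This follows the general shunted self-attention idea rather than MSMG-Net's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShuntedSelfAttentionSketch(nn.Module):
    """Loose sketch of shunted self-attention: each head group sees keys and
    values average-pooled at its own rate, mixing granularities in one layer."""

    def __init__(self, dim, rates=(1, 2)):      # dim divisible by len(rates)
        super().__init__()
        self.rates = rates
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, hw):                   # x: (B, N, C), hw = (H, W)
        B, N, C = x.shape
        H, W = hw                               # requires N == H * W
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        d = C // len(self.rates)
        outs = []
        for i, r in enumerate(self.rates):
            ki = k[..., i * d:(i + 1) * d].transpose(1, 2).reshape(B, d, H, W)
            vi = v[..., i * d:(i + 1) * d].transpose(1, 2).reshape(B, d, H, W)
            if r > 1:                           # coarser granularity
                ki, vi = F.avg_pool2d(ki, r), F.avg_pool2d(vi, r)
            ki = ki.flatten(2).transpose(1, 2)  # (B, N / r^2, d)
            vi = vi.flatten(2).transpose(1, 2)
            qi = q[..., i * d:(i + 1) * d]
            attn = (qi @ ki.transpose(-2, -1)) / d ** 0.5
            outs.append(attn.softmax(dim=-1) @ vi)
        return self.proj(torch.cat(outs, dim=-1))
```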