Localizing anatomical landmarks is an important task in medical image analysis. However, the landmarks to be localized often lack prominent visual features. Their locations are elusive and easily confused with the background, so precise localization depends heavily on the context formed by their surrounding areas. In addition, the required precision is usually higher than in segmentation and object detection tasks. Therefore, localization poses unique challenges distinct from segmentation or detection. In this paper, we propose a zoom-in attentive network (ZIAN) for anatomical landmark localization in ocular images. First, a coarse-to-fine, or "zoom-in", strategy is used to learn contextualized features at different scales. Then, an attentive fusion module is adopted to aggregate the multi-scale features; it consists of 1) a co-attention network with a multiple regions-of-interest (ROIs) scheme that learns complementary features from the multiple ROIs, and 2) an attention-based fusion module that integrates the multi-ROI features and non-ROI features. We evaluated ZIAN on two open challenge tasks, i.e., fovea localization in fundus images and scleral spur localization in AS-OCT images. Experiments show that ZIAN achieves promising performance and outperforms state-of-the-art localization methods. The source code and trained models of ZIAN are available at https://github.com/leixiaofeng-astar/OMIA9-ZIAN.
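To make the attentive fusion concrete, below is a minimal PyTorch sketch of co-attention across two ROI feature maps followed by an attention-based fusion with non-ROI context features. The module name, the single-head attention, and the gating scheme are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of attention-based fusion of multi-ROI features, in the
# spirit of ZIAN's attentive fusion module. Shapes and the gating design are
# assumptions for illustration only.
import torch
import torch.nn as nn

class AttentiveROIFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 projections to queries/keys/values for cross-attention
        self.q = nn.Conv2d(channels, channels, kernel_size=1)
        self.k = nn.Conv2d(channels, channels, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)
        # per-pixel fusion weights over ROI vs. non-ROI (context) features
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, 2, kernel_size=1),
                                  nn.Softmax(dim=1))

    def cross_attend(self, a, b):
        """Let features of ROI `a` attend to complementary features of ROI `b`."""
        B, C, H, W = a.shape
        q = self.q(a).flatten(2).transpose(1, 2)        # B x HW x C
        k = self.k(b).flatten(2)                        # B x C x HW
        v = self.v(b).flatten(2).transpose(1, 2)        # B x HW x C
        attn = torch.softmax(q @ k / C ** 0.5, dim=-1)  # B x HW x HW
        return (attn @ v).transpose(1, 2).reshape(B, C, H, W)

    def forward(self, roi_small, roi_large, non_roi):
        # co-attention between the two zoomed-in ROIs (assumed same spatial size)
        roi_feat = self.cross_attend(roi_small, roi_large)
        # attention-based fusion of ROI and non-ROI features
        w = self.gate(torch.cat([roi_feat, non_roi], dim=1))  # B x 2 x H x W
        return w[:, :1] * roi_feat + w[:, 1:] * non_roi
```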
Localization of anatomical landmarks is essential for clinical diagnosis, treatment planning, and research. In this paper, we propose a new deep network, named Feature Aggregation and Refinement Network (FARNet), for the automatic detection of anatomical landmarks. To alleviate the problem of limited training data in the medical domain, our network adopts a deep network pre-trained on natural images as the backbone, and several popular backbones are compared. Our FARNet also includes a multi-scale feature aggregation module for multi-scale feature fusion and a feature refinement module for high-resolution heatmap regression. Coarse-to-fine supervision is applied to the two modules to facilitate end-to-end training. We further propose a novel loss function, named Exponential Weighted Center loss, for accurate heatmap regression, which focuses on the losses of pixels near the landmarks and suppresses those of pixels far away. Our network has been evaluated on three publicly available anatomical landmark detection datasets, comprising cephalometric radiographs, hand radiographs, and spine radiographs, and achieves state-of-the-art performance on all three datasets. Code is available at https://github.com/juvenileinwind/farnet
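As an illustration of the weighting idea behind such a loss, here is a hedged sketch of a heatmap-regression loss that up-weights pixels where the target Gaussian heatmap is high (near the landmark) and suppresses pixels far away; the exact weighting function and the hyper-parameter `alpha` are assumptions rather than the paper's formulation.

```python
# Hedged sketch: exponentially weighted squared error for heatmap regression.
import torch

def exp_weighted_heatmap_loss(pred, target, alpha: float = 10.0):
    """pred, target: tensors of shape (B, num_landmarks, H, W); target in [0, 1]."""
    weight = torch.exp(alpha * target)      # ~1 far from the landmark, large near it
    loss = weight * (pred - target) ** 2    # weighted per-pixel squared error
    return loss.mean()

# usage: loss = exp_weighted_heatmap_loss(model(image), gaussian_heatmaps)
```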
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning in color fundus photography (CFP) analysis is also evolving. Although there are some open-source, labeled datasets of CFPs in the ophthalmology community, large-scale datasets for screening only have labels of disease categories, and datasets with annotations of fundus structures are usually small in size. In addition, labeling standards are not uniform across datasets, and there is no clear information on the acquisition device. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis as part of an original challenge -- the Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2000 color fundus images with annotations of glaucoma classification, optic disc/cup segmentation, and fovea localization. Meanwhile, the REFUGE2 challenge sets three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Based on the characteristics of the multi-device and multi-quality data, some methods with strong generalization ability are provided in the challenge to make the predictions more robust. This shows that REFUGE2 brings attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.
Accurately segmenting teeth and identifying the corresponding anatomical landmarks on dental mesh models are essential in computer-aided orthodontic treatment. Performing these two tasks manually is time-consuming, tedious, and, more importantly, highly dependent on the orthodontist's experience due to abnormalities and large-scale variations of patients' teeth. Some machine-learning-based methods have been designed and applied in the orthodontic field to automatically segment dental meshes (e.g., intraoral scans). In contrast, the number of studies on tooth landmark localization is still limited. This paper proposes a two-stage framework based on mesh deep learning (called TS-MDL) for joint tooth labeling and landmark identification on raw intraoral scans. Our TS-MDL first adopts an end-to-end iMeshSegNet method (i.e., a variant of the existing MeshSegNet with improved accuracy and efficiency) to label each tooth on the downsampled scan. Guided by the segmentation outputs, our TS-MDL then selects the region of interest (ROI) of each tooth on the original mesh and constructs a lightweight variant of PointNet (i.e., PointNet-Reg) to regress the corresponding landmark heatmaps. Our TS-MDL was evaluated on a real-world clinical dataset and shows promising segmentation and localization performance. Specifically, iMeshSegNet in the first stage of TS-MDL achieved an average Dice similarity coefficient (DSC) of 0.964±0.054, significantly outperforming the original MeshSegNet. In the second stage, PointNet-Reg achieved a mean absolute error (MAE) of 0.597±0.761 between the predicted and ground-truth positions of 66 landmarks, which is superior to other networks for landmark detection. All these results suggest the potential of our TS-MDL for use in clinical practice.
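For reference, the Dice similarity coefficient (DSC) reported above for tooth labeling can be computed per tooth label as sketched below; treating label 0 as gingiva/background and averaging over the remaining labels is an assumption about the evaluation protocol, not a detail from the paper.

```python
# Hedged sketch of per-label Dice similarity coefficient (DSC) on a labeled mesh.
import numpy as np

def dice(pred_labels, gt_labels, label):
    p = pred_labels == label
    g = gt_labels == label
    inter = np.logical_and(p, g).sum()
    return 2.0 * inter / (p.sum() + g.sum() + 1e-8)

def mean_dsc(pred_labels, gt_labels):
    # assumed: label 0 is gingiva/background and is excluded from the average
    labels = [l for l in np.unique(gt_labels) if l != 0]
    return float(np.mean([dice(pred_labels, gt_labels, l) for l in labels]))
```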
Osteoporosis is a common chronic metabolic bone disease that is often under-diagnosed and under-treated due to limited access to bone mineral density (BMD) examinations, e.g., via dual-energy X-ray absorptiometry (DXA). In this paper, we propose a method to predict BMD from chest X-rays (CXR), one of the most common and low-cost medical imaging examinations. Our method first automatically detects regions of interest (ROIs) of local and global bone structures from the CXR. A multi-ROI deep model with a transformer encoder is then developed to exploit both local and global information in the chest X-ray image for accurate BMD estimation. Our method is evaluated on 13719 CXR patient cases with their ground-truth BMD scores measured by gold-standard DXA. The BMD predicted by the model is strongly correlated with the ground truth (Pearson correlation coefficient 0.889 on lumbar 1). When applied to osteoporosis screening, it achieves high classification performance (AUC 0.963 on lumbar 1). As the first effort to use CXR scans to predict BMD, the proposed algorithm holds strong potential for early osteoporosis screening and public health promotion.
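The Pearson correlation coefficient quoted above measures the linear agreement between the predicted and DXA-measured BMD scores; a minimal NumPy sketch of the computation follows.

```python
# Pearson correlation between predicted and ground-truth BMD values.
import numpy as np

def pearson_r(pred, gt):
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    pc, gc = pred - pred.mean(), gt - gt.mean()
    return float((pc * gc).sum() / (np.sqrt((pc ** 2).sum() * (gc ** 2).sum()) + 1e-12))
```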
As an important upstream task for many medical applications, supervised landmark localization still requires a non-negligible annotation cost to achieve desirable performance. In addition, because of cumbersome collection procedures, the limited size of medical landmark datasets impacts the effectiveness of large-scale self-supervised pre-training methods. To address these challenges, we propose a two-stage framework for one-shot medical landmark localization, which first infers landmarks on unlabeled targets by unsupervised registration from the labeled exemplar, and then utilizes these noisy pseudo labels to train a robust detector. To handle significant structural variations, we learn an end-to-end cascade of global alignment and local deformations, guided by novel loss functions that incorporate edge information. In the second stage, we explore self-consistency for selecting reliable pseudo labels and cross-consistency for semi-supervised learning. Our method achieves state-of-the-art performance on public datasets of different body parts, which demonstrates its general applicability.
Color fundus photography and Optical Coherence Tomography (OCT) are the two most cost-effective tools for glaucoma screening. Both imaging modalities contain prominent biomarkers that indicate suspected glaucoma. Clinically, it is often recommended to take both screenings for a more accurate and reliable diagnosis. However, although numerous algorithms have been proposed for computer-aided diagnosis based on fundus images or OCT volumes, there are still few methods that leverage both modalities for glaucoma assessment. Inspired by the success of the Retinal Fundus Glaucoma Challenge (REFUGE) we held previously, we set up the Glaucoma grAding from Multi-Modality imAges (GAMMA) Challenge to encourage the development of fundus & OCT-based glaucoma grading. The primary task of the challenge is to grade glaucoma from both 2D fundus images and 3D OCT scanning volumes. As part of GAMMA, we have publicly released a glaucoma-annotated dataset with both 2D fundus color photography and 3D OCT volumes, which is the first multi-modality dataset for glaucoma grading. In addition, an evaluation framework has been established to assess the performance of the submitted methods. During the challenge, 1272 results were submitted, and the top 10 teams were selected for the final stage. We analyze their results and summarize their methods in this paper. Since all these teams submitted their source code in the challenge, a detailed ablation study is also conducted to verify the effectiveness of the particular modules proposed. We find that many of the proposed techniques are practical for the clinical diagnosis of glaucoma. As the first in-depth study of fundus & OCT multi-modality glaucoma grading, we believe the GAMMA Challenge will be an essential starting point for future research.
This paper presents a novel Transformer-based facial landmark localization network named Localization Transformer (LOTR). The proposed framework is a direct coordinate regression approach that leverages a Transformer network to better utilize the spatial information in feature maps. A LOTR model consists of three main modules: 1) a visual backbone that converts an input image into a feature map, 2) a Transformer module that improves the feature representation from the visual backbone, and 3) a landmark prediction head that directly predicts the landmark coordinates from the Transformer's representation. Given cropped and aligned face images, the proposed LOTR can be trained end-to-end without any post-processing steps. This paper also introduces the smooth-Wing loss function, which addresses the gradient discontinuity of the Wing loss and leads to better convergence than standard loss functions such as the L1, L2, and Wing losses. Experimental results on the JD-landmark dataset provided by the First Grand Challenge of 106-Point Facial Landmark Localization demonstrate the superiority of LOTR over existing methods on the leaderboard and over recent heatmap-based approaches. On the WFLW dataset, the proposed LOTR framework shows promising results compared with several state-of-the-art methods. Furthermore, we report an improvement in state-of-the-art face recognition performance when using our proposed LOTRs for face alignment.
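The smooth-Wing loss is a variant of the Wing loss, whose standard form is sketched below for context; the paper's smooth variant additionally removes the gradient discontinuity at the transition point, and the default `w` and `eps` values here are common choices rather than the paper's settings.

```python
# Standard Wing loss (the basis that the smooth-Wing variant modifies), sketched
# for coordinate regression of landmarks.
import torch

def wing_loss(pred, target, w: float = 10.0, eps: float = 2.0):
    x = (pred - target).abs()
    # constant that makes the two pieces meet at |x| = w
    c = w - w * torch.log(torch.tensor(1.0 + w / eps))
    loss = torch.where(x < w, w * torch.log(1.0 + x / eps), x - c)
    return loss.mean()
```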
Accurate facial landmarks are an essential prerequisite for many tasks related to human faces. In this paper, an accurate facial landmark detector is proposed based on cascaded transformers. We formulate facial landmark detection as a coordinate regression task so that the model can be trained end-to-end. With self-attention in transformers, our model can inherently exploit the structured relationships between landmarks, which benefits landmark detection under challenging conditions such as large pose and occlusion. During cascaded refinement, our model extracts the most relevant image features around the target landmark for coordinate prediction based on a deformable attention mechanism, leading to more accurate alignment. In addition, we propose a novel decoder that refines image features and landmark positions simultaneously. With only a small increase in parameters, the detection performance improves further. Our model achieves new state-of-the-art performance on several standard facial landmark detection benchmarks and shows good generalization ability in cross-dataset evaluation.
The automated detection of retinal structures, such as retinal vessels (RV), the foveal avascular zone (FAZ), and retinal vascular junctions (RVJ), is of great importance for understanding eye diseases and for clinical decision-making. In this paper, we propose a novel voting-based adaptive feature fusion multi-task network (VAFF-Net) for the joint segmentation, detection, and classification of RV, FAZ, and RVJ in optical coherence tomography angiography (OCTA). A task-specific voting gate module is proposed to adaptively extract and fuse features for specific tasks at two levels: features at different spatial positions from a single encoder, and features from multiple encoders. In particular, since the complexity of the microvasculature in OCTA images makes the simultaneous precise localization and classification of retinal vascular junctions into bifurcations/crossings a challenging task, we specifically design a task head that combines heatmap regression and grid classification. Instead of following existing methods that use only a single en face angiogram, we take advantage of three different en face angiograms from various retinal layers. To facilitate further research, part of these datasets has been released for public access: https://github.com/imed-lab/vaff-net.
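As a rough illustration of a head that combines heatmap regression with grid classification for junction detection, the sketch below predicts a junction heatmap together with per-cell class logits (bifurcation / crossing / background); the grid size, class set, and layer choices are assumptions, not the VAFF-Net implementation.

```python
# Hedged sketch: joint heatmap-regression and grid-classification head for
# retinal vascular junction detection.
import torch.nn as nn

class JunctionHead(nn.Module):
    def __init__(self, in_channels: int, grid: int = 16):
        super().__init__()
        self.heatmap = nn.Conv2d(in_channels, 1, kernel_size=1)  # where is a junction?
        self.classifier = nn.Sequential(                          # what kind, per grid cell?
            nn.AdaptiveAvgPool2d(grid),
            nn.Conv2d(in_channels, 3, kernel_size=1),              # bifurcation / crossing / none
        )

    def forward(self, feat):
        return self.heatmap(feat), self.classifier(feat)
```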
Annotated medical images are expensive, and sometimes even impossible to acquire, which limits landmark detection accuracy to a certain extent. Semi-supervised learning alleviates the reliance on large-scale annotated data by exploiting unlabeled data to understand the population structure of anatomical landmarks. The global shape constraint is an inherent property of anatomical landmarks and provides valuable guidance for more consistent pseudo labels, yet it has been ignored in previous semi-supervised methods. In this paper, we propose a model-agnostic, shape-regulated self-training framework for semi-supervised landmark detection that fully considers the global shape constraint. Specifically, to ensure that the pseudo labels are reliable and consistent, a PCA-based shape model adjusts the pseudo labels and eliminates abnormal ones. A novel region attention loss makes the network automatically focus on the structure-consistent regions around the pseudo labels. Extensive experiments show that our method outperforms other semi-supervised methods and achieves notable improvements on three medical image datasets. Moreover, our framework is flexible and can be used as a plug-and-play module integrated into most supervised methods to further improve performance.
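A hedged sketch of how a PCA-based shape model can regularize noisy pseudo landmarks: fit PCA on the labeled landmark configurations, then project each noisy configuration onto the leading shape modes so implausible shapes are pulled back toward the population structure. The number of modes and the use of scikit-learn are illustrative assumptions.

```python
# Hedged sketch: shape-space projection of noisy pseudo landmark labels.
import numpy as np
from sklearn.decomposition import PCA

def fit_shape_model(labeled_shapes, n_modes: int = 5):
    """labeled_shapes: (N, 2 * num_landmarks) array of flattened (x, y) coordinates."""
    pca = PCA(n_components=n_modes)
    pca.fit(labeled_shapes)
    return pca

def regularize_pseudo_labels(pca, noisy_shape):
    """Project one noisy pseudo-labeled shape onto the learned shape subspace."""
    coeff = pca.transform(noisy_shape[None])    # shape coefficients of the noisy configuration
    return pca.inverse_transform(coeff)[0]      # plausible shape, outlying deviations suppressed
```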
This paper investigates the task of 2D whole-body human pose estimation, which aims to localize keypoints on the entire human body, including the body, feet, face, and hands. We propose a single-network approach, termed ZoomNet, to take into account the hierarchical structure of the full human body and to address the scale variation of different body parts. We further propose a neural architecture search framework, termed ZoomNAS, to promote both the accuracy and the efficiency of whole-body pose estimation. ZoomNAS jointly searches the model architecture and the connections between different sub-modules, and automatically allocates computational complexity to the searched sub-modules. To train and evaluate ZoomNAS, we introduce the first large-scale 2D human whole-body dataset, namely COCO-WholeBody V1.0, which annotates 133 keypoints for in-the-wild images. Extensive experiments demonstrate the effectiveness of ZoomNAS and the significance of COCO-WholeBody V1.0.
With the prevalence of LiDAR sensors in autonomous driving, 3D object tracking has received increasing attention. In a point cloud sequence, 3D object tracking aims to predict the location and orientation of an object in consecutive frames given an object template. Motivated by the success of transformers, we propose the Point Tracking Transformer (PTTR), which efficiently predicts high-quality 3D tracking results in a coarse-to-fine manner with the help of transformer operations. PTTR consists of three novel designs. 1) Instead of random sampling, we design relation-aware sampling to preserve the points relevant to the given template during subsampling. 2) We propose a point relation transformer for effective feature aggregation and feature matching between the template and the search region. 3) Based on the coarse tracking results, we employ a novel prediction refinement module to obtain the final refined prediction through local feature pooling. In addition, motivated by the favorable properties of the bird's-eye view (BEV) of point clouds in capturing object motion, we further design a more advanced framework named PTTR++, which incorporates both the point-wise view and the BEV representation to exploit their complementary effect in generating high-quality tracking results. PTTR++ substantially boosts the tracking performance on top of PTTR with low computational overhead. Extensive experiments on multiple datasets show that our proposed approaches achieve superior 3D tracking accuracy and efficiency.
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates by constructing complex ensembles which combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development in deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, are introduced to address the problems existing in traditional architectures. These models behave differently in network architecture, training strategy and optimization function, etc. In this paper, we provide a review on deep learning based object detection frameworks. Our review begins with a brief introduction on the history of deep learning and its representative tool, namely Convolutional Neural Network (CNN). Then we focus on typical generic object detection architectures along with some modifications and useful tricks to improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection and pedestrian detection. Experimental analyses are also provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are provided to serve as guidelines for future work in both object detection and relevant neural network based learning systems.
With the rapid advances in image editing techniques in recent years, image manipulation detection has attracted considerable attention owing to the increasing security risks posed by tampered images. To address these challenges, a novel multi-scale multi-grained deep network (MSMG-Net) is proposed to automatically identify manipulated regions. In our MSMG-Net, a parallel multi-scale feature extraction structure is used to extract multi-scale features. Multi-grained feature learning is then utilized to perceive the object-level semantic relations of the multi-scale features by introducing shunted self-attention. To fuse the multi-scale multi-grained features, a global and local feature fusion block is designed for manipulated-region segmentation in a bottom-up manner, and a multi-level feature aggregation block is designed for edge-artifact detection in a top-down manner. Thus, MSMG-Net can effectively perceive object-level semantics and encode edge artifacts. Experimental results on five benchmark datasets demonstrate the superior performance of the proposed method, which outperforms state-of-the-art manipulation detection and localization methods. Extensive ablation experiments and feature visualizations show that multi-scale multi-grained learning yields effective visual representations of manipulated regions. In addition, MSMG-Net shows better robustness when images are further manipulated by various post-processing methods.
Modern high-performance semantic segmentation methods employ heavy backbones and dilated convolutions to extract rich features. Although extracting features with contextual and semantic information is critical for segmentation tasks, it brings a large memory footprint and high computational cost for real-time applications. This paper presents a new model that achieves a trade-off between accuracy and speed for real-time road-scene semantic segmentation. Specifically, we propose a lightweight model named Scale-aware Strip Attention Guided Feature Pyramid Network (S²-FPN). Our network consists of three main modules: an Attention Pyramid Fusion (APF) module, a Scale-aware Strip Attention Module (SSAM), and a Global Feature Upsample (GFU) module. APF adopts an attention mechanism to learn discriminative multi-scale features and helps to close the semantic gap between different levels. APF uses scale-aware attention to encode the global context with a vertical striping operation and to model long-range dependencies, which helps relate pixels with similar semantic labels. In addition, APF employs a channel-wise reweighting block (CRB) to emphasize channel features. Finally, the decoder of S²-FPN adopts GFU to fuse the features from APF and the encoder. Extensive experiments have been conducted on two challenging semantic segmentation benchmarks and demonstrate that our approach achieves a better accuracy/speed trade-off under different model settings. The proposed models achieve 76.2% mIoU/87.3 FPS, 77.4% mIoU/67 FPS, and 77.8% mIoU/30.5 FPS on the Cityscapes dataset, and 69.6%, 71.0%, and 74.2% mIoU on the CamVid dataset. The code for this work will be available at https://github.com/mohamedac29/s2-fpn
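As a simple illustration of the vertical striping idea mentioned above, the sketch below pools each vertical strip (image column) to encode column-wise global context and uses the result to reweight the feature map; the layer sizes and sigmoid gating are assumptions rather than the S²-FPN code.

```python
# Hedged sketch: column-wise (vertical strip) attention over a feature map.
import torch
import torch.nn as nn

class VerticalStripAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        strip = x.mean(dim=2, keepdim=True)     # pool over height -> B x C x 1 x W
        attn = self.sigmoid(self.conv(strip))   # column-wise attention weights
        return x * attn                         # broadcast back over the height axis
```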
In recent years, significant progress has been made in crowd counting research. However, owing to the challenging scale variations and complex scenes present in crowds, neither traditional convolutional networks nor recent transformer architectures with fixed-size attention can handle the task well. To address this problem, this paper proposes a scene-adaptive attention network, termed SAANet. First, we design a transformer backbone with built-in deformable attention, which learns adaptive feature representations with deformable sampling locations and dynamic attention weights. We then propose a multi-level feature fusion module and a count-attentive feature enhancement module to strengthen the feature representation under the global image context. The learned representations can attend to the foreground and adapt to different scales of crowds. We conduct extensive experiments on four challenging crowd counting benchmarks, showing that our method achieves state-of-the-art performance. In particular, our method currently ranks first on the public leaderboard of the NWPU-Crowd benchmark. We hope that our method can serve as a strong baseline to support future research on crowd counting. The source code will be released to the community.
Transformers, as a new generation of neural architectures, have demonstrated excellent performance in natural language processing and computer vision. However, existing vision transformers struggle to learn from limited medical data and are unable to generalize to diverse medical image tasks. To tackle these challenges, we present MedFormer, a data-scalable transformer for generalizable medical image segmentation. The key designs incorporate a desirable inductive bias, hierarchical modeling with linear-complexity attention, and multi-scale feature fusion in a spatially and semantically global manner. MedFormer can learn from tiny- to large-scale data without pre-training. Extensive experiments demonstrate MedFormer's potential as a general segmentation backbone, outperforming CNNs and vision transformers on three public datasets with multiple modalities (e.g., CT and MRI) and diverse medical targets (e.g., healthy organs, diseased tissues, and tumors). We make our models and evaluation pipeline publicly available, offering solid baselines and unbiased comparisons to promote a wide range of downstream clinical applications.
Due to its importance in facial behaviour analysis, facial action unit (AU) detection has attracted increasing attention from the research community. Leveraging the online knowledge distillation framework, we propose the "FANTrans" method for AU detection. Our model consists of a hybrid network of convolution and transformer blocks to learn per-AU features and to model AU co-occurrences. The model uses a pre-trained face alignment network as the feature extractor. After further transformation by a small learnable add-on convolutional subnet, the per-AU features are fed into transformer blocks to enhance their representation. As multiple AUs often appear together, we propose a learnable attention drop mechanism in the transformer block to learn the correlation between the features for different AUs. We also design a classifier that predicts AU presence by considering all AUs' features, to explicitly capture label dependencies. Finally, we adapt online knowledge distillation to the training stage for this task, further improving the model's performance. Experiments on the BP4D and DISFA datasets demonstrate the effectiveness of the proposed method.
Forensic analysis depends on identifying hidden traces in manipulated images. Traditional neural networks fail at this task because of their inability to handle feature attenuation and their reliance on dominant spatial features. In this work, we propose a novel Gated Context Attention Network (GCA-Net) that utilizes non-local attention blocks for global context learning. In addition, we use the gated attention mechanism in conjunction with a dense decoder network to direct the flow of relevant features during the decoding phase, allowing precise localization. The proposed attention framework allows the network to focus on relevant regions by filtering the coarse features. Furthermore, by utilizing multi-scale feature fusion and efficient learning strategies, GCA-Net can better handle the scale variation of manipulated regions. We show that our method outperforms state-of-the-art networks by an average of 4.2%-5.4% AUC on multiple benchmark datasets. Finally, we also conduct extensive ablation experiments to demonstrate the method's robustness for image forensics.