Portable supine chest radiographs have inherently poor image quality owing to low contrast and high noise. Assessment of endotracheal intubation requires the locations of the endotracheal tube (ETT) tip and the carina; the goal is to find the distance between the ETT tip and the carina on the chest radiograph. To overcome these problems, we propose a feature extraction method based on Mask R-CNN. Mask R-CNN predicts the tube and the tracheal bifurcation in the image; the feature extraction method is then used to locate the feature point of the ETT tip and that of the carina, from which the ETT-carina distance is obtained. In our experiments, our results exceed 96% in terms of both recall and precision. Moreover, the object error is less than $4.7751 \pm 5.3420$ mm, and the ETT-carina distance error is less than $5.5432 \pm 6.3100$ mm. External validation shows that the proposed method is a highly robust system. In terms of the Pearson correlation coefficient, our measurements correlate strongly with those of board-certified intensivists with respect to the ETT-carina distance.
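The final measurement step described above reduces to a Euclidean distance between the two detected feature points, scaled by the radiograph's pixel spacing. A minimal sketch of that step follows; the function name, the coordinates, and the 0.5 mm/px spacing are illustrative assumptions, not the paper's implementation:

```python
import math

def ett_carina_distance_mm(tip_px, carina_px, pixel_spacing_mm):
    """Euclidean distance between the ETT-tip and carina feature points,
    converted from pixel units to millimetres via the pixel spacing."""
    dx = (tip_px[0] - carina_px[0]) * pixel_spacing_mm
    dy = (tip_px[1] - carina_px[1]) * pixel_spacing_mm
    return math.hypot(dx, dy)

# Hypothetical coordinates: a tip 30 px directly above the carina at 0.5 mm/px
print(ett_carina_distance_mm((100, 200), (100, 230), 0.5))  # 15.0
```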
translated by 谷歌翻译
Peripherally inserted central catheters (PICCs) have been widely used as one of the representative central venous lines (CVCs) owing to their low infection rate during long-term intravascular placement. However, PICC tip misplacement occurs frequently, increasing the risk of complications such as perforation, embolism, and arrhythmia. Various attempts have been made to detect the tip automatically and precisely using the latest deep learning (DL) technologies. Even with these methods, however, it remains difficult in practice to determine the tip location, because a multiple-fragments phenomenon (MFP) occurs in the process of predicting and extracting the PICC line, which is required before the tip can be identified. This study aimed to develop a system, generally applicable to existing models, that restores the PICC line more accurately by removing the multiple fragments from the model output, thereby precisely localizing the actual tip position. To this end, we propose a multi-stage DL-based post-processing framework that post-processes the PICC line extraction results of existing techniques. Performance was compared in terms of root mean squared error (RMSE) and MFP incidence for five conventional models, with and without the proposed MFCN. In internal validation, applying the MFCN to the existing single models improved the MFP incidence by 45% on average, and the RMSE improved by more than 63%, from an average of 26.85 mm (17.16 to 35.80 mm) to 9.72 mm (9.37 to 10.98 mm). In external validation, applying the MFCN decreased the MFP incidence by 32% and the RMSE by 65% on average. Therefore, by applying the proposed MFCN, we observed significantly more consistent detection of the PICC tip position compared with the existing models.
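The MFCN itself is a learned multi-stage network, but the core idea of removing spurious fragments from a predicted line mask can be illustrated with the classical largest-connected-component heuristic. This is a stand-in sketch, not the paper's method:

```python
from collections import deque

def largest_fragment(mask):
    """Keep only the largest 8-connected fragment of a binary mask,
    a minimal classical stand-in for multi-fragment removal."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill one fragment with BFS
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny in range(cy - 1, cy + 2):
                        for nx in range(cx - 1, cx + 2):
                            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```

A learned approach such as the MFCN can go further than this heuristic, e.g. reconnecting fragments that all belong to the true line rather than discarding them.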
Letting a deep network be aware of the quality of its own predictions is an interesting yet important problem. In the task of instance segmentation, the confidence of instance classification is used as mask quality score in most instance segmentation frameworks. However, the mask quality, quantified as the IoU between the instance mask and its ground truth, is usually not well correlated with classification score. In this paper, we study this problem and propose Mask Scoring R-CNN which contains a network block to learn the quality of the predicted instance masks. The proposed network block takes the instance feature and the corresponding predicted mask together to regress the mask IoU. The mask scoring strategy calibrates the misalignment between mask quality and mask score, and improves instance segmentation performance by prioritizing more accurate mask predictions during COCO AP evaluation. By extensive evaluations on the COCO dataset, Mask Scoring R-CNN brings consistent and noticeable gain with different models, and outperforms the state-of-the-art Mask R-CNN. We hope our simple and effective approach will provide a new direction for improving instance segmentation. The source code of our method is available at https://github.com/zjhuang22/maskscoring_rcnn. * The work was done when Zhaojin Huang was an intern in Horizon Robotics Inc.
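The regression target of the MaskIoU block, and the calibrated score it enables, can be sketched in a few lines. This is a simplified illustration (the real head regresses the IoU from features; the names below are ours):

```python
def mask_iou(pred, gt):
    """IoU between a binarized predicted mask and its ground truth --
    the quantity the MaskIoU head learns to regress."""
    inter = sum(p & g for rp, rg in zip(pred, gt) for p, g in zip(rp, rg))
    union = sum(p | g for rp, rg in zip(pred, gt) for p, g in zip(rp, rg))
    return inter / union if union else 0.0

def mask_score(cls_score, predicted_iou):
    """Calibrated mask score: classification confidence weighted by the
    predicted mask IoU, so accurate masks are prioritized."""
    return cls_score * predicted_iou
```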
Objective assessment of osteoarthritis (OA) on magnetic resonance imaging (MRI) scans can address the limitations of current OA assessment. Segmentation of bone, cartilage, and joint fluid is necessary for objective OA assessment. Most proposed segmentation methods do not perform instance segmentation and suffer from the class imbalance problem. This study deployed Mask R-CNN instance segmentation and improved it (improved Mask R-CNN, iMaskRCNN) to obtain a more accurate, generalized segmentation of OA-related tissues. Training and validation of the method were performed using 500 MRI knee scans from the Osteoarthritis Initiative (OAI) dataset and 97 MRI scans of patients with symptomatic hip OA. Three modifications of Mask R-CNN yielded iMaskRCNN: adding a second ROIAligned block, adding an extra decoder layer to the mask header, and connecting them by a skip connection. The results were assessed using Hausdorff distance, Dice score, and coefficient of variation (CoV). Compared with Mask R-CNN, iMaskRCNN improved bone and cartilage segmentation, with the Dice score increasing from 95% to 98% for the femur, 95% to 97% for the tibia, 71% to 80% for femoral cartilage, and 81% to 82% for tibial cartilage. For effusion detection, iMaskRCNN improved the Dice score to 72% versus 71% for Mask R-CNN. The CoV values for effusion detection between Reader1 and Mask R-CNN (0.33), Reader1 and iMaskRCNN (0.34), Reader2 and Mask R-CNN (0.22), and Reader2 and iMaskRCNN (0.29) were close to the CoV between the two readers, indicating high agreement between the human readers and both Mask R-CNN and iMaskRCNN. Mask R-CNN and iMaskRCNN can reliably and simultaneously extract OA-related joint tissues at different scales, forming a basis for the automated assessment of OA. The iMaskRCNN results show that the modifications improved network performance around the edges.
As a first-line diagnostic imaging modality, radiography plays an essential role in the early detection of developmental dysplasia of the hip (DDH). Clinically, the diagnosis of DDH relies on manual measurement and subjective evaluation of different anatomical features on pelvic radiographs. This process is inefficient and error-prone, and requires years of clinical experience. In this study, we propose a deep-learning-based system that automatically detects 14 keypoints from a radiograph, measures three anatomical angles (center-edge, Tönnis, and Sharp angles), and classifies DDH hips into grades I-IV. Moreover, a novel data-driven scoring system is proposed to quantitatively integrate the information for DDH diagnosis. The proposed keypoint detection model achieved mean (95% confidence interval [CI]) average precision values of 0.807 (0.804-0.810) and 0.953 (0.947-0.960), which were significantly higher than those of experienced orthopedists (p < 0.0001). Moreover, the mean (95% CI) test diagnostic agreement (Cohen's kappa) obtained with the proposed scoring system was 0.84 (0.83-0.85), which was significantly higher than that obtained from the diagnostic criteria with individual angles (0.76 [0.75-0.77]) and that of the orthopedists (0.71 [0.63-0.79]). To the best of our knowledge, this is the first study to perform objective DDH diagnosis by leveraging deep-learning keypoint detection and integrating different anatomical measurements, which can provide reliable and interpretable support for clinical decision-making.
This study proposes a deep-learning-based tracking method for ultrasound (US) image-guided radiation therapy. The proposed cascade deep learning model consists of an attention network, a mask region-based convolutional neural network (Mask R-CNN), and a long short-term memory (LSTM) network. The attention network learns a mapping from a US image to a suspected region of landmark motion in order to reduce the search region. Mask R-CNN then produces multiple region-of-interest (ROI) proposals in the reduced region and identifies the proposed landmark via three network heads: bounding-box regression, proposal classification, and landmark segmentation. The LSTM network models the temporal relationship among consecutive image frames for the bounding-box regression and proposal classification. To consolidate the final proposal, a selection method is designed based on the similarity between sequential frames. The proposed method was tested on the liver US tracking datasets of the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015 challenge, in which the landmarks were annotated by three experienced observers and their mean positions were taken as ground truth. On the 24 sequences for which we have ground truth, the mean tracking error for all landmarks was 0.65 +/- 0.56 mm, and the errors of all landmarks were within 2 mm. We further tested the proposed model on 69 landmarks from the testing dataset that have image patterns similar to those of the training data, resulting in a mean tracking error of 0.94 +/- 0.83 mm. Our experimental results demonstrate the feasibility and accuracy of our proposed method for tracking liver anatomical landmarks using US images, providing a potential solution for active motion management during radiation therapy.
Lung ultrasound (LUS) is possibly the only medical imaging modality available for continuous and periodic monitoring of the lung. It is extremely useful for tracking lung manifestations during the onset of lung infection, or for tracking the effect of vaccination on the lung, as in COVID-19. There have been many attempts to classify lung severity into various classes or to automatically segment various LUS landmarks and manifestations. However, all these approaches are based on training static machine learning models, which require large clinically annotated datasets, are computationally heavy, and are mostly non-real-time. In this work, a real-time lightweight active-learning-based approach is proposed for faster triaging of COVID-19 subjects in resource-constrained settings. The tool, based on the You Only Look Once (YOLO) network, provides the ability to identify various LUS landmarks, artefacts, and manifestations; predict the severity of lung infection; perform active learning based on clinicians' feedback or image quality; and summarize the important frames with high severity of infection for further analysis. The results show that the proposed tool has a mean average precision (mAP) of 66% at the intersection-over-union (IoU) threshold used for the prediction of LUS landmarks. The 14 MB lightweight YOLOv5s network achieves 123 FPS while running on a Quadro P4000 GPU. The tool is available for usage and analysis upon request from the authors.
Acne detection is crucial for interpretative diagnosis and precise treatment of skin diseases. The arbitrary boundaries and small sizes of acne lesions lead to a large number of poor-quality proposals in two-stage detection. In this paper, we propose a novel head structure for the region proposal network that improves proposal quality in two ways. First, a Spatial Aware Double Head (SADH) structure is proposed to disentangle the representation learning for classification and localization from two different spatial perspectives. The proposed SADH ensures a steeper classification-confidence gradient and suppresses proposals with low intersection-over-union (IoU) with the matched ground truth. Then, we propose a normalized Wasserstein distance prediction branch to improve the correlation between the proposal classification scores and the IoUs. In addition, to facilitate further research on acne detection, we construct a new dataset named AcneSCU, with high-resolution imaging, precise annotations, and fine-grained lesion categories. Extensive experiments were conducted on AcneSCU and the public dataset ACNE04, and the results demonstrate that the proposed method can improve proposal quality and consistently outperform state-of-the-art approaches. The code and the collected dataset are available at https://github.com/pingguokiller/acnedetection.
X-ray imaging technology has been used for decades in clinical tasks to reveal the internal condition of different organs, and in recent years, it has become more common in other areas such as industry, security, and geography. The recent development of computer vision and machine learning techniques has also made it easier to automatically process X-ray images and several machine learning-based object (anomaly) detection, classification, and segmentation methods have been recently employed in X-ray image analysis. Due to the high potential of deep learning in related image processing applications, it has been used in most of the studies. This survey reviews the recent research on using computer vision and machine learning for X-ray analysis in industrial production and security applications and covers the applications, techniques, evaluation metrics, datasets, and performance comparison of those techniques on publicly available datasets. We also highlight some drawbacks in the published research and give recommendations for future research in computer vision-based X-ray analysis.
We present a new, embarrassingly simple approach to instance segmentation. Compared to many other dense prediction tasks, e.g., semantic segmentation, it is the arbitrary number of instances that have made instance segmentation much more challenging. In order to predict a mask for each instance, mainstream approaches either follow the "detect-then-segment" strategy (e.g., Mask R-CNN), or predict embedding vectors first then use clustering techniques to group pixels into individual instances. We view the task of instance segmentation from a completely new perspective by introducing the notion of "instance categories", which assigns categories to each pixel within an instance according to the instance's location and size, thus nicely converting instance segmentation into a single-shot classification-solvable problem. We demonstrate a much simpler and flexible instance segmentation framework with strong performance, achieving on par accuracy with Mask R-CNN and outperforming recent single-shot instance segmenters in accuracy. We hope that this simple and strong framework can serve as a baseline for many instance-level recognition tasks besides instance segmentation. Code is available at https://git.io/AdelaiDet
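The "instance category" idea above assigns each instance to a location-indexed class: the image is divided into an S x S grid, and an instance's center cell determines its category. A minimal sketch of that assignment (the grid size and function name are our illustrative choices):

```python
def instance_category(center_xy, img_wh, S=5):
    """Map an instance's center point to one of S*S location categories,
    in the spirit of SOLO-style instance-category assignment."""
    x, y = center_xy
    w, h = img_wh
    col = min(int(x / w * S), S - 1)  # clamp so x == w stays in the grid
    row = min(int(y / h * S), S - 1)
    return row * S + col
```

With the location folded into the class label, predicting an instance mask becomes a per-category classification problem rather than a detect-then-segment pipeline.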
In object detection, the intersection over union (IoU) threshold is frequently used to define positives/negatives. The threshold used to train a detector defines its quality. While the commonly used threshold of 0.5 leads to noisy (low-quality) detections, detection performance frequently degrades for larger thresholds. This paradox of high-quality detection has two causes: 1) overfitting, due to vanishing positive samples for large thresholds, and 2) inference-time quality mismatch between detector and test hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, composed of a sequence of detectors trained with increasing IoU thresholds, is proposed to address these problems. The detectors are trained sequentially, using the output of a detector as training set for the next. This resampling progressively improves hypotheses quality, guaranteeing a positive training set of equivalent size for all detectors and minimizing overfitting. The same cascade is applied at inference, to eliminate quality mismatches between hypotheses and detectors. An implementation of the Cascade R-CNN without bells or whistles achieves state-of-the-art performance on the COCO dataset, and significantly improves high-quality detection on generic and specific object detection datasets, including VOC, KITTI, CityPerson, and WiderFace. Finally, the Cascade R-CNN is generalized to instance segmentation, with nontrivial improvements over the Mask R-CNN. To facilitate future research, two implementations are made available at https://github.com/zhaoweicai/cascade-rcnn (Caffe) and https://github.com/zhaoweicai/Detectron-Cascade-RCNN (Detectron).
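The cascade's core trick, stage-wise positive/negative assignment under increasing IoU thresholds, can be sketched as follows. Note the real Cascade R-CNN also regresses the boxes at each stage so the IoU distribution improves before the next threshold is applied; here the IoUs are held fixed purely for illustration:

```python
def cascade_positive_sets(proposal_ious, thresholds=(0.5, 0.6, 0.7)):
    """For each cascade stage, mark proposals positive when their IoU with
    the ground truth meets that stage's (increasing) threshold. A sketch:
    stage-wise box regression, which raises the IoUs between stages in the
    real detector, is intentionally omitted."""
    return [[iou >= t for iou in proposal_ious] for t in thresholds]
```

Because each stage retrains on the (resampled, progressively better) output of the previous one, every stage sees a positive set of adequate size even at high thresholds, which is what avoids the overfitting described above.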
We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, topdown figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16% relative) over our baselines on SDS, a 5 point boost (10% relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.
Automatic nuclei segmentation and classification play a vital role in digital pathology. However, previous works were mostly built on data with limited diversity and small sizes, making the results questionable or misleading in real downstream tasks. In this paper, we aim to build a reliable and robust method capable of dealing with data from "the clinical wild". Specifically, we study and design a new method for simultaneously detecting, segmenting, and classifying nuclei from hematoxylin and eosin (H&E) stained histopathology data, and evaluate our approach using the recent largest dataset: PanNuke. We formulate the detection and classification of each nucleus as a novel semantic keypoint estimation problem to determine the center point of each nucleus. Next, the corresponding class-agnostic masks for the nuclei center points are obtained using dynamic instance segmentation. By decoupling two simultaneously challenging tasks, our method can benefit from class-aware detection and class-agnostic segmentation, leading to a significant performance boost. We demonstrate the superior performance of our proposed method for nuclei segmentation and classification across 19 different tissue types, providing new benchmark results.
Accurate and automatic segmentation of three-dimensional (3D) individual teeth from cone-beam computed tomography (CBCT) images is a challenging problem because of the difficulty in separating an individual tooth from adjacent teeth and its surrounding alveolar bone. Thus, this paper proposes a fully automated method for identifying and segmenting 3D individual teeth from dental CBCT images. The proposed method addresses the aforementioned difficulty by developing a deep-learning-based hierarchical multi-step model. First, it automatically generates upper and lower jaw panoramic images to overcome the computational complexity caused by high-dimensional data and the curse of dimensionality associated with limited training datasets. The obtained 2D panoramic images are then used to identify 2D individual teeth and to capture loose and tight regions of interest (ROIs) of 3D individual teeth. Finally, accurate 3D individual tooth segmentation is achieved using both loose and tight ROIs. Experimental results showed an F1-score of 93.35% for tooth identification and a Dice similarity coefficient of 94.79% for individual 3D tooth segmentation. The results demonstrate that the proposed method provides an effective clinical and practical framework for digital dentistry.
To analyze this characteristic of vulnerability, we developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was done using a dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r,{\theta}) domain was applied to raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing methods included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box on each candidate was classified as either microvessel or non-microvessel using a shallow convolutional neural network. For better classification, we used data augmentation (i.e., angle rotation) on bounding boxes with a microvessel during network training. Data augmentation and pre-processing steps improved microvessel segmentation performance significantly, yielding a method with Dice of 0.71+/-0.10 and pixel-wise sensitivity/specificity of 87.7+/-6.6%/99.8+/-0.1%. The network for classifying microvessels from candidates performed exceptionally well, with sensitivity of 99.5+/-0.3%, specificity of 98.8+/-1.0%, and accuracy of 99.1+/-0.5%. The classification step eliminated the majority of residual false positives, and the Dice coefficient increased from 0.71 to 0.73. In addition, our method produced 698 image frames with microvessels present, compared to 730 from manual analysis, representing a 4.4% difference. When compared to the manual method, the automated method improved microvessel continuity, implying improved segmentation performance. The method will be useful for research purposes as well as potential future treatment planning.
Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This makes properly localized bounding boxes degenerate during iterative regression or even suppressed during NMS. In the paper we propose IoU-Net learning to predict the IoU between each detected bounding box and the matched ground-truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, where the predicted IoU is formulated as the objective. Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors.
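The localization confidence IoU-Net learns can replace the classification score as the NMS ranking key, so that well-localized boxes survive suppression. Below is a simplified sketch (the function names are ours, and the paper's score propagation between clustered boxes is omitted):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def iou_guided_nms(boxes, loc_confidences, thresh=0.5):
    """Greedy NMS ranked by *predicted localization IoU* instead of the
    classification score, in the spirit of IoU-guided NMS."""
    order = sorted(range(len(boxes)), key=lambda i: loc_confidences[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```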
Computer-aided diagnosis of focal liver lesions (FLLs) can help improve workflow and enable correct diagnoses; FLL detection is the first step of such computer-aided diagnosis. Despite the recent success of deep-learning-based approaches in detecting FLLs, current methods are not sufficiently robust for assessing misaligned multiphase data. By introducing an attention-guided multiphase alignment in feature space, this study presents a fully automated, end-to-end learning framework for detecting FLLs from multiphase computed tomography (CT) images. Thanks to its fully learning-based approach, our method is robust to misaligned multiphase images, which reduces the sensitivity of the model to registration quality and enables the model to be deployed independently in clinical practice. Evaluation on a large dataset of 280 patients confirmed that our method outperforms previous state-of-the-art methods and significantly reduces the performance degradation when detecting FLLs from misaligned multiphase CT images. The robustness of the proposed method can enhance the clinical adoption of deep-learning-based computer-aided detection systems.
Cervical glandular cell (GC) detection is a key step in computer-aided diagnosis for cervical adenocarcinoma screening. Accurately recognizing GCs in cervical smears is challenging because squamous cells are the dominant cell type. The widespread presence of out-of-distribution (OOD) data in whole smears reduces the reliability of machine learning systems for GC detection. Although state-of-the-art (SOTA) deep learning models can outperform pathologists on preselected regions of interest, massive false-positive (FP) predictions remain unsolved when facing such gigapixel whole-slide images. This paper proposes a novel PolarNet based on the morphological prior knowledge of GCs, attempting to solve the FP problem via a self-attention mechanism over the eight-neighborhood; it estimates the polar orientation of GC nuclei. As a plugin module, PolarNet can guide the deep features and predicted confidences of general object detection models. In the experiments, we found that general models based on four different frameworks could reject FPs on small image sets and increase the mean average precision (mAP) by 0.007 to 0.015 on average, with the highest exceeding the recent cervical cell detection model by 0.037. By plugging in PolarNet, the deployed C++ program improved the top-20 GC detection accuracy on external WSIs by 8.8%, at the cost of 14.4 s of additional computation time. The code is available at https://github.com/chrisa142857/polarnet-gcdet
Mask R-CNN
We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without tricks, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code will be made available.
The cup-to-disc ratio (CDR) is one of the most significant indicators for glaucoma diagnosis. Different from the use of costly fully supervised learning formulations with pixel-wise annotations in the literature, this study investigates the feasibility of accurate CDR measurement in fundus images using only tight bounding box supervision. For this purpose, we develop a two-task network named CDRNet for accurate CDR measurement, one task for weakly supervised image segmentation, and the other for bounding-box regression. The weakly supervised image segmentation task is implemented based on a generalized multiple instance learning formulation and smooth maximum approximation, and the bounding-box regression task outputs class-specific bounding box predictions in a single scale at the original image resolution. To get accurate bounding box predictions, a class-specific bounding-box normalizer and an expected intersection-over-union are proposed. In the experiments, the proposed approach was evaluated on a testing set of 1200 images using CDR error and $F_1$ score for CDR measurement and the dice coefficient for image segmentation. A grader study was conducted to compare the performance of the proposed approach with those of individual graders. The experimental results indicate that the proposed approach outperforms the state-of-the-art performance obtained from the fully supervised image segmentation (FSIS) approach using pixel-wise annotation for CDR measurement. Its performance is also better than those of the individual graders. In addition, the proposed approach achieves performance close to the state-of-the-art obtained from FSIS and to the performance of individual graders for optic cup and disc segmentation. The codes are available at \url{https://github.com/wangjuan313/CDRNet}.
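Given the class-specific cup and disc bounding boxes that such a network outputs, the vertical CDR reduces to a ratio of box heights. A minimal sketch of that final computation, assuming (x1, y1, x2, y2) boxes and that the vertical CDR is taken as the height ratio (the function name and coordinates are illustrative):

```python
def vertical_cdr(cup_box, disc_box):
    """Vertical cup-to-disc ratio from two (x1, y1, x2, y2) bounding
    boxes: the ratio of the cup's height to the disc's height."""
    cup_h = cup_box[3] - cup_box[1]
    disc_h = disc_box[3] - disc_box[1]
    return cup_h / disc_h
```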