Lung nodule detection in chest X-ray (CXR) images is key to the early screening of lung cancer. Deep-learning-based computer-aided diagnosis (CAD) systems can support radiologists in nodule screening on CXR, but training such a robust and accurate CAD system requires large-scale, diverse medical data with high-quality annotations. To mitigate the limited availability of such datasets, lung nodule synthesis methods have been proposed for data augmentation. However, previous methods lack the ability to generate nodules that are correlated with the size attribute that detectors rely on. To address this, we introduce a novel lung nodule synthesis framework that decomposes nodule attributes into three main aspects: shape, size, and texture. A GAN-based shape generator first models nodule shapes by producing diverse shape masks. A subsequent size modulation then enables quantitative control over the diameter of the generated nodule shape at pixel-level granularity. Finally, a coarse-to-fine gated convolutional texture generator synthesizes visually plausible nodule textures conditioned on the modulated shape masks. Moreover, we propose to synthesize nodule CXR images by controlling the disentangled nodule attributes for data augmentation, so as to better compensate for the nodules that are easily missed in the detection task. Our experiments demonstrate the improved image quality, diversity, and controllability of the proposed lung nodule synthesis framework, and validate the effectiveness of the resulting data augmentation in substantially improving nodule detection performance.
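As a rough illustration of the size-modulation step, the sketch below rescales a generated binary shape mask so that its longest side matches a target diameter in pixels. The function name and the resize-then-rethreshold recipe are our own assumptions; the abstract does not spell out how the diameter control is implemented.

```python
import torch
import torch.nn.functional as F

def modulate_size(shape_mask: torch.Tensor, target_diameter: int) -> torch.Tensor:
    """Rescale a binary nodule shape mask (H, W) so that the longest side of
    its bounding box equals target_diameter pixels. Assumes a non-empty mask."""
    ys, xs = torch.nonzero(shape_mask, as_tuple=True)
    h = int(ys.max() - ys.min() + 1)
    w = int(xs.max() - xs.min() + 1)
    crop = shape_mask[ys.min():ys.min() + h, xs.min():xs.min() + w]
    scale = target_diameter / max(h, w)
    out_h, out_w = max(1, round(h * scale)), max(1, round(w * scale))
    resized = F.interpolate(crop[None, None].float(), size=(out_h, out_w),
                            mode="bilinear", align_corners=False)
    return (resized[0, 0] > 0.5).float()  # re-binarize after interpolation
```

The modulated mask would then condition the texture generator, which fills in a plausible nodule appearance at the controlled size.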
Knee osteoarthritis (OA) is the most common form of osteoarthritis and a leading cause of disability. Cartilage defects are regarded as a major manifestation of knee OA and are visible on magnetic resonance imaging (MRI), so early detection and assessment of knee cartilage defects are important for protecting patients from knee OA. Accordingly, many attempts have been made to assess knee cartilage defects by applying convolutional neural networks (CNNs) to knee MRI. However, the physiological characteristics of cartilage may hinder such efforts: cartilage is a thin, curved layer, meaning that only a small portion of the voxels in a knee MRI can contribute to the assessment; heterogeneous scanning protocols further challenge the feasibility of CNNs in clinical practice; and CNN-based assessment results lack interpretability. To address these challenges, we model the structure and appearance of the cartilage in knee MRI as a graph representation, which can handle highly diverse clinical data. Guided by this cartilage graph representation, we then design a non-Euclidean deep learning network with a self-attention mechanism that extracts cartilage features both locally and globally and derives the final assessment together with a visualized result. Our comprehensive experiments show that the proposed method yields superior performance in knee cartilage defect assessment, along with convenient interpretability through 3D visualization.
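The abstract leaves the network details unspecified, but the self-attention ingredient could be sketched as attention over graph nodes restricted to the cartilage graph's edges (the "local" view; a global view would simply drop the adjacency mask). All names and shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphSelfAttention(nn.Module):
    """Self-attention over graph nodes, restricted to graph edges.

    x:   (N, d) node features, e.g., one feature vector per cartilage patch.
    adj: (N, N) binary adjacency of the cartilage graph; assumed to include
         self-loops so every node attends to at least itself.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        scores = (self.q(x) @ self.k(x).T) * self.scale       # (N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))  # keep only edges
        return scores.softmax(dim=-1) @ self.v(x)             # aggregate neighbors
```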
We consider the problem of abnormality localization for clinical applications. While deep learning has driven much of the recent progress in medical imaging, many clinical challenges remain unsolved, limiting its broader use. Although recent methods report high diagnostic accuracy, physicians are reluctant to involve these algorithms in diagnostic decisions because they generally lack decision reasoning and interpretability. One potential way to address this is to further train such models to localize abnormalities in addition to classifying them. Doing this accurately, however, requires a large number of disease localization annotations from clinical experts, which is prohibitively expensive for most applications. In this work, we address these issues with a novel attention-driven, weakly supervised algorithm comprising a hierarchical attention mining framework that unifies activation-based and gradient-based visual attention in a holistic manner. Our key algorithmic innovation is the design of explicit ordinal attention constraints, which enable principled model training in a weakly supervised fashion while also facilitating visual-attention-driven model explanations through localization cues. On two large-scale chest X-ray datasets (NIH ChestX-ray14 and CheXpert), we demonstrate significant localization performance gains over the current state of the art while also achieving competitive classification performance. Our code is available at https://github.com/oyxhust/ham.
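One named ingredient is gradient-based visual attention. As a hedged sketch of that standard component only, not of the full hierarchical attention mining framework or its ordinal constraints, a Grad-CAM-style map can be derived as follows:

```python
import torch
import torch.nn.functional as F

def grad_cam(feature_map: torch.Tensor, logit: torch.Tensor) -> torch.Tensor:
    """Gradient-based attention map for one class score.

    feature_map: (C, H, W) conv activations captured with a forward hook
                 (must require grad and lie on the path to `logit`).
    logit:       scalar class score from the same forward pass.
    """
    grads = torch.autograd.grad(logit, feature_map, retain_graph=True)[0]
    weights = grads.mean(dim=(1, 2))                             # pooled gradients, (C,)
    cam = F.relu((weights[:, None, None] * feature_map).sum(0))  # (H, W)
    return cam / (cam.max() + 1e-8)                              # normalize to [0, 1]
```

Upsampled to the input resolution, such a map provides the kind of localization cues the abstract refers to.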
Lesion detection is a fundamental problem in computer-aided diagnosis schemes for mammography. Advances in deep learning have brought significant progress on this task, provided that the training data are sufficiently diverse in image style and quality. In particular, the diversity of image style is mainly attributable to the vendor factor. However, collecting mammograms from as many vendors as possible is very expensive and sometimes impractical for laboratory-scale studies. Therefore, to extend the generalization capability of deep learning models to various vendors with limited resources, a new contrastive learning scheme is developed. Specifically, the backbone network is first trained with a multi-style, multi-view unsupervised self-learning scheme to embed features that are invariant across vendor styles. The backbone network is then recalibrated with supervised learning specific to the downstream task of lesion detection. The proposed method is evaluated on mammograms from four vendors and one unseen public dataset. The experimental results show that our method effectively improves detection performance on both seen and unseen domains and outperforms many state-of-the-art (SOTA) generalization methods.
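The abstract does not name the contrastive objective; a common choice for this kind of multi-view self-learning is a SimCLR-style NT-Xent loss, sketched below under the assumption that z1[i] and z2[i] embed two differently styled views of the same mammogram.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """NT-Xent loss; z1, z2 are (N, d) embeddings of paired views."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)     # (2N, d)
    sim = z @ z.T / tau                             # cosine similarities
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool,
                                    device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)            # pull matched views together
```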
Most existing text-video retrieval methods focus on cross-modal matching between the visual content of offline videos and textual query sentences. However, in real scenarios, online videos are frequently accompanied by relevant text information such as titles, tags, and even subtitles, which can be utilized to match textual queries. This inspires us to generate associated captions from offline videos to help with existing text-video retrieval methods. To do so, we propose to use the zero-shot video captioner with knowledge of pre-trained web-scale models (e.g., CLIP and GPT-2) to generate captions for offline videos without any training. Given the captions, one question naturally arises: what can auxiliary captions do for text-video retrieval? In this paper, we present a novel framework Cap4Video, which makes use of captions from three aspects: i) Input data: The video and captions can form new video-caption pairs as data augmentation for training. ii) Feature interaction: We perform feature interaction between video and caption to yield enhanced video representations. iii) Output score: The Query-Caption matching branch can be complementary to the original Query-Video matching branch for text-video retrieval. We conduct thorough ablation studies to demonstrate the effectiveness of our method. Without any post-processing, our Cap4Video achieves state-of-the-art performance on MSR-VTT (51.4%), VATEX (66.6%), MSVD (51.8%), and DiDeMo (52.0%).
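As a minimal sketch of aspect iii), the two matching branches can be fused at the score level. The helper below and its mixing weight alpha are hypothetical; the abstract does not state the exact fusion rule.

```python
import torch

def fuse_scores(sim_qv: torch.Tensor, sim_qc: torch.Tensor,
                alpha: float = 0.5) -> torch.Tensor:
    """Late fusion of the Query-Video and Query-Caption branches.

    sim_qv: (num_queries, num_videos) query-video similarities.
    sim_qc: (num_queries, num_videos) query-caption similarities, with each
            video represented by its generated caption(s).
    """
    return alpha * sim_qv + (1 - alpha) * sim_qc  # rank videos by the fused score
```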
Vision-language models (VLMs) that are pre-trained on large-scale image-text pairs have demonstrated impressive transferability on a wide range of visual tasks. Transferring knowledge from such powerful pre-trained VLMs is emerging as a promising direction for building effective video recognition models. However, the current exploration is still limited. In our opinion, the greatest charm of pre-trained vision-language models is to build a bridge between visual and textual domains. In this paper, we present a novel framework called BIKE which utilizes the cross-modal bridge to explore bidirectional knowledge: i) We propose a Video Attribute Association mechanism which leverages the Video-to-Text knowledge to generate textual auxiliary attributes to complement video recognition. ii) We also present a Temporal Concept Spotting mechanism which uses the Text-to-Video expertise to capture temporal saliency in a parameter-free manner to yield enhanced video representation. The extensive studies on popular video datasets (i.e., Kinetics-400 & 600, UCF-101, HMDB-51, and ActivityNet) show that our method achieves state-of-the-art performance in most recognition scenarios, e.g., general, zero-shot, and few-shot video recognition. To the best of our knowledge, our best model achieves a state-of-the-art accuracy of 88.4% on the challenging Kinetics-400 with the released CLIP pre-trained model.
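One plausible, parameter-free reading of Temporal Concept Spotting is to weight each frame by its similarity to the text embedding and pool accordingly; the sketch below is our assumed instantiation, not the paper's verified implementation.

```python
import torch
import torch.nn.functional as F

def temporal_saliency_pool(frame_feats: torch.Tensor, text_feat: torch.Tensor,
                           tau: float = 0.07) -> torch.Tensor:
    """Text-guided temporal pooling with no learned parameters.

    frame_feats: (T, d) per-frame embeddings, e.g., from a CLIP visual encoder.
    text_feat:   (d,) embedding of the category name or description.
    """
    f = F.normalize(frame_feats, dim=1)
    t = F.normalize(text_feat, dim=0)
    weights = ((f @ t) / tau).softmax(dim=0)        # (T,) temporal saliency
    return (weights[:, None] * frame_feats).sum(0)  # saliency-weighted video feature
```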
We propose a novel approach to self-supervised learning of point cloud representations by differentiable neural rendering. Motivated by the fact that informative point cloud features should be able to encode rich geometry and appearance cues and render realistic images, we train a point-cloud encoder within a devised point-based neural renderer by comparing the rendered images with real images on massive RGB-D data. The learned point-cloud encoder can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image synthesis. Extensive experiments on various tasks demonstrate the superiority of our approach compared to existing pre-training methods.
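A hedged sketch of one pre-training step under this objective might look as follows; encoder, renderer, and all tensor interfaces are placeholders, since the abstract does not specify the point-based renderer's design.

```python
import torch.nn.functional as F

def pretrain_step(encoder, renderer, points, colors, camera, real_image, optimizer):
    """One self-supervised step: encode the point cloud, render it from the
    RGB-D frame's camera pose, and match the rendering to the real image.
    All callables and shapes here are assumptions for illustration."""
    feats = encoder(points, colors)             # per-point features
    rendered = renderer(points, feats, camera)  # differentiable rendering
    loss = F.l1_loss(rendered, real_image)      # photometric reconstruction
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```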
Automatic image colorization is a particularly challenging problem. Due to the severe ill-posedness of the problem and its multi-modal uncertainty, directly training a deep neural network usually leads to incorrect semantic colors and low color richness. Existing transformer-based methods can deliver better results but highly depend on hand-crafted dataset-level empirical distribution priors. In this work, we propose DDColor, a new end-to-end method with dual decoders, for image colorization. More specifically, we design a multi-scale image decoder and a transformer-based color decoder. The former manages to restore the spatial resolution of the image, while the latter establishes the correlation between semantic representations and color queries via cross-attention. The two decoders are combined to learn semantic-aware color embedding by leveraging the multi-scale visual features. With the help of these two decoders, our method succeeds in producing semantically consistent and visually plausible colorization results without any additional priors. In addition, a simple but effective colorfulness loss is introduced to further improve the color richness of generated results. Our extensive experiments demonstrate that the proposed DDColor achieves significantly superior performance to existing state-of-the-art works both quantitatively and qualitatively. Codes will be made publicly available.
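The color decoder's core step, cross-attention between learnable color queries and visual features, could be instantiated minimally as below; the layer name, query count, and single-layer structure are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class ColorQueryCrossAttention(nn.Module):
    """Learnable color queries attend to flattened multi-scale visual tokens."""
    def __init__(self, dim: int, num_queries: int = 100, heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, H*W, dim) semantic features from the image decoder
        q = self.queries.expand(visual_tokens.size(0), -1, -1)  # (B, Q, dim)
        out, _ = self.attn(q, visual_tokens, visual_tokens)     # gather color evidence
        return out                                              # refined color queries
```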
Masked Modeling (MM) has demonstrated widespread success in various vision challenges, by reconstructing masked visual patches. Yet, applying MM to large-scale 3D scenes remains an open problem due to data sparsity and scene complexity. The conventional random masking paradigm used in 2D images often causes a high risk of ambiguity when recovering the masked region of 3D scenes. To this end, we propose a novel informative-preserved reconstruction, which explores local statistics to discover and preserve representative structured points, effectively enhancing the pretext masking task for 3D scene understanding. Combined with a progressive reconstruction scheme, our method can concentrate on modeling regional geometry and suffers less ambiguity during masked reconstruction. Besides, scenes with progressive masking ratios can also serve to self-distill their intrinsic spatial consistency, requiring the model to learn consistent representations from unmasked areas. By elegantly combining informative-preserved reconstruction on masked areas with consistency self-distillation from unmasked areas, a unified framework called MM-3DScene is yielded. We conduct comprehensive experiments on a host of downstream tasks. The consistent improvement (e.g., +6.1 mAP@0.5 on object detection and +2.2% mIoU on semantic segmentation) demonstrates the superiority of our approach.
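The abstract says local statistics are used to discover and preserve representative structured points. One guessed instantiation, sketched below, scores each point by the spread of its k-nearest-neighbor distances and masks only the low-scoring (less structured) points; the actual statistic used by MM-3DScene may differ.

```python
import torch

def informative_preserving_mask(points: torch.Tensor, mask_ratio: float = 0.6,
                                k: int = 16) -> torch.Tensor:
    """Return a boolean (N,) mask (True = masked) that spares structural points.

    points: (N, 3) scene coordinates; assumes N > k.
    """
    d = torch.cdist(points, points)                   # (N, N) pairwise distances
    knn = d.topk(k + 1, largest=False).values[:, 1:]  # k nearest, excluding self
    score = knn.std(dim=1)                            # local geometric variation
    n_mask = int(mask_ratio * points.size(0))
    masked = torch.zeros(points.size(0), dtype=torch.bool)
    masked[score.argsort()[:n_mask]] = True           # mask least informative points
    return masked
```

Raising mask_ratio progressively, as the abstract describes, would then expose the model to increasingly harder reconstruction targets while the unmasked regions drive the consistency self-distillation.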
End-to-end Speech Translation (E2E ST) aims to translate source speech directly into text in the target language without generating an intermediate transcript. However, existing approaches to E2E ST degrade considerably when only limited ST data are available. We observe that an ST model's performance strongly correlates with the similarity between its speech and transcript embeddings. In this paper, we propose Word-Aligned COntrastive learning (WACO), a novel method for few-shot speech-to-text translation. Our key idea is to bridge the word-level representations of both modalities via contrastive learning. We evaluate WACO and other methods on the MuST-C dataset, a widely used ST benchmark. Our experiments demonstrate that WACO outperforms the best baseline methods by 0.7-8.5 BLEU points with only 1 hour of parallel data. Code is available at https://anonymous.4open.science/r/WACO.
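Bridging word-level representations via contrastive learning could be realized with a symmetric InfoNCE loss over aligned word pairs, as in the hedged sketch below; pooling speech frames into word-level segments is assumed to happen upstream.

```python
import torch
import torch.nn.functional as F

def word_aligned_contrastive(speech_word: torch.Tensor, text_word: torch.Tensor,
                             tau: float = 0.05) -> torch.Tensor:
    """Symmetric InfoNCE over aligned word-level features.

    speech_word: (N, d) pooled speech features, one row per aligned word.
    text_word:   (N, d) embeddings of the corresponding transcript words.
    Matched pairs are positives; other words in the batch are negatives.
    """
    s = F.normalize(speech_word, dim=1)
    t = F.normalize(text_word, dim=1)
    logits = s @ t.T / tau                          # (N, N) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)
    return 0.5 * (F.cross_entropy(logits, labels)   # speech-to-text direction
                  + F.cross_entropy(logits.T, labels))  # text-to-speech direction
```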