Brain tumor segmentation (BTS) in magnetic resonance images (MRI) is crucial for brain tumor diagnosis, cancer management, and research purposes. With the great success of the decade-long BraTS challenges and the advances of CNN and Transformer algorithms, many outstanding BTS models have been proposed to tackle the difficulties of BTS from different technical aspects. However, existing studies hardly consider how to fuse the multi-modality images in a reasonable way. In this paper, we leverage the clinical knowledge of how radiologists diagnose brain tumors from multiple MRI modalities and propose a clinical knowledge-driven brain tumor segmentation model, called CKD-TransBTS. Instead of directly concatenating all the modalities, we re-organize the input modalities by dividing them into two groups according to the imaging principles of MRI. A dual-branch hybrid encoder with the proposed modality-correlated cross-attention block (MCCA) is designed to extract the multi-modality image features. The proposed model inherits the strengths of both Transformer and CNN: the local feature representation ability for precise lesion boundaries and the long-range feature extraction for 3D volumetric images. To bridge the gap between Transformer and CNN features, we propose a Trans&CNN Feature Calibration block (TCFC) in the decoder. We compare the proposed model with five CNN-based models and six Transformer-based models on the BraTS 2021 challenge dataset. Extensive experiments demonstrate that the proposed model achieves state-of-the-art brain tumor segmentation performance compared with all the competitors.
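To make the fusion idea concrete, below is a minimal sketch of a cross-attention block between two MRI modality groups, in the spirit of the MCCA described above. All names, dimensions, and the grouping example are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    """Minimal cross-attention between two MRI modality groups.

    Features from group A (e.g., T1/T1ce) attend to features from
    group B (e.g., T2/FLAIR). A hedged sketch of the idea behind a
    modality-correlated cross-attention block, not the paper's MCCA.
    """
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (batch, tokens, dim) flattened volumetric patch features
        fused, _ = self.attn(query=feat_a, key=feat_b, value=feat_b)
        return self.norm(feat_a + fused)  # residual connection

# Toy usage: 8 patch tokens per branch, 32-dim features
block = CrossModalityAttention(dim=32)
a, b = torch.randn(1, 8, 32), torch.randn(1, 8, 32)
print(block(a, b).shape)  # torch.Size([1, 8, 32])
```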
Ultrasonography is an important routine examination for breast cancer diagnosis, thanks to its non-invasive, radiation-free, and low-cost properties. However, the diagnostic accuracy of breast cancer remains limited due to its inherent limitations. It would be a tremendous success if we could precisely diagnose breast cancer from breast ultrasound (BUS) images. Many learning-based computer-aided diagnosis methods have been proposed to achieve breast cancer diagnosis/lesion classification, but most of them require a pre-defined ROI and then classify the lesion inside the ROI. Conventional classification backbones, such as VGG16 and ResNet50, can achieve promising classification results without the ROI requirement, but these models lack interpretability, which restricts their use in clinical practice. In this study, we propose a novel ROI-free model for breast cancer diagnosis in ultrasound images with interpretable feature representations. We leverage the anatomical prior knowledge that malignant and benign tumors have different spatial relationships between different tissue layers, and propose a HoVer-Transformer to formulate this prior knowledge. The proposed HoVer-Trans block extracts the inter- and intra-layer spatial information horizontally and vertically. We conduct and release an open dataset, GDPH&SYSUCC, for breast cancer diagnosis in BUS. The proposed model is evaluated by five-fold cross-validation in comparison with four CNN-based models and two Vision Transformer models. It achieves state-of-the-art classification performance with the best model interpretability. Meanwhile, our proposed model outperforms two senior sonographers in breast cancer diagnosis when only one BUS image is given.
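As an illustration of extracting spatial information "horizontally and vertically", here is a minimal axial (row/column) self-attention sketch. It conveys the rough flavor of attending along image rows and columns; it is a hypothetical example, not the HoVer-Trans block itself.

```python
import torch
import torch.nn as nn

class HorizontalVerticalAttention(nn.Module):
    """Axial self-attention applied along rows, then columns, of a feature
    map. A generic sketch of horizontal/vertical spatial modeling, not the
    paper's HoVer-Trans implementation."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, height, width, dim) feature map
        b, h, w, d = x.shape
        rows = x.reshape(b * h, w, d)                       # attend along width
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, d)
        cols = x.permute(0, 2, 1, 3).reshape(b * w, h, d)   # attend along height
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, d).permute(0, 2, 1, 3)

x = torch.randn(2, 16, 16, 32)
print(HorizontalVerticalAttention(32)(x).shape)  # torch.Size([2, 16, 16, 32])
```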
Fault diagnosis is critical in many domains, as faults may lead to safety threats or economic losses. In the field of online service systems, operators rely on enormous monitoring data to detect and mitigate failures. Quickly recognizing a small set of root-cause indicators for the underlying fault can save much time for failure mitigation. In this paper, we formulate the root-cause analysis problem as a new causal inference task named intervention recognition. We propose a novel unsupervised causal-inference-based method named CIRCA (Causal Inference-based Root Cause Analysis). The core idea is a sufficient condition for a monitoring variable to be a root-cause indicator, i.e., the change of the probability distribution conditioned on its parents in a Causal Bayesian Network (CBN). Towards the application in online service systems, CIRCA constructs a graph among monitoring metrics based on the knowledge of the system architecture and a set of causal assumptions. The simulation study illustrates the theoretical reliability of CIRCA. The performance on a real-world dataset further shows that CIRCA can improve the recall of the top-1 recommendation by 25% over the best baseline method.
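One simple way to operationalize "the conditional distribution given the parents changed" is to fit a regression of a metric on its CBN parents over a pre-fault window and score how anomalous the fault-window residuals are. The sketch below illustrates that idea under these assumptions; it is a simplification, not CIRCA itself.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def root_cause_score(parents_ref, v_ref, parents_fault, v_fault):
    """Score how much P(v | parents) changed after a fault.

    Fits v ~ parents on the reference (pre-fault) window, then measures
    the standardized residual on the fault window. A large score means v
    deviates from what its parents predict, making it a root-cause
    candidate. A simplified sketch of the idea, not the CIRCA algorithm.
    """
    model = LinearRegression().fit(parents_ref, v_ref)
    sigma = (v_ref - model.predict(parents_ref)).std() + 1e-9
    resid_fault = v_fault - model.predict(parents_fault)
    return np.abs(resid_fault).mean() / sigma

# Toy data: v depends linearly on two parent metrics; after the fault,
# v shifts in a way its parents cannot explain (an intervention on v).
rng = np.random.default_rng(0)
p_ref = rng.normal(size=(200, 2))
v_ref = p_ref @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=200)
p_fault = rng.normal(size=(20, 2))
v_fault = p_fault @ np.array([1.0, -2.0]) + 5.0  # unexplained shift
print(round(float(root_cause_score(p_ref, v_ref, p_fault, v_fault)), 1))
```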
With the growing popularity of artificial intelligence and machine learning, a wide spectrum of attacks against deep learning models have been proposed in the literature. Both evasion attacks and poisoning attacks attempt to utilize adversarially altered samples to fool the victim model into misclassifying the adversarial sample. While such attacks claim to be stealthy, i.e., imperceptible to human eyes, such claims are rarely evaluated. In this paper, we present the first large-scale study on the stealthiness of adversarial samples used in attacks against deep learning. We implement 20 representative adversarial ML attacks on six popular benchmark datasets. We evaluate the stealthiness of the attack samples using two complementary approaches: (1) a numerical study that adopts 24 metrics for image similarity or quality assessment; and (2) a user study with 3 sets of questionnaires that collected over 20,000 annotations from more than 1,000 responses. Our results show that the majority of the existing attacks introduce non-negligible perturbations that are not stealthy to human eyes. We further analyze the factors that contribute to attack stealthiness. We also examine the correlation between the numerical analysis and the user studies, and demonstrate that some image quality metrics may provide useful guidance for attack design, while there is still a significant gap between the assessed image quality and the visual stealthiness of attacks.
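For a sense of what such similarity metrics measure, here is a small example computing two standard ones, PSNR and SSIM, between a clean image and a perturbed copy. These two metrics are common choices; the abstract's full list of 24 metrics is not reproduced here, and the toy data is synthetic.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Toy example: a "clean" image and an adversarially perturbed copy.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
perturbed = np.clip(clean + rng.normal(scale=0.03, size=clean.shape), 0.0, 1.0)

# Higher PSNR, and SSIM closer to 1, suggest a less perceptible perturbation,
# though the study finds numerical scores and human judgments can diverge.
print(f"PSNR: {peak_signal_noise_ratio(clean, perturbed, data_range=1.0):.1f} dB")
print(f"SSIM: {structural_similarity(clean, perturbed, data_range=1.0):.3f}")
```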
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
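A common way to turn a support mask into a "dynamic class center" is masked average pooling, then using the center's similarity to each query location as a re-weighting signal. The sketch below follows that generic recipe under stated assumptions; it is not the RefT module itself.

```python
import torch

def dynamic_class_center(support_feat: torch.Tensor, support_mask: torch.Tensor) -> torch.Tensor:
    """Masked average pooling: average support features inside the mask to
    obtain a per-class center vector. A hedged sketch of one way mask-based
    re-weighting could work, not the paper's implementation."""
    # support_feat: (C, H, W); support_mask: (H, W) with values in {0, 1}
    masked = support_feat * support_mask.unsqueeze(0)
    return masked.sum(dim=(1, 2)) / (support_mask.sum() + 1e-6)  # (C,)

def reweight_query(query_feat: torch.Tensor, center: torch.Tensor) -> torch.Tensor:
    # Cosine similarity between each query location and the class center,
    # squashed to (0, 1) and used as a spatial weight on the query features.
    sim = torch.einsum("chw,c->hw", query_feat, center)
    sim = sim / (query_feat.norm(dim=0) * center.norm() + 1e-6)
    return query_feat * sim.sigmoid().unsqueeze(0)

support = torch.randn(32, 20, 20)
mask = (torch.rand(20, 20) > 0.7).float()
query = torch.randn(32, 20, 20)
out = reweight_query(query, dynamic_class_center(support, mask))
print(out.shape)  # torch.Size([32, 20, 20])
```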
In this chapter, we review and discuss the transformation of HCI/UX work brought about by AI technology and assess how AI will change how we do that work. We first discuss how AI can be used to enhance the results of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous, transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs automatically (MER) is becoming increasingly crucial in the field of affective computing, providing essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains tiny. To address this ME data hunger, we construct a dynamic spontaneous ME dataset with the largest ME data scale to date, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. We then apply four classical spatiotemporal feature learning models to DFME in MER experiments to objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Face Anti-spoofing (FAS) is essential to secure face recognition systems against various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while giving little consideration to long-distance scenes (i.e., surveillance security checks). To promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, covering 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are the new challenges facing surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality, from three aspects: (1) an Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network; (2) generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation; (3) a Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, a large number of experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
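To illustrate what a quality-invariance contrastive objective can look like, below is a generic InfoNCE-style loss that pulls together embeddings of the same sample at two image qualities and pushes apart different samples. This is a textbook contrastive sketch under assumed names, not the CQIL loss.

```python
import torch
import torch.nn.functional as F

def quality_invariance_loss(emb_hq: torch.Tensor, emb_lq: torch.Tensor,
                            temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss over (high-quality, low-quality) embedding pairs
    of the same faces. Positives sit on the diagonal of the similarity
    matrix; everything else is a negative. A generic sketch only."""
    emb_hq = F.normalize(emb_hq, dim=1)          # (N, D)
    emb_lq = F.normalize(emb_lq, dim=1)          # (N, D)
    logits = emb_hq @ emb_lq.t() / temperature   # (N, N) cosine similarities
    targets = torch.arange(emb_hq.size(0))       # matching pairs on diagonal
    return F.cross_entropy(logits, targets)

# Toy batch: 8 faces embedded at two simulated quality levels.
hq, lq = torch.randn(8, 128), torch.randn(8, 128)
print(quality_invariance_loss(hq, lq).item())
```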
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups: adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models spanning different input representations (e.g., point clouds, voxels, and projected images), network architectures, and training schemes. Through this study, we obtain two insights: 1) the input representation plays a crucial role in robustness; specifically, different representations perform differently under specific corruptions; 2) although state-of-the-art LiDAR semantic segmentation methods achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on the above observations, we design a robust LiDAR segmentation model (RLSeg) that greatly boosts robustness with simple but effective modifications. We hope our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.
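For intuition, the snippet below applies a toy "measurement noise" corruption, Gaussian jitter plus random point dropout, to a synthetic point cloud. The function name and parameter values are illustrative assumptions; the actual SemanticKITTI-C corruptions and their severity levels are defined in the paper.

```python
import numpy as np

def corrupt_point_cloud(points: np.ndarray, jitter_sigma: float = 0.05,
                        drop_ratio: float = 0.1, seed: int = 0) -> np.ndarray:
    """Toy measurement-noise corruption for a LiDAR point cloud:
    Gaussian jitter on XYZ coordinates plus random point dropout.
    Illustrative of the corruption style in such benchmarks only."""
    rng = np.random.default_rng(seed)
    keep = rng.random(points.shape[0]) > drop_ratio   # randomly drop points
    noisy = points[keep] + rng.normal(scale=jitter_sigma, size=(keep.sum(), 3))
    return noisy

cloud = np.random.rand(1000, 3) * 50.0  # fake scan: 1,000 points in a 50 m cube
print(corrupt_point_cloud(cloud).shape)
```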
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions, without shared computation or task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased toward PQ. To handle both issues, we make the following contributions: first, we design a meta-architecture that decouples the part features from the things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem; we term this model Panoptic-PartFormer. Second, we propose a new metric, Part-Whole Quality (PWQ), to better measure the task from both pixel-region and part-whole perspectives; it can also decouple the errors of part segmentation and panoptic segmentation. Third, inspired by Mask2Former and building on our meta-architecture, we propose Panoptic-PartFormer++, which introduces a new part-whole cross-attention scheme, a part-whole interaction realized via masked cross-attention, to further boost part segmentation quality. Finally, extensive ablation studies and analyses demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the original Panoptic-PartFormer, Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost reduction of 70% in GFLOPs and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
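Masked cross-attention, as popularized by Mask2Former, restricts each object query to attend only within its coarse predicted mask region. Here is a simplified single-head sketch of that mechanism under assumed shapes; it is not the Panoptic-PartFormer++ part-whole scheme itself.

```python
import torch
import torch.nn.functional as F

def masked_cross_attention(queries: torch.Tensor, features: torch.Tensor,
                           masks: torch.Tensor) -> torch.Tensor:
    """Cross-attention where each query attends only to pixels inside its
    predicted mask. Single-head, simplified sketch of the Mask2Former-style
    mechanism; real implementations use multi-head attention and handle
    empty masks explicitly."""
    # queries: (Q, D); features: (HW, D); masks: (Q, HW) with 1 = visible
    scores = queries @ features.t() / features.size(1) ** 0.5  # (Q, HW)
    scores = scores.masked_fill(masks < 0.5, float("-inf"))    # hide out-of-mask pixels
    attn = F.softmax(scores, dim=-1)
    return attn @ features  # (Q, D) updated query embeddings

q = torch.randn(5, 64)                       # 5 object/part queries
f = torch.randn(16 * 16, 64)                 # flattened feature map
m = (torch.rand(5, 16 * 16) > 0.5).float()   # coarse binary masks
print(masked_cross_attention(q, f, m).shape)  # torch.Size([5, 64])
```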