Coronavirus Disease 2019 (COVID-19) has spread globally and caused serious damage. Chest X-ray images are widely used for COVID-19 diagnosis, and artificial intelligence methods can increase its efficiency and accuracy. In the Chest XR COVID-19 Detection Challenge of the Ethics and Explainability for Responsible Data Science (EE-RDS) conference 2021, we proposed a method that combines the Swin Transformer and Transformer in Transformer to classify chest X-ray images into three classes: COVID-19, Pneumonia, and Normal (healthy), achieving an accuracy of 0.9475 on the test set.
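As a rough illustration of how two such backbones could be combined (the abstract does not specify the fusion scheme, and the timm model variants below are assumptions), the following sketch averages the class probabilities of a Swin Transformer and a TnT model over the three classes:

```python
import timm
import torch

# Hypothetical backbone variants; the abstract does not name exact configurations.
swin = timm.create_model("swin_base_patch4_window7_224", num_classes=3).eval()
tnt = timm.create_model("tnt_s_patch16_224", num_classes=3).eval()

def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    """Average the softmax probabilities of the two backbones over the
    three classes (COVID-19, Pneumonia, Normal)."""
    with torch.no_grad():
        p_swin = swin(x).softmax(dim=1)
        p_tnt = tnt(x).softmax(dim=1)
    return (p_swin + p_tnt) / 2

probs = ensemble_predict(torch.randn(4, 3, 224, 224))  # (4, 3) class probabilities
```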
Over the past decade, multi-task learning approaches have achieved promising results in solving panoptic driving perception problems, providing both high precision and high efficiency. It has become a popular paradigm when designing networks for real-time autonomous driving systems, where computational resources are limited. This paper proposes an efficient and effective multi-task learning network to simultaneously perform the tasks of traffic object detection, drivable road area segmentation, and lane detection. Our model achieves new state-of-the-art (SOTA) performance in terms of accuracy and speed on the challenging BDD100K dataset. In particular, the inference time is reduced by half compared with the previous SOTA model. Code will be released in the near future.
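A minimal sketch of the layout the abstract describes, assuming a single shared encoder feeding one detection head and two segmentation heads (all layer sizes are illustrative; the real network is far deeper and the segmentation outputs would be upsampled back to input resolution):

```python
import torch
import torch.nn as nn

class MultiTaskDrivingNet(nn.Module):
    """Shared encoder with three task heads, sketched at toy scale."""
    def __init__(self, num_anchors: int = 3, num_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Detection head: per-cell box (4), objectness (1), and class scores.
        self.det_head = nn.Conv2d(64, num_anchors * (5 + num_classes), 1)
        # Binary masks for drivable area and lane lines.
        self.drivable_head = nn.Conv2d(64, 1, 1)
        self.lane_head = nn.Conv2d(64, 1, 1)

    def forward(self, x):
        feats = self.encoder(x)  # one forward pass shared by all tasks
        return self.det_head(feats), self.drivable_head(feats), self.lane_head(feats)

det, drivable, lane = MultiTaskDrivingNet()(torch.randn(1, 3, 256, 256))
```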
Data-driven machine learning methods have the potential to dramatically accelerate the rate of materials design over conventional human-guided approaches. These methods would help identify or, in the case of generative models, even create novel crystal structures of materials with a set of specified functional properties, which could then be synthesized or isolated in the laboratory. For crystal structure generation, a key bottleneck lies in developing suitable atomic structure fingerprints or representations for the machine learning model, analogous to the graph-based or SMILES representations used in molecule generation. However, finding data-efficient representations that are invariant to translation, rotation, and permutation while remaining invertible back to Cartesian atomic coordinates is an ongoing challenge. Here, we propose an alternative approach: we adopt existing non-invertible representations with the desired invariances and develop an algorithm to reconstruct the atomic coordinates through gradient-based optimization using automatic differentiation. This can then be coupled with a generative machine learning model that generates new materials within the representation space, rather than in the data-inefficient Cartesian space. In this work, we implement this end-to-end structure generation approach using atom-centered symmetry functions as the representation and a conditional variational autoencoder as the generative model. As a proof of concept, we successfully generate novel and valid atomic structures of sub-nanometer Pt nanoparticles. Furthermore, the method can be readily extended to any suitable structure representation, providing a powerful, generalizable framework for representation-based structure generation.
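A toy sketch of the reconstruction idea, with sorted pairwise distances standing in for the paper's atom-centered symmetry functions (the descriptor choice and optimizer settings here are illustrative assumptions):

```python
import torch

def descriptor(coords: torch.Tensor) -> torch.Tensor:
    """Toy invariant representation: the sorted list of pairwise distances.
    Like atom-centered symmetry functions, it is invariant to translation,
    rotation, and permutation, but has no closed-form inverse."""
    return torch.sort(torch.pdist(coords)).values

# Target representation computed from a hypothetical 13-atom cluster.
target = descriptor(torch.randn(13, 3)).detach()

# Reconstruct coordinates by differentiating through the descriptor.
coords = torch.randn(13, 3, requires_grad=True)  # random initial guess
opt = torch.optim.Adam([coords], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(descriptor(coords), target)
    loss.backward()
    opt.step()
# `coords` now matches `target` up to rigid motion and atom relabelling.
```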
This paper describes our method for the six basic expression classification track of the Affective Behavior Analysis in-the-wild (ABAW) Competition 2022: learning expressions from artificially generated data and generalizing to real data. Because synthetic data are ambiguous while facial action units (AUs) are objective, we resort to AU information to improve performance, and make the following contributions. First, to adapt the model to synthetic scenarios, we use knowledge from large-scale face recognition pre-training. Second, we propose a conceptually new framework, termed the AU-Supervised Convolutional Vision Transformer (AU-CVT), which clearly improves FER performance by jointly training with an auxiliary dataset annotated with AU or pseudo-AU labels. Our AU-CVT achieves an F1 score of 0.6863 and an accuracy of 0.7433 on the validation set. The source code of our work is publicly available online: https://github.com/msy1412/abaw4
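The joint training the abstract describes suggests a two-term objective: cross-entropy for the six expressions plus a multi-label term for the (pseudo) AU annotations. A minimal sketch, assuming a simple weighted sum (the weighting and AU count below are not values from the paper):

```python
import torch
import torch.nn.functional as F

def joint_fer_au_loss(expr_logits, expr_labels, au_logits, au_labels, au_weight=1.0):
    """Expression cross-entropy plus multi-label BCE over AU targets."""
    expr_loss = F.cross_entropy(expr_logits, expr_labels)
    au_loss = F.binary_cross_entropy_with_logits(au_logits, au_labels)
    return expr_loss + au_weight * au_loss

loss = joint_fer_au_loss(
    torch.randn(8, 6), torch.randint(0, 6, (8,)),    # six-expression head
    torch.randn(8, 12), torch.rand(8, 12).round(),   # 12 hypothetical AUs
)
```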
Image restoration under severe weather is a challenging task. Most past works focus on removing rain and haze from images. However, snow is also an extremely common atmospheric phenomenon that severely degrades the performance of high-level computer vision tasks, such as object detection and semantic segmentation. Recently, several snow removal methods have been proposed, most of which directly take the snowy image as the optimization object. However, the distribution of snow locations and shapes is complex, so failing to detect snowflakes / snow streaks effectively hampers snow removal and limits model performance. To address these issues, we propose a Snow Mask Guided Adaptive Residual Network (SMGARN). Specifically, SMGARN consists of three parts: Mask-Net, the Guidance-Fusion Network (GF-Net), and Reconstruct-Net. First, we build a Mask-Net with Self-pixel Attention (SA) and Cross-pixel Attention (CA) to capture snowflake features and accurately localize the snow, thereby predicting an accurate snow mask. Second, the predicted snow mask is fed into the specially designed GF-Net to adaptively guide the model in removing the snow. Finally, an efficient Reconstruct-Net is used to remove the veiling effect and correct the image, reconstructing the final snow-free image. Extensive experiments show that our SMGARN numerically outperforms all existing snow removal methods, and the reconstructed images are clearer in visual contrast. All code will be available.
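A heavily reduced sketch of the three-stage layout described above: a mask predictor, a mask-guided removal stage, and a reconstruction stage. The real Mask-Net / GF-Net / Reconstruct-Net are far deeper and use self-/cross-pixel attention; plain convolutions stand in here purely for shape-level illustration:

```python
import torch
import torch.nn as nn

class SnowRemovalSketch(nn.Module):
    """Mask prediction -> mask-guided removal -> reconstruction (toy scale)."""
    def __init__(self):
        super().__init__()
        self.mask_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
        self.gf_net = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(16, 3, 3, padding=1))
        self.reconstruct_net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                             nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, snowy):
        mask = self.mask_net(snowy)                          # where the snow is
        desnowed = self.gf_net(torch.cat([snowy, mask], 1))  # mask-guided removal
        return self.reconstruct_net(desnowed), mask          # veil removal + mask

clean, mask = SnowRemovalSketch()(torch.randn(1, 3, 64, 64))
```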
Semi-supervised learning (SSL) based on pseudo-labeling has achieved great success in exploiting raw data. However, its training procedure suffers from confirmation bias due to the noise contained in self-generated artificial labels. Moreover, the model's judgment becomes noisier in real-world applications with extensive out-of-distribution data. To address this issue, we propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL), a drop-in helper that improves pseudo-label quality and enhances the model's robustness in real-world settings. Rather than treating real-world data as one union set, our method handles reliable in-distribution data separately, blending them into downstream tasks, and applies image-wise contrastive learning to out-of-distribution data for better generalization. Furthermore, by applying target re-weighting, we successfully emphasize clean-label learning while simultaneously reducing noisy-label learning. Despite its simplicity, our proposed CCSSL achieves significant performance improvements over state-of-the-art SSL methods on the standard datasets CIFAR100 and STL10. On the real-world dataset Semi-iNat 2021, we improve FixMatch by 9.80% and CoMatch by 3.18%. Code is available at https://github.com/TencentYoutuResearch/Classification-SemiCLS.
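The target re-weighting idea can be sketched as a FixMatch-style pseudo-label loss in which each sample is weighted by prediction confidence, so clean labels dominate training. The exact weighting form and CCSSL's contrastive branch are not reproduced here; the confidence-product weighting below is an assumption:

```python
import torch
import torch.nn.functional as F

def reweighted_pseudo_label_loss(logits_weak, logits_strong, threshold=0.95):
    """Pseudo-labels from a weakly augmented view supervise the strongly
    augmented view; per-sample weights down-weight low-confidence (noisy)
    pseudo-labels."""
    probs = F.softmax(logits_weak.detach(), dim=1)
    conf, pseudo = probs.max(dim=1)
    weights = conf * (conf >= threshold).float()   # assumed re-weighting form
    per_sample = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (weights * per_sample).mean()

loss = reweighted_pseudo_label_loss(torch.randn(16, 100), torch.randn(16, 100))
```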
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
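One plausible reading of global photometric alignment, sketched below, is renormalizing each channel of a target-domain image to source-domain statistics. The paper's module is learned; this closed-form version and its reference statistics are only an illustration:

```python
import torch

def photometric_align(image: torch.Tensor, ref_mean: torch.Tensor,
                      ref_std: torch.Tensor) -> torch.Tensor:
    """Shift each channel's global mean/std to match reference (source
    domain) statistics. image: (3, H, W); ref_mean, ref_std: (3,)."""
    mean = image.mean(dim=(1, 2), keepdim=True)
    std = image.std(dim=(1, 2), keepdim=True)
    normed = (image - mean) / (std + 1e-6)
    return normed * ref_std.view(-1, 1, 1) + ref_mean.view(-1, 1, 1)

aligned = photometric_align(torch.rand(3, 512, 1024),
                            ref_mean=torch.tensor([0.45, 0.45, 0.40]),
                            ref_std=torch.tensor([0.22, 0.22, 0.25]))
```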
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For temporal artifacts, we improve the self-attention based TimeSFormer to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
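A sketch of the kind of aggregation the metric's name implies: per-pixel artifact maps for the six PEA types are pooled with saliency as the spatial weight, then fused with per-type weights. The paper's actual fusion may differ; the linear form and uniform weights below are assumptions:

```python
import torch

def saliency_weighted_artifact_score(artifact_maps: torch.Tensor,
                                     saliency: torch.Tensor,
                                     weights: torch.Tensor) -> torch.Tensor:
    """artifact_maps: (6, H, W) in [0, 1]; saliency: (H, W); weights: (6,)."""
    sal = saliency / (saliency.sum() + 1e-8)          # normalize spatial weights
    per_type = (artifact_maps * sal).sum(dim=(1, 2))  # saliency-weighted pooling
    return (weights * per_type).sum()                 # fuse the six PEA scores

score = saliency_weighted_artifact_score(torch.rand(6, 64, 64),
                                         torch.rand(64, 64),
                                         torch.full((6,), 1 / 6))
```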
Image virtual try-on aims at replacing the clothes on a person image with a garment image (in-shop clothes), and has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps to an unreasonable body part. Based on this in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which generates semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts sharpened semantic parsing on the try-on person. Aided by semantics guidance and a pose prior, textures of various complexities are selectively blended with human parts in a copy-and-paste manner. Then, a Generative Module (GM) is utilized to synthesize the final try-on image and learn de-occlusion jointly. In comparison to state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
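The copy-and-paste blending reduces to a masked convex combination: paste a texture patch only where a chosen semantic part mask is active, simulating an occlusion for training. A minimal sketch (the real module selects parts and textures adaptively; the random mask here is a stand-in for a parser output):

```python
import torch

def semantic_occlusion_mixup(tryon: torch.Tensor, texture: torch.Tensor,
                             part_mask: torch.Tensor) -> torch.Tensor:
    """tryon, texture: (3, H, W); part_mask: (1, H, W) binary mask of the
    semantic part to occlude. Returns the occluded training image."""
    return part_mask * texture + (1 - part_mask) * tryon

occluded = semantic_occlusion_mixup(torch.rand(3, 256, 192),
                                    torch.rand(3, 256, 192),
                                    (torch.rand(1, 256, 192) > 0.8).float())
```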
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased towards PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples the part features and the things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term this model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure the task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++, designing a new part-whole interaction scheme that uses masked cross attention to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the original Panoptic-PartFormer, Panoptic-PartFormer++ achieves a 2% PartPQ and 3% PWQ improvement on the Cityscapes PPS dataset and a 5% PartPQ improvement on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
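For intuition, masked cross attention in the Mask2Former style restricts each query to the pixels inside its own current mask prediction, so part queries attend only to their regions. A sketch (shapes, the 0.5 threshold, and the empty-mask fallback are illustrative, not the paper's exact scheme):

```python
import torch

def masked_cross_attention(queries: torch.Tensor, features: torch.Tensor,
                           mask_pred: torch.Tensor) -> torch.Tensor:
    """queries: (Q, C); features: (HW, C); mask_pred: (Q, HW) in [0, 1].
    Each query attends only inside its predicted foreground."""
    scores = queries @ features.t() / features.shape[1] ** 0.5
    fg = mask_pred >= 0.5
    empty = ~fg.any(dim=1, keepdim=True)   # fall back to full attention
    scores = scores.masked_fill(~fg & ~empty, float("-inf"))
    return scores.softmax(dim=-1) @ features

out = masked_cross_attention(torch.randn(10, 64), torch.randn(400, 64),
                             torch.rand(10, 400))
```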