Diabetic Retinopathy (DR) is a primary concern because of the vision loss it causes among people with diabetes globally. The severity of DR is mostly assessed manually by ophthalmologists from fundus photography-based retina images. This paper deals with automated understanding of the severity stages of DR. In the literature, researchers have approached this automation with traditional machine learning algorithms and convolutional architectures. However, past works have hardly focused on the essential parts of the retinal image that improve model performance. In this paper, we adopt transformer-based learning models to capture the crucial features of retinal images and thereby understand DR severity better. We work with an ensemble of image transformers, adopting four models, namely ViT (Vision Transformer), BEiT (Bidirectional Encoder representation for image Transformer), CaiT (Class-Attention in Image Transformers), and DeiT (Data-efficient image Transformers), to infer the degree of DR severity from fundus photographs. For experiments, we used the publicly available APTOS-2019 blindness detection dataset, where the performance of the transformer-based models was quite encouraging.
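To make the ensembling idea above concrete, the following is a minimal sketch (not the authors' released code) that loads ViT, BEiT, CaiT, and DeiT backbones through the timm library and soft-votes their softmax outputs over the five APTOS-2019 severity grades; the specific timm model names are assumptions, and any variant of each family could be substituted.

```python
import torch
import timm

NUM_CLASSES = 5  # APTOS-2019 grades DR severity from 0 (no DR) to 4 (proliferative)

# Assumed timm identifiers for the four transformer families named in the abstract.
BACKBONES = [
    "vit_base_patch16_224",
    "beit_base_patch16_224",
    "cait_s24_224",
    "deit_base_patch16_224",
]

def build_ensemble(pretrained: bool = True):
    """Create one classifier per backbone, each with a 5-way severity head."""
    return [
        timm.create_model(name, pretrained=pretrained, num_classes=NUM_CLASSES)
        for name in BACKBONES
    ]

@torch.no_grad()
def ensemble_predict(models, images: torch.Tensor) -> torch.Tensor:
    """Soft voting: average the per-model softmax probabilities, then take the argmax."""
    probs = torch.stack([torch.softmax(m.eval()(images), dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)
```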
Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. These high-performing vision transformers are pre-trained with hundreds of millions of images using a large infrastructure, thereby limiting their adoption. In this work, we produce competitive convolution-free transformers by training on ImageNet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both ImageNet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models.
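A short sketch of the hard-label variant of the distillation objective described above, assuming a student transformer that returns separate logits for its class token and its distillation token (the function name and tensor layout are hypothetical):

```python
import torch
import torch.nn.functional as F

def hard_distillation_loss(cls_logits, dist_logits, teacher_logits, labels):
    """Cross-entropy on the class token against the ground truth, plus
    cross-entropy on the distillation token against the (convnet) teacher's
    hard predictions, weighted equally."""
    ce_label = F.cross_entropy(cls_logits, labels)
    teacher_hard = teacher_logits.argmax(dim=-1)       # teacher's hard decisions
    ce_teacher = F.cross_entropy(dist_logits, teacher_hard)
    return 0.5 * ce_label + 0.5 * ce_teacher
```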
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
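As a concrete illustration of feeding "sequences of image patches" to a transformer, here is a minimal patch-embedding sketch in PyTorch: the image is cut into fixed-size patches, each patch is linearly projected, and a learnable class token plus positional embeddings are added before a standard transformer encoder. Layer sizes follow the common base configuration but are otherwise assumptions.

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into non-overlapping patches and embed each one."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution is equivalent to flattening each patch and applying a linear layer.
        self.proj = nn.Conv2d(in_chans, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches + 1, dim))

    def forward(self, x):                                   # x: (B, 3, 224, 224)
        x = self.proj(x).flatten(2).transpose(1, 2)         # (B, 196, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos_embed  # (B, 197, dim)
```

The resulting token sequence is then processed by a stack of standard transformer encoder blocks, with the class-token output used for classification.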
Our food security is built on the foundation of soil. If the soil is not healthy, farmers cannot feed us with fiber, food, and fuel. Accurately predicting the soil type helps in planning the use of the soil and thereby increases productivity. This study adopts a state-of-the-art Vision Transformer and compares it with different models such as SVM, AlexNet, ResNet, and CNN. Furthermore, the study also focuses on differentiating between different Vision Transformer architectures. For the classification of soil types, the dataset consists of samples of 4 different soil types, namely alluvial, red, black, and clay. The Vision Transformer model achieves 98.13% training and 93.62% testing accuracy, outperforming the other models in both training and testing accuracy. The performance of the Vision Transformer exceeds that of the other models by at least 2%. Hence, the novel Vision Transformer can be used for computer vision tasks, including soil classification.
Transformers have dominated the field of natural language processing and have recently made an impact on the computer vision area. In the field of medical image analysis, transformers have also been successfully applied to full-stack clinical applications, including image synthesis/reconstruction, registration, segmentation, detection, and diagnosis. Our paper aims to promote awareness and application of transformers in the field of medical image analysis. Specifically, we first outline the core concepts of the attention mechanism built into transformers and other basic components. Second, we review various transformer architectures tailored to medical image applications and discuss their limitations. In this review, we investigate the key challenges surrounding the use of transformers in different learning paradigms, improving model efficiency, and coupling them with other techniques. We hope this review can provide readers with a comprehensive picture of the field of medical image analysis.
Transformers have achieved great success in natural language processing. Owing to the strong capability of the self-attention mechanism in transformers, researchers have developed vision transformers for various computer vision tasks such as image recognition, object detection, image segmentation, pose estimation, and 3D reconstruction. This paper presents a comprehensive overview of the literature on the different architecture designs and training tricks (including self-supervised learning) for vision transformers. Our goal is to provide a systematic review that highlights open research opportunities.
Human drivers have distinct driving techniques, knowledge, and sentiments due to their unique driving characteristics. Driver drowsiness has been a serious issue endangering road safety; therefore, it is essential to design an effective drowsiness detection algorithm to prevent road accidents. Miscellaneous research efforts have addressed the problem of detecting anomalous human driver behavior by examining the frontal face of the driver and the dynamics of the car through computer vision techniques. Nevertheless, conventional methods cannot capture complex driver behavior features. With the advent of deep learning architectures, however, a substantial amount of research has been carried out to analyze and recognize driver drowsiness using neural network algorithms. This paper introduces a novel framework based on Vision Transformers and the YOLOv5 architecture for driver drowsiness recognition. A customized YOLOv5 pre-trained architecture is proposed for face extraction with the aim of extracting the Region of Interest (ROI). Owing to the limitations of previous architectures, this paper introduces Vision Transformers for binary image classification, trained and validated on the public dataset UTA-RLDD. The model achieved training and validation accuracies of 96.2% and 97.4%, respectively. For further evaluation, the proposed framework was tested on a custom dataset of 39 participants under various lighting conditions and achieved an accuracy of 95.5%. The conducted experiments reveal the significant potential of our framework for practical applications in smart transportation systems.
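A hedged sketch of the two-stage pipeline described above, assuming the public ultralytics/yolov5 torch.hub entry point for detection and a timm ViT fine-tuned for two classes (alert vs. drowsy); the checkpoint choices, the face-specific fine-tuning, and the preprocessing hook are assumptions.

```python
import torch
import timm

# Stage 1: a YOLOv5 detector (assumed here to be fine-tuned for faces) extracts the ROI.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Stage 2: a ViT classifies the cropped face as alert (0) or drowsy (1).
classifier = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)

@torch.no_grad()
def drowsiness_probability(frame, preprocess):
    """frame: an RGB image array; preprocess: resize/normalize to the ViT input tensor."""
    boxes = detector(frame).xyxy[0]                  # (num_boxes, 6): x1, y1, x2, y2, conf, cls
    if boxes.numel() == 0:
        return None                                  # no face found in this frame
    x1, y1, x2, y2 = boxes[0, :4].int().tolist()     # take the first detected box
    face = frame[y1:y2, x1:x2]
    logits = classifier.eval()(preprocess(face).unsqueeze(0))
    return logits.softmax(dim=-1)[0, 1].item()       # probability of "drowsy"
```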
Law enforcement and city safety are severely impacted by violent incidents in surveillance systems. Although modern (smart) cameras are widely available and affordable, such technological solutions are impotent in most situations. Moreover, personnel monitoring CCTV recordings frequently show a belated reaction, resulting in disasters to people and property. Thus, automated detection of violence for swift action is essential. The proposed solution uses a novel end-to-end deep learning Video Vision Transformer (ViViT) that can proficiently discern fights, hostile motions, and violent events in video sequences. The study proposes utilizing a data augmentation strategy to overcome the downside of weak inductive bias while training Vision Transformers on a smaller training dataset. The evaluated results can then be sent to the local concerned authorities, and the captured video can be analyzed. In comparison to state-of-the-art (SOTA) approaches, the proposed method achieved auspicious performance on several challenging benchmark datasets.
Transformers have been recently adapted for large-scale image classification, achieving high scores that shake up the long supremacy of convolutional neural networks. However, the optimization of image transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two transformer architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth; for instance, we obtain 86.5% top-1 accuracy on ImageNet when training with no external data, thus attaining the current SOTA with fewer FLOPs and parameters. Moreover, our best model establishes a new state of the art on ImageNet with reassessed labels and on ImageNet-V2 / matched frequency, in the setting with no additional training data. We share our code and models.
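The abstract does not name the two architectural changes; one modification commonly associated with this line of work is LayerScale, a learnable per-channel scaling of each residual branch initialized near zero so that very deep stacks start close to the identity. A minimal sketch, with a generic sub-block standing in for the attention or MLP branch:

```python
import torch
import torch.nn as nn

class LayerScaleResidual(nn.Module):
    """Residual connection whose branch output is scaled by a learnable per-channel vector."""
    def __init__(self, dim: int, branch: nn.Module, init_value: float = 1e-4):
        super().__init__()
        self.branch = branch                              # e.g., an attention or MLP sub-block
        self.gamma = nn.Parameter(init_value * torch.ones(dim))

    def forward(self, x):
        return x + self.gamma * self.branch(x)
```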
Transformer, an attention-based encoder-decoder architecture, has revolutionized the field of natural language processing. Inspired by this significant achievement, some pioneering works have recently adapted transformer-like architectures to the computer vision (CV) field and demonstrated their effectiveness on various CV tasks, relying on modeling capability competitive with modern convolutional neural networks. In this paper, we provide a comprehensive review of three hundred different vision transformers for three fundamental CV tasks (classification, detection, and segmentation), proposing a taxonomy that organizes these methods according to their motivation, structure, and usage scenario. Because of the differences in training settings and task orientation, we also evaluate these methods on different configurations for easy and intuitive comparison instead of on various benchmarks. Furthermore, we reveal a series of essential but potentially enabling factors that may allow transformers to stand out from numerous architectures, such as slack high-level semantic embeddings that bridge the gap between visual and sequential transformers. Finally, three promising directions for future research are proposed for further investigation.
As transformers became the standard for language processing and advanced in computer vision, parameter sizes and the amount of training data have grown correspondingly. Many have come to believe that, as a result, transformers are not suitable for small amounts of data. This trend raises concerns, including: the limited availability of data in certain scientific domains, and the exclusion of those with limited research resources from the field. In this paper, we aim to present an approach for small-scale learning by introducing Compact Transformers. We show for the first time that, with the right size and convolutional tokenization, transformers can avoid overfitting on small datasets and outperform state-of-the-art CNNs. Our model is flexible in terms of model size and can have as few as 0.28M parameters while achieving competitive results. Our best model can reach 98% accuracy when trained from scratch on CIFAR-10 with only 3.7M parameters, which is a significant improvement in data efficiency over previous transformer-based models, being more than 10x smaller than other transformers and 15% the size of ResNet50 while achieving similar performance. CCT also outperforms many modern CNN-based approaches and even some NAS-based methods. In addition, we obtain a new SOTA on Flowers-102 with 99.76% top-1 accuracy, and improve upon existing baselines on ImageNet (82.71% accuracy with 29% of the ViT parameters) as well as on NLP tasks. Our simple and compact design for transformers makes them more feasible to learn for those with limited computing resources and/or dealing with small datasets, while extending existing research efforts in data-efficient transformers. Our code and pre-trained models are publicly available at https://github.com/shi-labs/compact-transformers.
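A hedged sketch of the convolutional tokenization mentioned above: a small convolutional stem with pooling produces the token sequence instead of hard patch slicing. The layer sizes are illustrative assumptions, not the released CCT configuration.

```python
import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    """Produce transformer tokens with a small conv stem instead of patch slicing."""
    def __init__(self, in_chans=3, dim=256):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_chans, dim, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x):                                  # x: (B, 3, H, W)
        x = self.stem(x)                                   # (B, dim, H/2, W/2)
        return x.flatten(2).transpose(1, 2)                # (B, num_tokens, dim)
```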
Text recognition is a long-standing research problem for document digitization. Existing approaches are usually built with CNNs for image understanding and RNNs for character-level text generation. In addition, another language model is usually needed to improve the overall accuracy as a post-processing step. In this paper, we propose an end-to-end text recognition approach with pre-trained image transformer and text transformer models, namely TrOCR, which leverages the transformer architecture for both image understanding and wordpiece-level text generation. The TrOCR model is simple but effective, and can be pre-trained with large-scale synthetic data and fine-tuned with human-labeled datasets. Experiments show that the TrOCR model outperforms the current state-of-the-art models on printed, handwritten, and scene text recognition tasks. The TrOCR models and code are publicly available at https://aka.ms/trocr.
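Pre-trained TrOCR checkpoints are distributed through the Hugging Face transformers library; the sketch below assumes the microsoft/trocr-base-handwritten checkpoint and a single cropped text-line image, and shows the typical encoder-decoder inference call.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

image = Image.open("text_line.png").convert("RGB")            # one cropped text line
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)                  # autoregressive decoding
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```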
Facial Expression Recognition (FER) in the wild is an extremely challenging task. Recently, some Vision Transformers (ViT) have been explored for FER, but most of them perform inferiorly compared to Convolutional Neural Networks (CNN). This is mainly because the newly proposed modules are difficult to converge well when trained from scratch due to a lack of inductive bias, and they easily focus on occluded and noisy areas. TransFER, a representative transformer-based method for FER, alleviates this with multi-branch attention dropping but brings excessive computations. In contrast, we present two attentive pooling (AP) modules to pool noisy features directly. The AP modules include Attentive Patch Pooling (APP) and Attentive Token Pooling (ATP). They aim to guide the model to emphasize the most discriminative features while reducing the impact of less relevant features. The proposed APP is employed to select the most informative patches on CNN features, and ATP discards unimportant tokens in ViT. Being simple to implement and without learnable parameters, APP and ATP intuitively reduce the computational cost while boosting performance by pursuing ONLY the most discriminative features. Qualitative results demonstrate the motivation and effectiveness of our attentive poolings. In addition, quantitative results show that our method outperforms other state-of-the-art methods on six in-the-wild datasets.
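The abstract does not spell out the pooling rule; the following is an assumed, parameter-free sketch in the spirit of attentive token pooling, keeping only the patch tokens that receive the highest class-token attention.

```python
import torch

def attentive_token_pooling(tokens, cls_attention, keep_ratio=0.7):
    """Keep the patch tokens with the highest class-token attention (no learnable parameters).

    tokens:        (B, N, D) patch tokens, class token excluded
    cls_attention: (B, N) attention weights from the class token to each patch
    """
    num_keep = max(1, int(tokens.size(1) * keep_ratio))
    idx = cls_attention.topk(num_keep, dim=1).indices            # (B, num_keep)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))      # (B, num_keep, D)
    return torch.gather(tokens, dim=1, index=idx)
```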
Given the considerable growth of chest X-ray images for diagnosing various diseases, along with the collection of extensive datasets, automated diagnosis procedures using deep neural networks have occupied the minds of experts. Most of the available methods in computer vision use a CNN backbone to obtain high accuracy on classification problems. Nevertheless, recent studies show that transformers, which have become the de facto method in NLP, can also outperform many CNN-based models. This paper proposes a multi-label classification deep model based on the Swin Transformer as the backbone to achieve state-of-the-art diagnosis classification. It leverages a head architecture that exploits multi-layer perceptrons (also known as MLPs). We evaluate our model on ChestX-ray14, one of the most widespread and largest X-ray datasets, which consists of more than 100,000 frontal-view images from more than 30,000 patients with 14 common chest diseases. Our model has been tested with several numbers of MLP layers for the head, and each variant achieves a competitive AUC score on all classes. Comprehensive experiments on ChestX-ray14 show that a three-layer head attains a mean AUC score of 0.810, compared to the previous SOTA mean AUC of 0.799. We propose an experimental setup for fair benchmarking of existing methods, which can be used as a basis for future studies. Finally, we follow up on the results by confirming that the proposed method attends to pathologically relevant regions of the chest.
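A minimal sketch of the described setup, assuming a timm Swin backbone, a small MLP head, and a sigmoid/BCE multi-label objective over the 14 ChestX-ray14 labels; the model name and layer widths are assumptions.

```python
import torch.nn as nn
import timm

NUM_LABELS = 14  # ChestX-ray14 pathology labels

class SwinMultiLabel(nn.Module):
    def __init__(self, backbone="swin_base_patch4_window7_224", hidden=512):
        super().__init__()
        # num_classes=0 strips the default head so the backbone returns pooled features.
        self.backbone = timm.create_model(backbone, pretrained=True, num_classes=0)
        self.head = nn.Sequential(                        # a three-layer MLP head
            nn.Linear(self.backbone.num_features, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, NUM_LABELS),
        )

    def forward(self, x):
        return self.head(self.backbone(x))                # raw logits, one per label

criterion = nn.BCEWithLogitsLoss()                        # multi-label training objective
```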
Farmers routinely apply nitrogen (N) fertilizer to increase crop yield. At present, farmers often over-apply N fertilizer at certain locations or time points because they do not have high-resolution crop N status data. N-use efficiency can be low, with the remaining N lost to the environment, resulting in high production costs and environmental pollution. Accurate and timely estimation of crop N status is therefore crucial to improving the economic and environmental sustainability of cropping systems. Conventional tissue-analysis-based methods for estimating plant N status in the laboratory are time-consuming and destructive. Recent advances in remote sensing and machine learning have shown promise in addressing these challenges in a non-destructive way. We propose a novel deep learning framework, a channel-spatial attention-based vision transformer (CSVT), for estimating crop N status from large images collected by UAV over a wheat field. Unlike existing works, the proposed CSVT introduces a Channel Attention Block (CAB) and a Spatial Interaction Block (SIB), which allow capturing the nonlinear characteristics of spatial-wise and channel-wise features from UAV digital aerial imagery for accurate N status prediction in wheat crops. Moreover, since acquiring labeled data is time-consuming and costly, local-to-global self-supervised learning is introduced to pre-train the CSVT with extensive unlabeled data. The proposed CSVT has been compared with state-of-the-art models and tested and validated on both test and independent datasets. The approach achieved high accuracy (0.96) with good generalizability and reproducibility for wheat N status estimation.
Self-supervised learning methods are gaining increasing traction in computer vision due to their recent success in reducing the gap with supervised learning. In natural language processing (NLP), self-supervised learning and transformers are already the methods of choice. Recent literature suggests that transformers are also becoming increasingly popular in computer vision. So far, vision transformers have been shown to work well when pre-trained either with large-scale supervised data or with some kind of co-supervision, e.g., in terms of a teacher network. These supervised pre-trained vision transformers achieve very good results on downstream tasks with minimal changes. In this work, we investigate the merits of self-supervised learning for pre-training image/vision transformers and then using them for downstream classification tasks. We propose Self-supervised vIsion Transformers (SiT) and discuss several self-supervised training mechanisms to obtain a pretext model. The architectural flexibility of SiT allows us to use it as an autoencoder and work with multiple self-supervised tasks seamlessly. We show that SiT can be pre-trained and fine-tuned for downstream classification tasks on small-scale datasets consisting of a few thousand images rather than several millions. The proposed approach is evaluated on standard datasets using public protocols. The results demonstrate the strength of transformers and their suitability for self-supervised learning. We outperform existing self-supervised learning methods by a large margin. We also observe that SiT is good for few-shot learning and show that it learns useful representations by simply training a linear classifier on top of the features learned by SiT. Pre-training, fine-tuning, and evaluation code will be available at: https://github.com/sara-ahmed/sit.
Transformer is a new kind of neural architecture which encodes the input data as powerful features via the attention mechanism. Basically, the visual transformers first divide the input images into several local patches and then calculate both their representations and their relationships. Since natural images are of high complexity with abundant detail and color information, the granularity of the patch dividing is not fine enough for excavating features of objects at different scales and locations. In this paper, we point out that the attention inside these local patches is also essential for building visual transformers with high performance, and we explore a new architecture, namely, Transformer iN Transformer (TNT). Specifically, we regard the local patches (e.g., 16×16) as "visual sentences" and propose to further divide them into smaller patches (e.g., 4×4) as "visual words". The attention of each word will be calculated with other words in the given visual sentence with negligible computational cost. Features of both words and sentences will be aggregated to enhance the representation ability. Experiments on several benchmarks demonstrate the effectiveness of the proposed TNT architecture; e.g., we achieve an 81.5% top-1 accuracy on ImageNet, which is about 1.7% higher than that of the state-of-the-art visual transformer with similar computational cost.
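A simplified, hypothetical sketch of the sentence/word decomposition described above: an inner transformer layer attends among the "visual words" of each "visual sentence", the word features are aggregated back into the sentence embedding, and an outer layer then attends among sentences. Dimensions are illustrative only.

```python
import torch
import torch.nn as nn

class TNTBlockSketch(nn.Module):
    """One inner (word-level) plus outer (sentence-level) attention step."""
    def __init__(self, word_dim=24, sent_dim=384, words_per_sent=16):
        super().__init__()
        self.inner = nn.TransformerEncoderLayer(word_dim, nhead=4, batch_first=True)
        self.outer = nn.TransformerEncoderLayer(sent_dim, nhead=6, batch_first=True)
        self.word_to_sent = nn.Linear(words_per_sent * word_dim, sent_dim)

    def forward(self, words, sentences):
        # words:     (B * num_sentences, words_per_sent, word_dim)
        # sentences: (B, num_sentences, sent_dim)
        words = self.inner(words)                               # attention among words
        B, S, _ = sentences.shape
        merged = self.word_to_sent(words.reshape(B, S, -1))     # aggregate words per sentence
        sentences = self.outer(sentences + merged)              # attention among sentences
        return words, sentences
```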
Early detection of retinal diseases is one of the most important means of preventing partial or permanent blindness in patients. In this study, a novel multi-label classification system is proposed for detecting multiple retinal diseases using fundus images collected from various sources. First, a new multi-label retinal disease dataset, the MuReD dataset, is constructed from a number of publicly available datasets. Next, a sequence of post-processing steps is applied to ensure the quality of the image data and the range of diseases present in the dataset. For the first time in fundus multi-label disease classification, a transformer-based model, optimized through extensive experimentation, is used for image analysis and decision-making. Numerous experiments are performed to optimize the configuration of the proposed system. The results show that the approach performs better than state-of-the-art works on the same task by 7.9% and 8.1% in terms of disease detection and disease classification, respectively. The obtained results further support the potential application of transformer-based architectures in the medical imaging field.
The past year has witnessed rapid development in applying transformer modules to vision problems. While some researchers have demonstrated that transformer-based models enjoy a favorable ability to fit data, there is still growing evidence that these models suffer from over-fitting, especially when training data is limited. This paper provides an empirical study by performing step-by-step operations to gradually transition a transformer-based model into a convolution-based model. The results obtained during the transition provide useful messages for improving visual recognition. Based on these observations, we propose a new architecture named Visformer, abbreviated from "Vision-friendly Transformer". With the same computational complexity, Visformer outperforms both transformer-based and convolution-based models in terms of ImageNet classification accuracy, and the advantage becomes more significant when the model complexity is lower or the training set is smaller. The code is available at https://github.com/danczs/visformer.