With the increasing popularity of convolutional neural networks (CNNs), recent works on facial age estimation employ these networks as the backbone. However, state-of-the-art CNN-based methods treat each facial region equally, thus entirely ignoring the importance of certain facial patches that may contain rich age-specific information. In this paper, we propose a face-based age estimation framework called Attention-based Dynamic Patch Fusion (ADPF). In ADPF, two separate CNNs are implemented, namely the AttentionNet and the FusionNet. The AttentionNet dynamically locates and ranks age-specific patches by employing a novel Ranking-guided Multi-Head Hybrid Attention (RMHHA) mechanism. The FusionNet uses the discovered patches along with the facial image to predict the age of the subject. Since the proposed RMHHA mechanism ranks the discovered patches according to their importance, the length of the learning path of each patch in the FusionNet is proportional to the amount of information it carries (the longer, the more important). ADPF also introduces a novel diversity loss to guide the training of the AttentionNet and reduce the overlap among patches, so that diverse and important patches are discovered. Through extensive experiments, we show that our proposed framework outperforms state-of-the-art methods on several age estimation benchmark datasets.
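The diversity loss is only described qualitatively above; a minimal sketch of one plausible formulation — penalizing pairwise cosine overlap between the attention maps — is given below, with the tensor shapes and the overlap measure being assumptions rather than the paper's exact definition:

```python
import torch

def diversity_loss(attn_maps: torch.Tensor) -> torch.Tensor:
    """Penalize spatial overlap between per-image attention maps.

    attn_maps: (B, K, H, W) -- K >= 2 attention maps per image (shape assumed).
    Minimizing the mean pairwise cosine similarity pushes the K maps
    to attend to different facial regions.
    """
    b, k, h, w = attn_maps.shape
    flat = torch.nn.functional.normalize(attn_maps.reshape(b, k, h * w), dim=-1)
    sim = torch.bmm(flat, flat.transpose(1, 2))          # (B, K, K) cosine overlaps
    off_diag = sim - torch.eye(k, device=sim.device)     # drop self-similarity
    return off_diag.clamp(min=0).sum(dim=(1, 2)).mean() / (k * (k - 1))
```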
Facial attribute (e.g., age and attractiveness) estimation performance has been greatly improved by using convolutional neural networks. However, existing methods have an inconsistency between the training objectives and the evaluation metric, so they may be suboptimal. In addition, these methods always adopt image classification or face recognition models with a large number of parameters, which carry expensive computation costs and storage overhead. In this paper, we first analyze the essential relationship between two state-of-the-art methods (Ranking-CNN and DLDL) and show that the ranking method is in fact learning label distributions implicitly. This result thus unifies the two existing state-of-the-art methods into the DLDL framework. Second, to alleviate the inconsistency and reduce resource consumption, we design a lightweight network architecture and propose a unified framework that can jointly learn the facial attribute distribution and regress the attribute value. The effectiveness of our approach is demonstrated on both facial age and attractiveness estimation tasks. Our method achieves new state-of-the-art results on facial age/attractiveness estimation using a single model with 36× fewer parameters and 3× faster inference speed. Moreover, our method can achieve results comparable to the state-of-the-art even when the number of parameters is further reduced to 0.9M (3.8MB disk storage).
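To make the joint distribution-plus-regression objective concrete, here is a hedged sketch of label distribution learning for age, assuming a Gaussian target distribution and an L1 term on the expectation (the age range, sigma, and weighting are illustrative, not the paper's values):

```python
import torch
import torch.nn.functional as F

ages = torch.arange(0, 101, dtype=torch.float32)  # assumed discrete label set {0..100}

def gaussian_label_distribution(y: torch.Tensor, sigma: float = 2.0) -> torch.Tensor:
    """Turn scalar ages y (B,) into normalized Gaussian distributions over `ages`."""
    d = torch.exp(-(ages[None, :] - y[:, None]) ** 2 / (2 * sigma ** 2))
    return d / d.sum(dim=1, keepdim=True)

def joint_loss(logits: torch.Tensor, y: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """KL divergence to the target distribution + L1 on the predicted expectation."""
    log_p = F.log_softmax(logits, dim=1)                      # predicted distribution (log)
    kl = F.kl_div(log_p, gaussian_label_distribution(y), reduction="batchmean")
    expectation = (log_p.exp() * ages[None, :]).sum(dim=1)    # E[age] under prediction
    return kl + lam * F.l1_loss(expectation, y)
```

Using the expectation of the learned distribution as the regressed value is what aligns the training objective with the evaluation metric.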
Facial Expression Recognition (FER) in the wild is an extremely challenging task. Recently, some Vision Transformers (ViT) have been explored for FER, but most of them perform inferiorly compared to Convolutional Neural Networks (CNN). This is mainly because the newly proposed modules are difficult to converge well from scratch due to the lack of inductive bias, and they easily focus on occluded and noisy areas. TransFER, a representative transformer-based method for FER, alleviates this with multi-branch attention dropping but brings excessive computations. On the contrary, we present two attentive pooling (AP) modules to pool noisy features directly. The AP modules include Attentive Patch Pooling (APP) and Attentive Token Pooling (ATP). They aim to guide the model to emphasize the most discriminative features while reducing the impact of less relevant features. The proposed APP is employed to select the most informative patches on CNN features, and ATP discards unimportant tokens in ViT. Being simple to implement and without learnable parameters, APP and ATP intuitively reduce the computational cost while boosting the performance by pursuing only the most discriminative features. Qualitative results demonstrate the motivation and effectiveness of our attentive poolings. Besides, quantitative results on six in-the-wild datasets show that our method outperforms other state-of-the-art methods.
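A rough, parameter-free illustration of an attentive token pooling step: the sketch below scores patch tokens by their similarity to the [CLS] token and keeps the top-k, which is one plausible scoring rule, not necessarily the paper's exact one:

```python
import torch

def attentive_token_pooling(tokens: torch.Tensor, keep: int) -> torch.Tensor:
    """Keep the `keep` most salient patch tokens (keep <= N), scored parameter-free.

    tokens: (B, 1 + N, D) with a leading [CLS] token.
    Salience here is the cosine similarity of each patch token to [CLS].
    """
    cls, patches = tokens[:, :1], tokens[:, 1:]
    scores = torch.nn.functional.cosine_similarity(patches, cls, dim=-1)  # (B, N)
    idx = scores.topk(keep, dim=1).indices                                # (B, keep)
    idx = idx.unsqueeze(-1).expand(-1, -1, patches.size(-1))
    return torch.cat([cls, patches.gather(1, idx)], dim=1)               # (B, 1+keep, D)
```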
Different people age in different ways. Learning a personalized age estimator for each person is a promising direction for age estimation, since it better models the personalization of aging processes. However, most existing personalized methods suffer from the lack of large-scale datasets that meet their demanding requirements: identity labels and enough samples per person to form long-term aging patterns. In this paper, we aim to learn personalized age estimators without these requirements and propose a meta-learning method called MetaAge for age estimation. Unlike most existing personalized methods, which learn the parameters of a personalized estimator for each person in the training set, our method learns the mapping from identity information to age estimator parameters. Specifically, we introduce a personalized estimator meta-learner, which takes identity features as input and outputs the parameters of a customized estimator. In this way, our method learns the meta knowledge without the above requirements and seamlessly transfers the learned meta knowledge to the test set, which enables us to leverage existing large-scale age datasets without any additional annotations. Extensive experimental results on three benchmark datasets, including the MORPH II, ChaLearn LAP 2015, and ChaLearn LAP 2016 databases, demonstrate that our MetaAge significantly boosts the performance of existing personalized methods and outperforms state-of-the-art approaches.
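The identity-to-parameters mapping can be pictured as a hypernetwork. Below is a minimal sketch under assumed dimensions and a linear per-person age head; the class name and parameterization are hypothetical, not the paper's exact design:

```python
import torch
import torch.nn as nn

class PersonalizedEstimatorMetaLearner(nn.Module):
    """Map an identity embedding to the weights of a per-person age head."""

    def __init__(self, id_dim: int = 512, feat_dim: int = 512, n_ages: int = 101):
        super().__init__()
        self.feat_dim, self.n_ages = feat_dim, n_ages
        # Generates weights (+ biases) of a linear classifier over discrete ages.
        self.hyper = nn.Linear(id_dim, feat_dim * n_ages + n_ages)

    def forward(self, id_feat: torch.Tensor, face_feat: torch.Tensor) -> torch.Tensor:
        params = self.hyper(id_feat)                              # (B, D*A + A)
        w = params[:, : self.feat_dim * self.n_ages]
        w = w.view(-1, self.n_ages, self.feat_dim)                # (B, A, D)
        b = params[:, self.feat_dim * self.n_ages :]              # (B, A)
        # Per-sample linear head applied to the face feature: logits over ages.
        return torch.einsum("bad,bd->ba", w, face_feat) + b
```

Because the head is generated from identity features rather than stored per person, unseen test identities get customized estimators for free.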
Human emotion recognition is an active research area in artificial intelligence and has made substantial progress over the past few years. Many recent works mainly focus on facial regions to infer human emotions, while the surrounding context information is not effectively utilized. In this paper, we propose a new deep network that effectively recognizes human emotions using a novel global-local attention mechanism. Our network is designed to extract features from both the facial and context regions independently, and then learn them together using an attention module. In this way, both the facial and contextual information is used to infer human emotions, thereby enhancing the discriminative power of the classifier. Intensive experiments show that our method surpasses recent state-of-the-art methods on recent emotion datasets by a fair margin. Qualitatively, our global-local attention module can extract more meaningful attention maps than previous methods. The source code and trained models of our network are available at https://github.com/minhnhatvt/glamor-net
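A minimal sketch of what such global-local fusion could look like, assuming a single face vector attends over spatial context features (the dimensions and the scoring layer are illustrative assumptions, not the released model):

```python
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    """Fuse a face-crop feature with surrounding context features via attention."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)   # scores each (face, context) pair

    def forward(self, face: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # face: (B, D); context: (B, N, D) -- N spatial context locations.
        n = context.size(1)
        pairs = torch.cat([face.unsqueeze(1).expand(-1, n, -1), context], dim=-1)
        attn = torch.softmax(self.score(pairs), dim=1)   # (B, N, 1) location weights
        attended = (attn * context).sum(dim=1)           # (B, D) attended context
        return torch.cat([face, attended], dim=-1)       # (B, 2D) fused feature
```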
In recent years, a vast amount of visual content has been generated and shared across many fields, such as social media platforms, medical imaging, and robotics. This abundance of content creation and sharing has introduced new challenges, particularly in searching databases for similar content — Content-Based Image Retrieval (CBIR) — a long-established research area in which improved efficiency and accuracy are needed for real-time retrieval. Artificial intelligence has made progress in CBIR and has greatly facilitated the instance search process. In this survey, we review recent instance retrieval works developed on the basis of deep learning algorithms and techniques, organizing the survey by deep network architecture types, deep features, feature embedding methods, and network fine-tuning strategies. Our survey considers a wide variety of recent methods, where we identify milestone works, reveal connections among the various methods, present the commonly used benchmarks, evaluation results, and common challenges, and propose promising future directions.
Crowd counting research has made significant progress in recent years. However, given the challenging scale variations and complex scenes in crowds, neither traditional convolutional networks nor recent Transformer architectures with fixed-size attention can handle the task well. To address this problem, this paper proposes a scene-adaptive attention network, termed SAANet. First, we design a deformable-attention in-built Transformer backbone, which learns adaptive feature representations with deformable sampling locations and dynamic attention weights. Then, we propose a multi-level feature fusion and a count-attentive feature enhancement module to strengthen the feature representations under the global image context. The learned representations can attend to the foreground and adapt to different scales of crowds. We conduct extensive experiments on four challenging crowd counting benchmarks, showing that our method achieves state-of-the-art performance. In particular, our method currently ranks first on the public leaderboard of the NWPU-Crowd benchmark. We hope our method can serve as a strong baseline to support future research in crowd counting. The source code will be released to the community.
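To gesture at what deformable sampling means here, the simplified sketch below samples features at learned offsets around each location; the offset scaling and mean aggregation are assumptions, and the actual deformable attention in the backbone is considerably more involved:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableSampling(nn.Module):
    """Sample features at learned offsets around each reference point."""

    def __init__(self, dim: int, n_points: int = 4):
        super().__init__()
        self.offsets = nn.Conv2d(dim, 2 * n_points, 1)  # (dx, dy) per sampling point
        self.n_points = n_points

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Base grid of reference points in [-1, 1] (grid_sample convention: x first).
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                                indexing="ij")
        base = torch.stack([xs, ys], dim=-1).to(x)                        # (H, W, 2)
        off = self.offsets(x).view(b, self.n_points, 2, h, w).permute(0, 1, 3, 4, 2)
        grid = base[None, None] + off.tanh() * 0.1                        # small learned shifts
        sampled = torch.stack(
            [F.grid_sample(x, grid[:, p], align_corners=True) for p in range(self.n_points)],
            dim=1)                                                        # (B, P, C, H, W)
        return sampled.mean(dim=1)                                        # aggregate points
```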
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages that encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from the videos, and illustrate that MSTAT can achieve state-of-the-art accuracies on various standard benchmarks.
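Temporal patch shuffling admits a very small sketch. The rule below — permuting the frame order at a random half of the patch positions — is one plausible variant under assumed tensor shapes, not necessarily the paper's exact augmentation:

```python
import torch

def temporal_patch_shuffle(x: torch.Tensor) -> torch.Tensor:
    """Perturb temporal alignment of patch tokens within a clip.

    x: (B, T, N, D) -- T frames, N patch tokens per frame (shapes assumed).
    For a random subset of patch positions, frames are permuted so the model
    cannot over-rely on exact temporal order at every spatial location.
    """
    b, t, n, d = x.shape
    perm = torch.randperm(t, device=x.device)
    mask = torch.rand(n, device=x.device) < 0.5       # positions to shuffle
    out = x.clone()
    out[:, :, mask] = x[:, perm][:, :, mask]
    return out
```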
Due to object detection's close relationship with video analysis and image understanding, it has attracted much research attention in recent years. Traditional object detection methods are built on handcrafted features and shallow trainable architectures. Their performance easily stagnates, even when complex ensembles are constructed that combine multiple low-level image features with high-level context from object detectors and scene classifiers. With the rapid development of deep learning, more powerful tools, which are able to learn semantic, high-level, deeper features, have been introduced to address the problems of traditional architectures. These models differ in network architecture, training strategy, optimization function, etc. In this paper, we provide a review of deep learning based object detection frameworks. Our review begins with a brief introduction to the history of deep learning and its representative tool, the Convolutional Neural Network (CNN). We then focus on typical generic object detection architectures, along with modifications and useful tricks that improve detection performance further. As distinct specific detection tasks exhibit different characteristics, we also briefly survey several specific tasks, including salient object detection, face detection, and pedestrian detection. Experimental analyses are provided to compare various methods and draw some meaningful conclusions. Finally, several promising directions and tasks are suggested to serve as guidelines for future work in both object detection and relevant neural-network-based learning systems.
Prior work has shown that the order in which different components of the face are learned by a sequential learner can play an important role in the performance of facial expression recognition systems. We propose FaceTopoNet, an end-to-end deep model for facial expression recognition, which is capable of learning an effective tree topology of the face. Our model then traverses the learned tree to generate a sequence, which is used to form an embedding to feed a sequential learner. The devised model employs one stream for learning structure and one stream for learning texture. The structure stream focuses on the positions of the facial landmarks, while the main focus of the texture stream is on the patches around the landmarks. We then fuse the outputs of the two streams by utilizing an effective attention-based fusion strategy. We perform extensive experiments on four large-scale in-the-wild facial expression datasets — namely AffectNet, FER2013, ExpW, and RAF-DB — and one lab-controlled dataset (CK+) to evaluate our approach. FaceTopoNet achieves state-of-the-art performance on three of the five datasets and obtains competitive results on the other two. We also perform rigorous ablation and sensitivity experiments to evaluate the impact of different components and parameters of the model. Finally, we perform robustness experiments and demonstrate that FaceTopoNet is more robust against occlusions than other leading methods in the area.
Most existing RGB-D salient object detection methods utilize convolution operations and construct complex interweaved fusion structures to achieve cross-modal information integration. The inherent local connectivity of the convolution operation constrains the performance of convolution-based methods to a ceiling. In this work, we rethink this task from the perspective of global information alignment and transformation. Specifically, the proposed method (TransCMD) cascades several cross-modal integration units to construct a top-down Transformer-based information propagation path (TIPP). TransCMD treats multi-scale and multi-modal feature integration as a sequence-to-sequence context propagation and update process built upon the Transformer. In addition, considering the quadratic complexity w.r.t. the number of input tokens, we design a patch-wise token re-embedding strategy (PTRE) with acceptable computational cost. Experimental results on seven RGB-D SOD benchmark datasets demonstrate that a simple two-stream encoder-decoder framework can surpass state-of-the-art CNN-based methods when equipped with the TIPP.
Recent years have witnessed the breakthrough of face recognition with deep convolutional neural networks. Dozens of papers in the field of face recognition (FR) are published every year. Some of them have been applied in industry and play an important role in daily life, such as device unlocking, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and also carefully designed a set of experiments to study the effect of backbone size and data distribution. This survey is the material of the tutorial The Practical Face Recognition Technology in the Industrial World at FG2023.
In this paper, we build upon the weakly supervised generation mechanism of intermediate attention maps in any convolutional neural network, and disclose the effectiveness of attention modules more straightforwardly to fully exploit their potential. Given an existing neural network equipped with arbitrary attention modules, we introduce a meta critic network to evaluate the quality of the attention maps in the main network. Due to the discreteness of our designed reward, the proposed learning method is arranged in a reinforcement learning setting, where the attention actors and recurrent critics are alternately optimized to provide instant critique and revision for the temporary attention representation, hence coined Deep REinforced Attention Learning (DREAL). It can be applied universally to network architectures with different types of attention modules, and promotes their expressive ability by maximizing the relative gain in final recognition performance contributed by each individual attention module, as demonstrated by extensive experiments on both category and instance recognition benchmarks.
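One way to realize a discrete, relative-gain reward for an attention module is sketched below; the ±1 flip-based definition is an assumption consistent with the abstract, not the paper's exact reward:

```python
import torch

@torch.no_grad()
def attention_reward(logits_with: torch.Tensor,
                     logits_without: torch.Tensor,
                     labels: torch.Tensor) -> torch.Tensor:
    """Discrete per-sample reward for an attention module.

    +1 when the module flips the prediction from wrong to correct,
    -1 when it flips it from correct to wrong, 0 otherwise.
    """
    correct_with = logits_with.argmax(dim=1).eq(labels)
    correct_without = logits_without.argmax(dim=1).eq(labels)
    return correct_with.float() - correct_without.float()
```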
Face Restoration (FR) aims to restore High-Quality (HQ) faces from Low-Quality (LQ) input images, which is a domain-specific image restoration problem in the low-level computer vision area. The early face restoration methods mainly used statistical priors and degradation models, which have difficulty meeting the requirements of real-world applications in practice. In recent years, face restoration has witnessed great progress after stepping into the deep learning era. However, few works study deep learning-based face restoration methods systematically. Thus, this paper comprehensively surveys recent advances in deep learning techniques for face restoration. Specifically, we first summarize different problem formulations and analyze the characteristics of the face image. Second, we discuss the challenges of face restoration. Concerning these challenges, we present a comprehensive review of existing FR methods, including prior-based methods and deep learning-based methods. Then, we explore the techniques developed for FR, covering network architectures, loss functions, and benchmark datasets. We also conduct a systematic benchmark evaluation of representative methods. Finally, we discuss future directions, including network designs, metrics, benchmark datasets, applications, etc. We also provide an open-source repository for all the discussed methods, which is available at https://github.com/TaoWangzj/Awesome-Face-Restoration.
Facial attractiveness prediction (FAP) aims to assess facial attractiveness automatically based on human aesthetic perception. Previous methods using deep convolutional neural networks have boosted the performance, but their giant models make them inflexible. Besides, most of them fail to take full advantage of the dataset. In this paper, we present a novel end-to-end FAP approach that integrates dual label distribution and a lightweight design. To make the best use of the dataset, the manual ratings, attractiveness score, and standard deviation are aggregated explicitly to construct a dual label distribution, comprising the attractiveness distribution and the rating distribution. These distributions, as well as the attractiveness score, are optimized under a joint learning framework based on the label distribution learning (LDL) paradigm. As for the lightweight design, the data processing is simplified to the minimum, and MobileNetV2 is selected as our backbone. Extensive experiments are conducted on two benchmark datasets, where our approach achieves promising results and succeeds in striking a balance between performance and efficiency. Ablation studies demonstrate that our delicately designed learning modules are indispensable and correlated. Additionally, the visualization indicates that our approach can perceive facial attractiveness and capture attractive facial regions to facilitate semantic predictions.
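A sketch of how such a dual label distribution could be constructed from raw rater scores; the bin count, grid resolution, and Gaussian form are all assumptions for illustration:

```python
import torch

def dual_label_distribution(ratings: torch.Tensor, n_bins: int = 5,
                            sigma_floor: float = 0.1):
    """Build two target distributions from per-image rater scores (R,) on 1..n_bins.

    The rating distribution is the empirical histogram of raters' scores; the
    attractiveness distribution is a Gaussian over a finer score grid with the
    empirical mean and standard deviation (assumes R > 1 raters).
    """
    rating_dist = torch.histc(ratings, bins=n_bins, min=1, max=n_bins)
    rating_dist = rating_dist / rating_dist.sum()
    grid = torch.linspace(1, n_bins, 50)                        # finer score grid
    mu, sigma = ratings.mean(), ratings.std().clamp(min=sigma_floor)
    attract = torch.exp(-(grid - mu) ** 2 / (2 * sigma ** 2))
    return rating_dist, attract / attract.sum()
```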
Computer vision tasks can benefit from the estimation of the salient object regions and the interactions between those object regions. Identifying object regions involves utilizing pretrained models to perform object detection, object segmentation, and/or object pose estimation. However, this is infeasible in practice for the following reasons: 1) the object categories in the pretrained models' training datasets may not cover all the object categories encountered in general computer vision tasks, 2) the domain gap between the pretrained models' training datasets and the target task's dataset may affect performance, and 3) the bias and variance present in the pretrained models may leak into the target task, resulting in an inadvertently biased target model. To overcome these drawbacks, we propose to utilize the common rationale that a sequence of video frames captures a set of common objects and the interactions between them; thus, a notion of co-segmentation among the video frame features can equip the model with the ability to automatically focus on salient regions and improve the performance of the underlying task in an end-to-end manner. In this regard, we propose a generic module called the Co-Segmentation Activation Module (COSAM), which can be plugged into any CNN to promote co-segmentation-based attention among a sequence of video frame features. We demonstrate the application of COSAM in three video-based tasks, namely 1) video-based person re-ID, 2) video captioning, and 3) video action classification, and show that COSAM is able to capture the salient regions in the video frames, leading to notable performance improvements along with interpretable attention maps.
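A much-simplified sketch of co-segmentation-style gating across frames (COSAM itself involves more machinery): each location is weighted by its agreement with the other frames' summary descriptor, so regions common across the clip are emphasized:

```python
import torch
import torch.nn.functional as F

def cosam_attention(feats: torch.Tensor) -> torch.Tensor:
    """Gate each frame's spatial features by their agreement with the clip.

    feats: (T, C, H, W) with T > 1 frames (shapes assumed). Each location is
    weighted by the cosine similarity between its descriptor and the
    leave-one-out mean descriptor of the other frames.
    """
    t, c, h, w = feats.shape
    pooled = feats.mean(dim=(2, 3))                            # (T, C) per-frame summary
    others = (pooled.sum(0, keepdim=True) - pooled) / (t - 1)  # leave-one-out mean
    sim = F.cosine_similarity(feats, others[:, :, None, None], dim=1)  # (T, H, W)
    return feats * torch.sigmoid(sim).unsqueeze(1)             # gated features
```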
Humans can naturally and effectively find salient regions in complex scenes. Motivated by this observation, attention mechanisms were introduced into computer vision with the aim of imitating this aspect of the human visual system. Such an attention mechanism can be regarded as a dynamic weight adjustment process based on features of the input image. Attention mechanisms have achieved great success in many visual tasks, including image classification, object detection, semantic segmentation, video understanding, image generation, 3D vision, multi-modal tasks, and self-supervised learning. In this survey, we provide a comprehensive review of various attention mechanisms in computer vision and categorize them into channel attention, spatial attention, temporal attention, and branch attention; a related repository, https://github.com/MenghaoGuo/Awesome-Vision-Attentions, is dedicated to collecting related work. We also suggest future directions for attention mechanism research.
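As a concrete instance of the channel-attention category in this taxonomy, a standard Squeeze-and-Excitation block (a minimal implementation of the well-known design, with the reduction ratio as a typical default):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-Excitation-style channel attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global average pooling
        return x * w[:, :, None, None]         # excite: per-channel reweighting
```

The dynamic-weight view above is literal here: the block computes input-dependent weights and rescales the feature channels with them.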
Multi-task learning is an effective learning strategy for deep-learning-based facial expression recognition tasks. However, most existing methods give only limited consideration to feature selection when transferring information between different tasks, which may lead to task interference when training the multi-task network. To address this problem, we propose a novel selective feature-sharing method and establish a multi-task network for facial expression recognition and facial expression synthesis. The proposed method can effectively transfer beneficial features between different tasks while filtering out useless and harmful information. Moreover, we employ the facial expression synthesis task to enlarge and balance the training dataset, further enhancing the generalization ability of the proposed method. Experimental results show that the proposed method achieves state-of-the-art performance on the commonly used facial expression recognition benchmarks, which makes it a potential solution to real-world facial expression recognition problems.
The availability of the sheer volume of Copernicus Sentinel imagery has created new opportunities for large-scale land use land cover (LULC) mapping using deep learning, although training on such large datasets is a non-trivial task. In this work, we experiment with the BigEarthNet dataset for LULC image classification and benchmark different state-of-the-art models, including convolutional neural networks, multi-layer perceptrons, Vision Transformers, EfficientNets, and Wide Residual Network (WRN) architectures. Our aim is to balance classification accuracy, training time, and inference rate. We propose an EfficientNet-style framework for the compound scaling of WRNs in terms of network depth, width, and input data resolution, for efficiently training and testing different model setups. We design a novel scaled WRN architecture enhanced with an Efficient Channel Attention mechanism. Our proposed lightweight model, with far fewer trainable parameters, achieves a 4.5% improvement in average F-score classification accuracy across all 19 LULC classes, and trains two times faster than the ResNet50 state-of-the-art model that we use as a baseline. We provide more than 50 trained models, along with our code for distributed training on multiple GPU nodes.
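The Efficient Channel Attention mechanism referenced here has a compact canonical form: channel reweighting via a cheap 1-D convolution over pooled channel descriptors, avoiding the fully connected bottleneck of SE blocks. A minimal sketch (the kernel size shown is a common default, not this paper's tuned value):

```python
import torch
import torch.nn as nn

class EfficientChannelAttention(nn.Module):
    """ECA: channel attention via a 1-D convolution across channel descriptors."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x.mean(dim=(2, 3))                      # (B, C) global average pooling
        y = self.conv(y.unsqueeze(1)).squeeze(1)    # local cross-channel interaction
        return x * torch.sigmoid(y)[:, :, None, None]
```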
Convolutional neural networks (CNNs) have been applied successfully to compressed image sensing. However, due to the inductive biases of locality and weight sharing, convolution operations exhibit intrinsic limitations in modeling long-range dependencies. The Transformer, originally designed as a sequence-to-sequence model, excels at capturing global context thanks to its self-attention-based architecture, even though it may be equipped with limited localization abilities. This paper proposes a hybrid framework that integrates the detailed spatial information borrowed from CNNs with the global context provided by Transformers for enhanced representation learning. The proposed approach is an end-to-end compressed image sensing method composed of adaptive sampling and recovery. In the sampling module, images are measured block-by-block by a learned sampling matrix. In the reconstruction stage, the measurements are projected into a dual stem: one is a CNN stem for modeling neighborhood relationships via convolution, and the other is a Transformer stem adopting a global self-attention mechanism. The dual-branch structure is concurrent, and the local features and global representations are fused at different resolutions to maximize the complementarity of the features. Furthermore, we explore a progressive strategy and window-based Transformer blocks to reduce the parameter count and computational complexity. Experimental results demonstrate the effectiveness of the dedicated Transformer-based architecture for compressed sensing, which achieves superior performance compared with state-of-the-art methods on different datasets.
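The block-wise learned sampling stage admits a short sketch; the block size and sampling ratio below are assumptions chosen only for illustration:

```python
import torch
import torch.nn as nn

class BlockSampling(nn.Module):
    """Block-wise compressed sampling with a learned measurement matrix.

    Each B x B image block is flattened and projected to M << B*B measurements.
    """

    def __init__(self, block: int = 32, ratio: float = 0.1):
        super().__init__()
        m = max(1, int(block * block * ratio))
        self.phi = nn.Linear(block * block, m, bias=False)  # learned sampling matrix
        self.block = block

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # img: (B, 1, H, W) with H and W divisible by the block size.
        b = self.block
        patches = img.unfold(2, b, b).unfold(3, b, b)        # (B, 1, H/b, W/b, b, b)
        patches = patches.reshape(img.size(0), -1, b * b)    # one row per block
        return self.phi(patches)                             # (B, n_blocks, M)
```

Because the projection is a learnable linear layer, the sampling matrix is trained jointly with the reconstruction network, which is what makes the sampling adaptive.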