Recognizing Families In the Wild (RFIW), held as a data challenge in conjunction with the 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG), is a large-scale, multi-track evaluation of visual kinship recognition. This is the fifth edition of RFIW, for which we continue our efforts to attract scholars, bring together professionals, publish new work, and discuss prospects. In this paper, we summarize the submissions to the three RFIW tasks of this year: specifically, we review the results for kinship verification, tri-subject verification, and family member search and retrieval. We take a look at the RFIW problem as a whole, share current efforts, and make recommendations for promising future directions.
Kinship verification, determining whether two people are related as parent-child, siblings, or grandparent-grandchild, is important in social media applications, forensic investigations, finding missing children, and reuniting families. We demonstrate high-quality kinship verification by participating in the Recognizing Families In the Wild 2021 challenge, which provides the largest publicly available dataset in the field. Our approach was among the top three winning entries in the competition. Our expert and base models were written by OpenAI Codex, which is trained on both text and code. We use Codex to generate model variants and also demonstrate its ability to generate entire runnable programs for relationship-specific kinship verification tasks.
Demographic bias exists in current models for face recognition (FR). Our Balanced Faces in the Wild (BFW) dataset serves as a proxy to measure bias across ethnic and gender subgroups, allowing one to characterize FR performance for each subgroup. We show that results are non-optimal when a single score threshold determines whether sample pairs are genuine or impostors. Across subgroups, performance often varies greatly from the global average. Thus, specific error rates only hold for populations that match the validation data. We mitigate the performance imbalance with a novel domain adaptation learning scheme applied to the facial features extracted with state-of-the-art neural networks. This technique not only balances performance but also improves overall performance. A benefit of the proposal is that identity information is preserved in the facial features while the demographic information they contain is reduced. The removal of demographic knowledge prevents potential future bias from being injected into decision-making, and improves privacy since less information about individuals is available or inferable. We explore this qualitatively; we also show quantitatively that subgroup classifiers can no longer learn from the features produced by the proposed domain adaptation scheme. For source code and data descriptions, see https://github.com/visionjo/facerec-bias-bfw.
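The abstract above does not spell out the domain adaptation scheme itself; purely as an illustration of the general idea, here is a minimal sketch of one common way to suppress demographic information in fixed face embeddings while preserving identity: an adversarial projection head trained with a gradient-reversal layer. All class names, dimensions, and hyperparameters are assumptions made for the sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class DebiasingHead(nn.Module):
    """Maps fixed face embeddings to a new space that keeps identity information
    but is adversarially pushed to hide demographic-subgroup information."""
    def __init__(self, dim=512, n_ids=1000, n_groups=8, lamb=1.0):
        super().__init__()
        self.project = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.id_classifier = nn.Linear(dim, n_ids)        # keeps the identity signal
        self.group_classifier = nn.Linear(dim, n_groups)  # adversary: tries to recover the subgroup
        self.lamb = lamb

    def forward(self, emb):
        z = self.project(emb)
        id_logits = self.id_classifier(z)
        group_logits = self.group_classifier(GradReverse.apply(z, self.lamb))
        return z, id_logits, group_logits

# one hypothetical training step on pre-extracted features
model = DebiasingHead()
ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

emb = torch.randn(32, 512)            # pre-extracted face features (stand-in data)
ids = torch.randint(0, 1000, (32,))   # identity labels
groups = torch.randint(0, 8, (32,))   # demographic subgroup labels

z, id_logits, group_logits = model(emb)
# gradient reversal makes the second term adversarial for the projection layers
loss = ce(id_logits, ids) + ce(group_logits, groups)
loss.backward()
opt.step()
```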
Face presentation attack detection (PAD) has received increasing attention due to the widely recognized vulnerability of face recognition to spoofing attacks. The state of the art in face PAD has been assessed in eight international competitions organized in conjunction with major biometrics and computer vision conferences in 2011, 2013, 2017, 2019, 2020, and 2021, each introducing new challenges to the research community. In this chapter, we present the design and results of the five most recent competitions, from 2019 until 2021. The first two challenges aimed at evaluating the effectiveness of face PAD in multi-modal settings that include near-infrared (NIR) and depth modalities in addition to color camera data, while the latest three competitions focused on evaluating the domain and attack-type generalization abilities of face PAD algorithms operating on conventional color images and videos. We also discuss the lessons learned from the competitions and the future challenges in the field.
Recent advances in AI, especially deep learning, have led to a significant increase in the creation of new realistic synthetic media (video, image, and audio) and in the manipulation of existing media, which has led to the creation of the new term 'deepfake'. Based on research literature and resources in both English and Chinese, this paper gives a comprehensive overview of deepfakes, covering multiple important aspects of this emerging concept, including 1) different definitions, 2) commonly used performance metrics and standards, and 3) deepfake-related datasets, challenges, competitions, and benchmarks. In addition, the paper reports a meta-review of 12 deepfake-related survey papers published in 2020 and 2021, focusing not only on the above aspects but also on the analysis of key challenges and recommendations. We believe that, in terms of the aspects covered, this paper is the most comprehensive review of deepfakes to date, and the first to cover both English and Chinese literature and resources.
In this paper, we investigate the problem of predictive confidence in face and kinship verification. Most existing face and kinship verification methods focus on accuracy performance while ignoring confidence estimation for their prediction results. However, confidence estimation is essential for modeling reliability in such high-risk tasks. To address this issue, we first introduce a novel yet simple confidence measure for face and kinship verification, which allows the verification models to transform the similarity score into a confidence score for a given face pair. We further propose a confidence-calibrated approach called angular scaling calibration (ASC). ASC is easy to implement and can be directly applied to existing face and kinship verification models without model modifications, yielding accuracy-preserving and confidence-calibrated probabilistic verification models. To the best of our knowledge, our approach is the first general confidence-calibrated solution to face and kinship verification in a modern context. We conduct extensive experiments on four widely used face and kinship verification datasets, and the results demonstrate the effectiveness of our approach.
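Angular scaling calibration itself is not specified beyond the summary above; as a rough illustration of what a post-hoc, accuracy-preserving calibration of angular (cosine) similarity scores can look like, the sketch below fits a Platt-style scale and shift on held-out labeled pairs. The function name and toy numbers are assumptions for the sketch, not the paper's method.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_angular(scores, labels):
    """Fit (a, b) so that sigmoid(a * cos_sim + b) approximates the probability that a
    pair is genuine: a Platt-style post-hoc calibration of cosine similarity scores.
    The underlying verification model and its decisions are left unchanged."""
    def nll(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        p = np.clip(p, 1e-7, 1 - 1e-7)
        return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))
    res = minimize(nll, x0=np.array([10.0, 0.0]), method="Nelder-Mead")
    return res.x  # (a, b)

# hypothetical usage on held-out pairs: scores are cosine similarities, label 1 = same identity / kin
val_scores = np.array([0.71, 0.12, 0.55, -0.05, 0.80, 0.33])
val_labels = np.array([1, 0, 1, 0, 1, 0])
a, b = calibrate_angular(val_scores, val_labels)
confidence = 1.0 / (1.0 + np.exp(-(a * 0.6 + b)))  # calibrated confidence for a new pair scoring 0.6
```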
Face morphing attack detection is challenging and presents a concrete and severe threat to face verification systems. Reliable detection mechanisms for such attacks, tested with robust cross-database protocols and unknown morphing tools, remain a research challenge. This paper proposes a framework following a few-shot learning approach that shares image information through a Siamese network trained with a triplet semi-hard loss to tackle morphing attack detection and strengthen the clustering-based classification process. The network compares triplets of bona fide or potentially morphed images with morphed and bona fide face images. Our results show that this network clusters the data points and assigns them to classes so as to obtain lower equal error rates in a cross-database scenario while sharing only a small number of images from an unknown database; few-shot learning helps boost the learning process. Experimental results in a cross-database setting, trained on FRGCv2 and tested on the FERET and AMSL open-access databases, reduced the BPCER10 from 43% to 4.91% using ResNet50 and to 5.50% using MobileNetV2.
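The triplet semi-hard loss mentioned above can be sketched as follows; this is a generic batch semi-hard mining implementation under assumed labels (1 = bona fide, 0 = morph) and embedding sizes, not the authors' exact training code.

```python
import torch
import torch.nn.functional as F

def semi_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Batch semi-hard triplet loss: for each anchor-positive pair, pick a negative that is
    farther than the positive but still within the margin; fall back to the farthest negative
    when no semi-hard one exists."""
    emb = F.normalize(embeddings, dim=1)
    dist = torch.cdist(emb, emb)                      # pairwise Euclidean distances
    losses = []
    for a in range(emb.size(0)):
        pos_mask = labels == labels[a]
        pos_mask[a] = False
        neg_mask = labels != labels[a]
        if not pos_mask.any() or not neg_mask.any():
            continue
        for p in torch.nonzero(pos_mask).flatten():
            d_ap = dist[a, p]
            d_an = dist[a][neg_mask]
            semi_hard = d_an[(d_an > d_ap) & (d_an < d_ap + margin)]
            d_neg = semi_hard.min() if semi_hard.numel() > 0 else d_an.max()
            losses.append(F.relu(d_ap - d_neg + margin))
    return torch.stack(losses).mean() if losses else embeddings.new_tensor(0.0)

# hypothetical batch of 16 embeddings coming from a shared (Siamese) backbone
emb = torch.randn(16, 128, requires_grad=True)
labels = torch.tensor([0, 1] * 8)                     # bona fide vs. morphed (stand-in labels)
loss = semi_hard_triplet_loss(emb, labels)
loss.backward()
```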
With the widespread adoption of facial biometrics, the problem of distinguishing identical twins and non-twin look-alikes in automated face recognition (FR) applications has become increasingly important. Because the facial similarity of both identical twins and look-alikes is very high, these face pairs represent the hardest cases for face recognition tools. This work presents one of the largest twin datasets compiled to date in order to address two challenges: 1) to determine a baseline measure of facial similarity between identical twins, and 2) to apply this similarity measure to determine the impact of doppelgangers, or look-alikes, on FR performance for large face datasets. The facial similarity measure is determined via a deep convolutional neural network. This network is trained on a tailored verification task designed to encourage the network to group highly similar face pairs in the embedding space, and it achieves a test AUC of 0.9799. The proposed network provides a quantitative similarity score for any two given faces and has been applied to large-scale face datasets to identify similar face pairs. An additional analysis was also performed that correlates the comparison scores returned by a face recognition tool with the similarity scores returned by the proposed network.
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here, are computer vision techniques that focus on the analysis of people and faces in visual data and have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer vision based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer vision based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and recent solutions to mitigate this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion on the main open challenges and future research direction is given.
Developing and least developed countries face the dire challenge of ensuring that each child in their country receives required doses of vaccination, adequate nutrition and proper medication. International agencies such as UNICEF, WHO and WFP, among other organizations, strive to find innovative solutions to determine which children have received the benefits and which have not. Biometric recognition systems have been sought out to help solve this problem. To that end, this report establishes a baseline accuracy of a commercial contactless palmprint recognition system that may be deployed for recognizing children in the age group of one to five years old. On a database of contactless palmprint images of one thousand unique palms from 500 children, we establish SOTA authentication accuracy of 90.85% @ FAR of 0.01%, rank-1 identification accuracy of 99.0% (closed set), and FPIR=0.01 @ FNIR=0.3 for open-set identification using PalmMobile SDK from Armatura.
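For readers unfamiliar with the reported operating points, the sketch below shows one plausible way to compute rank-1 (closed-set) accuracy and FPIR/FNIR (open-set) from similarity scores; the variable names and the assumption of L2-normalized embeddings are mine, and the internals of the Armatura PalmMobile SDK are not reflected here.

```python
import numpy as np

def rank1_accuracy(probe_emb, gallery_emb, probe_ids, gallery_ids):
    """Closed-set rank-1 identification: the nearest gallery entry must share the probe's identity."""
    sim = probe_emb @ gallery_emb.T                 # assumes L2-normalized embeddings
    best = gallery_ids[np.argmax(sim, axis=1)]
    return float(np.mean(best == probe_ids))

def open_set_rates(probe_emb, gallery_emb, probe_ids, gallery_ids, tau):
    """FPIR: fraction of unenrolled probes whose best score exceeds tau.
       FNIR: fraction of enrolled probes that are rejected or misidentified at tau."""
    sim = probe_emb @ gallery_emb.T
    best_idx = np.argmax(sim, axis=1)
    best_score = sim[np.arange(len(probe_emb)), best_idx]
    enrolled = np.isin(probe_ids, gallery_ids)
    fpir = float(np.mean(best_score[~enrolled] >= tau)) if (~enrolled).any() else 0.0
    identified = (gallery_ids[best_idx] == probe_ids) & (best_score >= tau)
    fnir = float(np.mean(~identified[enrolled])) if enrolled.any() else 0.0
    return fpir, fnir

# toy usage with random, L2-normalized stand-in embeddings
g = np.random.randn(500, 128); g /= np.linalg.norm(g, axis=1, keepdims=True)
p = np.random.randn(200, 128); p /= np.linalg.norm(p, axis=1, keepdims=True)
g_ids = np.arange(500)
p_ids = np.concatenate([np.arange(100), np.arange(1000, 1100)])  # 100 enrolled, 100 not
print(rank1_accuracy(p[:100], g, p_ids[:100], g_ids))
print(open_set_rates(p, g, p_ids, g_ids, tau=0.5))
```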
Cross-spectral face recognition (CFR) aims to recognize individuals when the compared face images stem from different sensing modalities, for instance infrared versus visible. While CFR is inherently more challenging than classical face recognition due to significant variations in facial appearance associated with the modality gap, it is superior in scenarios with limited or challenging illumination, as well as in the presence of presentation attacks. Recent advances in artificial intelligence related to convolutional neural networks (CNNs) have enabled significant performance improvements in CFR. Motivated by this, the contribution of this survey is threefold. First, we provide an overview of CFR, targeted at comparing face images captured in different spectra, by formalizing CFR and then presenting concrete related applications. Second, we explore suitable spectral bands for recognition and discuss recent CFR methods, placing emphasis on neural networks. In particular, we revisit techniques for extracting and comparing heterogeneous features, as well as datasets. We enumerate the strengths and limitations of the different spectra and the associated algorithms. Finally, we discuss research challenges and future lines of research.
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of <instrument, verb, target> combination delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them, in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition which is of utmost importance for the development of AI in surgery.
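The mAP figures above are multi-label average precision scores over triplet classes; the minimal sketch below illustrates one simplified way such a frame-level metric can be computed, and is not the challenge's official evaluation code.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def mean_average_precision(y_true, y_score):
    """mAP over triplet classes: average precision per <instrument, verb, target> class,
    averaged over the classes that actually occur in the split.
    y_true: (frames, classes) binary matrix; y_score: (frames, classes) prediction scores."""
    aps = []
    for c in range(y_true.shape[1]):
        if y_true[:, c].any():                      # skip classes absent from the split
            aps.append(average_precision_score(y_true[:, c], y_score[:, c]))
    return float(np.mean(aps))

# toy example: 100 frames, 10 hypothetical triplet classes
rng = np.random.default_rng(0)
y_true = (rng.random((100, 10)) < 0.1).astype(int)
y_score = np.clip(y_true * 0.6 + rng.random((100, 10)) * 0.5, 0, 1)
print(mean_average_precision(y_true, y_score))
```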
Coronavirus disease 2019 (COVID-19) has continued to pose enormous challenges to the world since its outbreak. To combat the disease, a series of artificial intelligence (AI) techniques have been developed and applied to real-world scenarios such as safety monitoring, disease diagnosis, infection risk assessment, and lesion segmentation in COVID-19 CT scans. The coronavirus epidemic has forced people to wear face masks to counteract the transmission of the virus, which also brings difficulties to monitoring large groups of people wearing masks. In this paper, we focus primarily on AI techniques for masked face detection and the related datasets. Starting with a description of masked face detection datasets, we survey the recent advances; thirteen available datasets are described and discussed in detail. The methods are then roughly divided into two categories: conventional methods and neural-network-based methods. Conventional methods, which usually train boosting algorithms with hand-crafted features, account for a small proportion. Neural-network-based methods are further classified into three parts according to the number of processing stages. Representative algorithms are described in detail, coupled with brief descriptions of some typical techniques. Finally, we summarize the recent benchmarking results, discuss the limitations of the datasets and methods, and outline future research directions. To the best of our knowledge, this is the first survey of masked face detection methods and datasets. We hope our survey can provide some help in the fight against the epidemic.
Face detection is to search an image for all possible face regions and to locate the faces if any are present. Many applications, including face recognition, facial expression recognition, face tracking, and head pose estimation, assume that the location and size of the face in the image are known. In recent decades, researchers have created many typical and efficient face detectors, from the Viola-Jones face detector to the current CNN-based ones. However, with the tremendous increase of images and videos exhibiting variations in face scale, appearance, expression, occlusion, and pose, traditional face detectors are challenged to detect the various faces "in the wild". The emergence of deep learning techniques has brought remarkable breakthroughs in detection, at a considerable computational price. This paper introduces representative deep-learning-based methods and presents a deep and thorough analysis in terms of accuracy and efficiency. We further compare and discuss the popular and challenging datasets and their evaluation metrics. A comprehensive comparison of several successful deep-learning-based face detectors is conducted to reveal their efficiency using two metrics: FLOPs and latency. The paper can guide the choice of appropriate face detectors for different applications, as well as the development of more efficient and accurate detectors.
Recent years have witnessed the breakthrough of face recognition with deep convolutional neural networks. Dozens of papers in the field of FR are published every year. Some of them have been applied in industry and play an important role in everyday life, such as device unlocking, mobile payment, and so on. This paper provides an introduction to face recognition, including its history, pipeline, algorithms based on conventional manually designed features or deep learning, mainstream training and evaluation datasets, and related applications. We have analyzed and compared as many state-of-the-art works as possible, and have also carefully designed a set of experiments to study the effect of backbone size and data distribution. This survey serves as the material for the tutorial The Practical Face Recognition Technology in the Industrial World at FG2023.
The International Workshop on Reading Music Systems (WoRMS) is a workshop that tries to connect researchers who develop systems for reading music, such as in the field of Optical Music Recognition, with other researchers and practitioners that could benefit from such systems, like librarians or musicologists. The relevant topics of interest for the workshop include, but are not limited to: Music reading systems; Optical music recognition; Datasets and performance evaluation; Image processing on music scores; Writer identification; Authoring, editing, storing and presentation systems for music scores; Multi-modal systems; Novel input-methods for music to produce written music; Web-based Music Information Retrieval services; Applications and projects; Use-cases related to written music. These are the proceedings of the 3rd International Workshop on Reading Music Systems, held in Alicante on the 23rd of July 2021.
The use of the face as a biometric identity trait is motivated by the contactless nature of the capture process and the high accuracy of recognition algorithms. Following the current COVID-19 pandemic, wearing face masks has been imposed in public places to contain the pandemic. However, the occlusion of the face caused by wearing a mask is an emerging challenge for face recognition systems. In this paper, we present a solution to improve masked face recognition performance. Specifically, we propose the Embedding Unmasking Model (EUM), operated on top of existing face recognition models. We also propose a novel loss function, the Self-restrained Triplet (SRT), which enables the EUM to produce embeddings similar to those of unmasked faces of the same identities. The evaluation results achieved with three face recognition models, two real masked datasets, and two synthetically generated masked face datasets demonstrate that our proposed approach significantly improves performance in most experimental settings.
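The EUM architecture and SRT loss are only named in the summary above; as a hedged sketch of the overall idea, the code below trains a small MLP on top of frozen FR embeddings with a generic triplet-style objective that pulls masked embeddings toward unmasked embeddings of the same identity. Dimensions, names, and the loss form are stand-in assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingUnmaskingModel(nn.Module):
    """Small MLP operating on top of a frozen FR model: maps the embedding of a
    masked face toward the embedding of the same unmasked identity."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, masked_emb):
        return F.normalize(self.net(masked_emb), dim=1)

def triplet_to_unmasked(out, unmasked_same, unmasked_other, margin=0.2):
    """Triplet-style objective (a stand-in for the paper's Self-restrained Triplet loss):
    pull the unmasked embedding of the same identity close, push other identities away."""
    d_pos = (out - unmasked_same).pow(2).sum(dim=1)
    d_neg = (out - unmasked_other).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

# one hypothetical training step on stand-in embeddings
eum = EmbeddingUnmaskingModel()
opt = torch.optim.Adam(eum.parameters(), lr=1e-4)
masked = F.normalize(torch.randn(32, 512), dim=1)   # embeddings of masked faces
same = F.normalize(torch.randn(32, 512), dim=1)     # unmasked embeddings, same identities
other = F.normalize(torch.randn(32, 512), dim=1)    # unmasked embeddings, other identities
loss = triplet_to_unmasked(eum(masked), same, other)
loss.backward()
opt.step()
```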
Recent face recognition experiments on a major benchmark (LFW [14]) show stunning performance: a number of algorithms achieve near-perfect scores, surpassing human recognition rates. In this paper, we advocate evaluations at the million scale (LFW includes only 13K photos of 5K people). To this end, we have assembled the MegaFace dataset and created the first MegaFace challenge. Our dataset includes one million photos that capture more than 690K different individuals. The challenge evaluates performance of algorithms with increasing numbers of "distractors" (going from 10 to 1M) in the gallery set. We present both identification and verification performance, evaluate performance with respect to pose and a person's age, and compare as a function of training data size (#photos and #people). We report results of state-of-the-art and baseline algorithms. The MegaFace dataset, baseline code, and evaluation scripts are all publicly released for further experimentation.
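The distractor protocol can be sketched as follows: rank-1 identification is re-measured as the gallery is padded with growing numbers of unrelated embeddings. This toy version assumes L2-normalized embeddings and that probe i's true mate sits at gallery row i; it is an illustration of the idea, not the official MegaFace evaluation scripts.

```python
import numpy as np

def rank1_with_distractors(probe, gallery_true, distractors, sizes=(10, 100, 1000)):
    """Rank-1 identification accuracy as the gallery is padded with growing numbers of
    distractor embeddings, in the spirit of the MegaFace protocol."""
    results = {}
    for n in sizes:
        gallery = np.vstack([gallery_true, distractors[:n]])
        sim = probe @ gallery.T                     # assumes L2-normalized embeddings
        # the true mates occupy the first len(gallery_true) rows, one per probe
        results[n] = float(np.mean(np.argmax(sim, axis=1) == np.arange(len(probe))))
    return results

# toy usage: 100 probes with known mates plus up to 1000 distractors
rng = np.random.default_rng(0)
def unit(x): return x / np.linalg.norm(x, axis=1, keepdims=True)
mates = unit(rng.standard_normal((100, 128)))
probes = unit(mates + 0.1 * rng.standard_normal((100, 128)))
distractors = unit(rng.standard_normal((1000, 128)))
print(rank1_with_distractors(probes, mates, distractors))
```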
Most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem. These parameters include such variables as position, pose, lighting, background, camera quality, and gender. While there are many applications for face recognition technology in which one can control the parameters of image acquisition, there are also many applications in which the practitioner has little or no control over such parameters. This database, Labeled Faces in the Wild, is provided as an aid in studying the latter, unconstrained, recognition problem. The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life. The database exhibits "natural" variability in factors such as pose, lighting, race, accessories, occlusions, and background. In addition to describing the details of the database, we provide specific experimental paradigms for which the database is suitable. This is done in an effort to make research performed with the database as consistent and comparable as possible. We provide baseline results, including results of a state of the art face recognition system combined with a face alignment system. To facilitate experimentation on the database, we provide several parallel databases, including an aligned version.
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the chal-