We apply computer vision pose estimation techniques developed expressly for the data-scarce infant domain to the study of torticollis, a common condition in infants for which early identification and treatment is critical. Specifically, we use a combination of facial landmark and body joint estimation techniques designed for infants to estimate a range of geometric measures pertaining to face and upper body symmetry, drawn from an array of sources in the physical therapy and ophthalmology research literature on torticollis. We gauge performance with a range of metrics and show that the estimates of most of these geometric measures are successful, yielding strong to very strong Spearman's $\rho$ correlation with ground truth values. Furthermore, we show that these estimates, derived from pose estimation neural networks designed for the infant domain, cleanly outperform estimates derived from more widely known networks designed for the adult domain.
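To make the notion of a landmark-based symmetry measure concrete, here is a minimal stdlib-Python sketch of one plausible measure of this kind: the angle between the inter-ocular line and the inter-shoulder line. The specific measure and landmark names are illustrative assumptions, not taken from the paper.

```python
import math

def segment_angle(p, q):
    """Angle (degrees) of the segment p->q relative to the horizontal axis."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def head_tilt_measure(left_eye, right_eye, left_shoulder, right_shoulder):
    """Absolute angle between the inter-ocular and inter-shoulder lines.

    A perfectly symmetric upright pose yields 0 degrees; larger values
    indicate lateral head tilt relative to the torso.
    """
    eye_angle = segment_angle(left_eye, right_eye)
    shoulder_angle = segment_angle(left_shoulder, right_shoulder)
    diff = abs(eye_angle - shoulder_angle) % 360.0
    return min(diff, 360.0 - diff)

# Example: eyes tilted by roughly 10 degrees while the shoulders are level.
tilt = head_tilt_measure((0.0, 0.0), (10.0, 1.76), (-5.0, 8.0), (15.0, 8.0))
```

Such a scalar measure, computed once per image from estimated landmarks, is what a rank correlation like Spearman's $\rho$ would then compare against clinician-derived ground truth.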
Bilateral postural symmetry plays a key role as a potential risk marker for autism spectrum disorder (ASD) and as a symptom of congenital muscular torticollis (CMT) in infants, but current methods of assessing symmetry require laborious clinical expert assessments. In this paper, we develop a computer-vision-based infant symmetry assessment system, leveraging 3D human pose estimation for infants. Evaluation and calibration of our system against ground truth assessments is complicated by our findings, from a survey of human ratings of angle and symmetry, that such ratings exhibit low inter-rater reliability. To rectify this, we develop a Bayesian estimator of the ground truth, derived from a probabilistic graphical model of fallible human raters. We show that the 3D infant pose estimation model can achieve 68% area under the receiver operating characteristic curve in predicting the Bayesian aggregate labels, compared to only 61% for a 2D infant pose estimation model and 61% and 60% for 3D and 2D adult pose estimation models, highlighting the importance of 3D poses and of infant domain knowledge in assessing infant body symmetry. Our survey analysis also reveals that human ratings are susceptible to higher levels of bias and inconsistency, and our final 3D-pose-based symmetry assessment system is therefore calibrated on, but not directly supervised by, the Bayesian-aggregated human ratings, yielding higher consistency and lower levels of inter-limb assessment bias.
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data and have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer-vision-based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions, and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 in such research and to present a comprehensive review of the work done in the computer-vision-based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and to recent solutions for mitigating this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19-related applications is provided. Finally, to help advance the field further, a discussion of the main open challenges and future research directions is given.
Studying facial expressions is a notoriously difficult endeavor. Recent advances in the field of affective computing have yielded impressive progress in automatically detecting facial expressions from pictures and videos. However, much of this work has yet to be widely disseminated in social science domains such as psychology. Current state-of-the-art models require considerable domain expertise that is not traditionally incorporated into social science training programs. Furthermore, there is a notable absence of user-friendly and open-source software that provides a comprehensive set of tools and functions supporting facial expression research. In this paper, we introduce Py-Feat, an open-source Python toolbox that provides support for detecting, preprocessing, analyzing, and visualizing facial expression data. Py-Feat makes it easy for domain experts to disseminate and benchmark computer vision models, and for end users to quickly process, analyze, and visualize facial expression data. We hope this platform will facilitate increased use of facial expression data in human behavior research.
Quantitative cephalometric analysis is the most widely used clinical and research tool in modern orthodontics. Accurate localization of cephalometric landmarks enables the quantification and classification of anatomical abnormalities; however, the traditional manual way of marking these landmarks is a very tedious job. Endeavours have constantly been made to develop automated cephalometric landmark detection systems, but they remain inadequate for orthodontic applications. The fundamental reason is that the publicly available datasets, and the number of training images they provide, are insufficient for an AI model to perform well. To facilitate the development of robust AI solutions for morphometric analysis, we organise the CEPHA29 Automatic Cephalometric Landmark Detection Challenge in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2023). In this context, we provide the largest known publicly available dataset, consisting of 1000 cephalometric X-ray images. We hope that our challenge will not only drive forward research and innovation in automatic cephalometric landmark identification but will also signal the beginning of a new era for the discipline.
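Cephalometric landmark detection is typically scored with mean radial error and a success detection rate at a millimetre threshold. The sketch below illustrates these standard metrics in stdlib Python; the pixel spacing, threshold, and values are assumptions for illustration, not this challenge's official protocol.

```python
import math

def mean_radial_error(pred, gt, mm_per_px=0.1):
    """Mean radial error (mm) between predicted and ground-truth landmarks."""
    dists = [math.dist(p, g) * mm_per_px for p, g in zip(pred, gt)]
    return sum(dists) / len(dists)

def success_detection_rate(pred, gt, thresh_mm=2.0, mm_per_px=0.1):
    """Fraction of predicted landmarks within `thresh_mm` of ground truth."""
    hits = sum(math.dist(p, g) * mm_per_px <= thresh_mm for p, g in zip(pred, gt))
    return hits / len(pred)

gt = [(100.0, 100.0), (200.0, 150.0), (300.0, 120.0)]
pred = [(105.0, 100.0), (200.0, 180.0), (301.0, 121.0)]
mre = mean_radial_error(pred, gt)          # radial errors: 0.5, 3.0, ~0.14 mm
sdr = success_detection_rate(pred, gt)     # 2 of 3 landmarks within 2 mm
```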
Acupuncture is a technique in which practitioners stimulate specific points on the body. These points, called acupuncture points (or acupoints), anatomically define areas on the skin relative to certain landmarks on the body. Traditional acupuncture treatment relies on an experienced acupuncturist to locate acupoints precisely; novices often find this difficult because of the lack of visual cues. This project presents FaceAtlasAR, a prototype system that localizes and visualizes facial acupoints in an augmented reality (AR) context. The system aims to 1) localize facial acupoints and the auricular zone map in an anatomically grounded yet feasible way, 2) overlay the requested acupoints, by category, in AR, and 3) display the auricular zone map on the ears. We adopt MediaPipe, a cross-platform machine learning framework, to build a pipeline that runs on desktop and Android phones. We perform experiments on different benchmarks, including "in-the-wild" images, the AMI ear dataset, and our own annotated dataset. The results show a localization accuracy of 95% for facial acupoints, 99%/97% ("in-the-wild"/AMI) for the auricular zone map, and high robustness. With this system, users, even non-professionals, can quickly locate acupoints for self-acupressure treatment.
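Placing a point "relative to landmarks" can be sketched as a scale-invariant rule along the segment between two detected landmarks. The rule, landmark names, and fractions below are hypothetical, purely to illustrate the geometry; they are not FaceAtlasAR's actual acupoint definitions.

```python
def locate_point(lm_a, lm_b, fraction, offset_perp=0.0):
    """Place a point along the segment lm_a -> lm_b, optionally offset
    perpendicular to it. Landmarks are (x, y) pixels; `fraction` and
    `offset_perp` are relative to the segment's length, so the rule
    scales with the face size in the image."""
    dx, dy = lm_b[0] - lm_a[0], lm_b[1] - lm_a[1]
    x = lm_a[0] + fraction * dx - offset_perp * dy
    y = lm_a[1] + fraction * dy + offset_perp * dx
    return (x, y)

# Hypothetical rule: a point midway between the inner eye corners,
# shifted downward by 20% of the inter-corner distance (y grows downward).
inner_left, inner_right = (120.0, 90.0), (160.0, 90.0)
pt = locate_point(inner_left, inner_right, 0.5, 0.2)
```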
Fine-grained semantic segmentation of a person's face and head, including facial parts and head components, has progressed a great deal in recent years. However, it remains a challenging task, in which ambiguous occlusions and large pose variations are particularly difficult to handle. To overcome these difficulties, we propose a novel framework termed Mask-FPAN. It uses a de-occlusion module that learns to parse occluded faces in a semi-supervised way, taking into account face landmark localization, face occlusion estimation, and detected head poses. A 3D morphable face model combined with the UV GAN improves the robustness of 2D face parsing. In addition, we introduce two new datasets, named FaceOccMask-HQ and CelebAMaskOcc-HQ, for face parsing work. The proposed Mask-FPAN framework addresses the face parsing problem in the wild and achieves significant performance improvements, with mIoU rising from 0.7353 to 0.9013 compared to the state of the art on challenging face datasets.
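The reported mIoU figures follow the standard mean intersection-over-union definition for semantic segmentation, which can be sketched in a few lines of stdlib Python (the toy label maps below are illustrative only):

```python
def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over flat lists of per-pixel class labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and g == c for p, g in zip(pred, gt))
        union = sum(p == c or g == c for p, g in zip(pred, gt))
        if union:                     # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)

gt   = [0, 0, 1, 1, 2, 2]            # per-pixel ground-truth part labels
pred = [0, 0, 1, 2, 2, 2]            # one pixel of class 1 mislabelled as 2
miou = mean_iou(pred, gt, 3)
```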
Screening for any of the autism spectrum disorders is a complex process, often involving a hybrid of behavioural observations and questionnaire-based tests. Usually conducted in a controlled setting, this process requires trained clinicians or psychiatrists to carry out such assessments. Riding the wave of technical advancement in mobile platforms, several attempts have been made to incorporate such assessments on mobile and tablet devices. In this paper, we analyse videos generated during one such screening test. This paper reports the first use of the observer's distance from the display screen, measured while administering a sensory sensitivity test to children aged 2-7, as a behavioural marker for autism, and the potential for its use in casual home settings is promising.
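A viewer's distance from a screen can be estimated from video with a simple pinhole-camera relation, distance = focal_length × real_width / pixel_width. The sketch below illustrates this generic technique; the constants and function name are assumptions for illustration, not the paper's method.

```python
def viewer_distance_cm(face_px, face_cm=14.0, focal_px=950.0):
    """Estimate viewer-to-camera distance with a pinhole model.

    `face_cm` (an assumed average face width) and `focal_px` (the camera
    focal length in pixels, e.g. from a one-off calibration) are treated
    as known constants here.
    """
    return focal_px * face_cm / face_px

# A face appearing 190 px wide would sit roughly 70 cm from the camera.
d = viewer_distance_cm(190.0)
```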
Transferring facial expressions from people to 3D face models is a classic computer graphics problem. In this paper, we present a novel learning-based approach to transferring facial expressions from images and videos to a biomechanical model of the face-head-neck complex. Leveraging the Facial Action Coding System (FACS) as an intermediate representation of the expression space, we train a deep neural network that takes in FACS Action Units (AUs) and outputs suitable facial muscle and jaw activation signals for the musculoskeletal model. Through biomechanical simulation, the activations deform the facial soft tissue, thereby transferring the expression to the model. Our approach offers advantages over previous approaches. First, the facial expressions are anatomically consistent, because our biomechanical model emulates the relevant anatomy of the face, head, and neck. Second, by training the neural network on data generated from the biomechanical model itself, we eliminate the manual effort of collecting data for expression transfer. The success of our approach is demonstrated through experiments involving the transfer of facial expressions and head poses onto the model.
Deep neural networks have become prevalent in human analysis, boosting the performance of applications such as biometric recognition, action recognition, and person re-identification. However, the performance of such networks scales with the available training data. In human analysis, the demand for large-scale datasets poses a severe challenge, as data collection is tedious, time-consuming, and expensive, and must comply with data protection laws. Current research investigates the generation of synthetic data as an efficient and privacy-preserving alternative to collecting real data in the field. This survey introduces the basic definitions and methodologies essential for generating and employing synthetic data for human analysis. We summarise current state-of-the-art methods and the main benefits of using synthetic data. We also provide an overview of publicly available synthetic datasets and generation models. Finally, we discuss the limitations of the field, as well as open research questions. This survey is intended for researchers and practitioners in the field of human analysis.
3D gaze estimation is most often tackled as learning a direct mapping between input images and the gaze vector or its spherical coordinates. Recently, it has been shown that pose estimation of the face, body, and hands benefits from revising the learning target from a few pose parameters to dense 3D coordinates. In this work, we leverage this observation and propose to tackle 3D gaze estimation as regression of 3D eye meshes. We overcome the absence of compatible ground truth by fitting a rigid 3D eyeball template to existing gaze datasets, and propose to improve generalization by making use of widely available in-the-wild face images. To this end, we propose an automatic pipeline to retrieve robust gaze pseudo-labels from arbitrary face images and design a multi-view supervision framework to balance their effect during training. In our experiments, our method achieves an improvement of 30% over the state of the art in cross-dataset gaze estimation when no ground truth data are available for training, and of 7% when they are. We make our project publicly available at https://github.com/Vagver/dense3Deyes.
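Once a rigid eyeball is fitted, a gaze direction follows from simple geometry: the unit vector from the eyeball centre through the iris centre, convertible to yaw/pitch angles. The sketch below illustrates that geometric step only (coordinate conventions and values are assumptions; sign conventions vary across gaze datasets).

```python
import math

def gaze_from_eye_mesh(eyeball_center, iris_center):
    """Gaze as the unit vector from the eyeball centre through the iris
    centre, plus yaw/pitch in degrees. Here the camera looks along +z,
    so zero yaw means gazing straight toward the camera along -z."""
    v = [i - c for i, c in zip(iris_center, eyeball_center)]
    n = math.sqrt(sum(x * x for x in v))
    g = [x / n for x in v]
    yaw = math.degrees(math.atan2(-g[0], -g[2]))
    pitch = math.degrees(math.asin(g[1]))   # elevation; sign convention varies
    return g, yaw, pitch

# Eyeball at the origin, iris displaced toward the camera and to the side.
g, yaw, pitch = gaze_from_eye_mesh((0.0, 0.0, 0.0), (-0.3, 0.1, -0.94))
```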
We investigate how the regulation of brain arousal, altered in the clinical domain of psychiatry, modifies the statistical properties of facial behaviour. The underlying mechanism relates to the empirical interpretation of the vigilance continuum as a behavioural surrogate measurement for certain mental states. We name the proposed measure, in reference to classical scalp-based sensing, opto-electronic encephalography (OEG); it relies solely on modern camera-based real-time signal processing and computer vision. Based on a stochastic representation of the coherence of facial dynamics, reflecting hemifacial asymmetry in emotional expressions, we demonstrate a near-perfect distinction between patients and healthy controls, as well as between the mental disorders depression and schizophrenia, and the severity of symptoms. In contrast to the standard diagnostic process, which is time-consuming, subjective, and does not incorporate neurobiological data such as real-time facial dynamics, this objective stochastic modelling of affective responsiveness requires only a few minutes of video-based facial recordings. We also highlight the method's potential as a causal inference model in referral analysis for predicting the outcome of pharmacological treatment. All results were obtained in a clinical longitudinal data collection with 100 patients and 50 controls.
We introduce DAiSEE, the first multi-label video classification dataset, comprising 9068 video snippets captured from 112 users, for recognizing the user affective states of boredom, confusion, engagement, and frustration in the wild. The dataset has four levels of labels for each of the affective states, namely very low, low, high, and very high, which are crowd-annotated and correlated with gold-standard annotations created by a team of expert psychologists. We also establish benchmark results on this dataset using state-of-the-art video classification methods available today. We believe DAiSEE will provide the research community with challenges in feature extraction, context-based inference, and the development of suitable machine learning methods for related tasks, thereby providing a springboard for further research. The dataset is available for download at https://people.iith.ac.in/vineethnb/resources/daisee/daisee/index.html.
We propose a novel facial anchor and contour estimation framework, ACE-Net, for fine-level face alignment tasks. ACE-Net predicts facial anchors and contours that are richer than traditional facial landmarks, while overcoming ambiguities and inconsistencies in their definitions. We introduce a weakly supervised loss that enables ACE-Net to learn from existing facial landmark datasets without requiring additional annotation. In addition, synthetic data, from which ground-truth (GT) contours can easily be obtained, is used during training to bridge the density gap between landmarks and true facial contours. We evaluate the face alignment accuracy of ACE-Net on the HELEN dataset, which has 194 annotated facial landmarks, while training it with only 68 or 36 landmarks from the 300-W dataset. We show that the contours generated by ACE-Net are superior to contours derived directly from the 68 GT landmarks, and that ACE-Net also outperforms models trained with full supervision on contours derived from GT landmarks.
Imaging of facial affect can be used to measure psychophysiological attributes of children through adulthood, particularly for monitoring lifelong conditions such as autism spectrum disorder. Deep convolutional neural networks have shown promising results in classifying facial expressions of adults. However, classifier models trained on adult benchmark data are unsuitable for learning child expressions, owing to differences in psychophysical development. Similarly, models trained on child data perform poorly in adult expression classification. We propose domain adaptation to simultaneously align the distributions of adult and child expressions in a shared latent space, ensuring robust classification in either domain. Moreover, age variation in facial images has been studied in age-invariant face recognition but remains unexplored in adult-child expression classification. We draw inspiration from multiple fields and propose deep adaptive facial expressions fused with Beta-mixture-selected landmark features (FACE-BE-SELF) for adult-child facial expression classification. For the first time in the literature, a mixture of Beta distributions is used to decompose and select facial features based on their correlations with expression, domain, and identity factors. We evaluate FACE-BE-SELF on two pairs of adult-child datasets. Our proposed FACE-BE-SELF approach outperforms adult-child transfer learning and other baseline domain adaptation methods in aligning latent representations of adult and child expressions.
Because of their close relationship with humans, non-human apes (chimpanzees, bonobos, gorillas, orangutans, and gibbons, including siamangs) are of great scientific interest. The goal of understanding their complex behavior would be greatly advanced by the ability to perform video-based pose tracking. Tracking, however, requires high-quality annotated datasets of ape photographs. Here we present OpenApePose, a new public dataset of 71,868 photographs of six ape species in naturalistic contexts, annotated with 16 body landmarks. We show that a standard deep network (HRNet-W48) trained on ape photos can reliably track out-of-sample ape photos better than networks trained on monkeys (specifically, the OpenMonkeyPose dataset) or on humans (COCO). The trained network can track apes almost as well as those networks track their respective taxa, and models trained without one of the six ape species can track the held-out species better than the monkey and human models can. Ultimately, our analyses highlight the importance of large specialized databases for animal tracking systems and confirm the utility of our new ape database.
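Tracking quality for landmark-annotated datasets like this is commonly scored with a PCK-style metric (Percentage of Correct Keypoints). The sketch below shows the generic metric in stdlib Python; the threshold convention and toy values are assumptions, not this paper's exact evaluation protocol.

```python
import math

def pck(pred, gt, ref_scale, alpha=0.2, visible=None):
    """Percentage of Correct Keypoints: a prediction counts as correct
    when it lies within `alpha * ref_scale` (e.g. a fraction of torso or
    bounding-box size) of the ground-truth joint."""
    if visible is None:
        visible = [True] * len(gt)
    hits = total = 0
    for p, g, v in zip(pred, gt, visible):
        if not v:                    # skip joints not annotated as visible
            continue
        total += 1
        hits += math.dist(p, g) <= alpha * ref_scale
    return hits / total

gt = [(10.0, 10.0), (50.0, 10.0), (30.0, 60.0)]
pred = [(12.0, 11.0), (49.0, 12.0), (30.0, 90.0)]
score = pck(pred, gt, ref_scale=100.0)   # threshold = 20 px; last joint misses
```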
Infant motion analysis is a topic of critical importance in early childhood development studies. However, while the applications of human pose estimation are growing ever broader, models trained on large-scale adult pose datasets barely succeed at estimating infant poses, owing to infants' significantly different body ratios and the greater versatility of their poses. Moreover, privacy and security considerations hinder the availability of the infant pose data required to train a robust model from scratch. To address this problem, this paper presents (1) the construction and public release of a hybrid synthetic and real infant pose (SyRIP) dataset, with a small but diverse set of real infant images as well as generated synthetic infant poses, and (2) a multi-stage invariant representation learning strategy that can transfer knowledge from the adjacent domains of adult poses and synthetic infant images into our fine-tuned domain-adapted infant pose (FiDIP) estimation model. In our ablation study, with the same network structure, a model trained on the SyRIP dataset shows a noticeable improvement over one trained on the only other public infant pose dataset. Integrated with pose estimation backbone networks of varying complexity, FiDIP consistently outperforms the fine-tuned versions of those models. One of our best infant pose estimation performers, built on the state-of-the-art DarkPose model, achieves a mean average precision (mAP) of 93.6.
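The mAP figure quoted for pose estimation is conventionally built on Object Keypoint Similarity (OKS), a Gaussian similarity between predicted and ground-truth joints scaled by instance area and per-keypoint constants; mAP then averages precision over OKS thresholds. The sketch below shows the OKS core only, with illustrative constants rather than the COCO-standard per-joint values.

```python
import math

def oks(pred, gt, area, kappas, visible):
    """COCO-style Object Keypoint Similarity between a predicted and a
    ground-truth pose. Averaged exp(-d^2 / (2 * area * k^2)) over the
    visible keypoints; 1.0 means a perfect match."""
    num = den = 0.0
    for p, g, k, v in zip(pred, gt, kappas, visible):
        if not v:
            continue
        d2 = (p[0] - g[0]) ** 2 + (p[1] - g[1]) ** 2
        num += math.exp(-d2 / (2.0 * area * k ** 2))
        den += 1.0
    return num / den

gt = [(40.0, 40.0), (60.0, 40.0), (50.0, 70.0)]
pred = [(41.0, 40.0), (60.0, 43.0), (50.0, 70.0)]
similarity = oks(pred, gt, area=2500.0,
                 kappas=[0.025, 0.025, 0.107],   # illustrative constants
                 visible=[True, True, True])
```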
Current fully supervised facial landmark detection methods have progressed rapidly and achieve remarkable performance. However, they still suffer on faces with large poses and heavy occlusions, owing to inaccurate facial shape constraints and insufficient labelled training samples. In this paper, we propose a semi-supervised framework, a Self-Calibrated Pose Attention Network (SCPAN), to achieve more robust and precise facial landmark detection in challenging scenarios. Specifically, a Boundary-Aware Landmark Intensity (BALI) field is proposed to model more effective facial shape constraints by fusing boundary and landmark intensity field information. Moreover, a Self-Calibrated Pose Attention (SCPA) model is designed to provide a self-learned objective function that enforces intermediate supervision without label information, by introducing a self-calibration mechanism and a pose attention mask. We argue that by integrating the BALI fields and the SCPA model into a novel self-calibrated pose attention network, more facial prior knowledge can be learned, improving the detection accuracy and robustness of our method. Experimental results on challenging benchmark datasets demonstrate that our approach outperforms state-of-the-art methods in the literature.
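Landmark "intensity fields" in heatmap-based detectors are usually rendered as 2D Gaussians centred on each landmark. The sketch below shows that generic rendering step in stdlib Python; it illustrates the common technique, not the paper's specific BALI field construction.

```python
import math

def landmark_heatmap(w, h, cx, cy, sigma=2.0):
    """Render one landmark as a 2D Gaussian intensity field (row-major
    h x w grid), the usual supervision target for heatmap-based
    landmark detectors."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
             for x in range(w)] for y in range(h)]

hm = landmark_heatmap(8, 8, cx=3, cy=4)      # landmark at pixel (3, 4)
peak = max(max(row) for row in hm)           # intensity peaks at the landmark
```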
Automatic eye gaze estimation is an important problem in vision-based assistive technology, with use cases in different emerging topics such as augmented reality, virtual reality, and human-computer interaction. Over the past few years, there has been increasing interest in unsupervised and self-supervised learning paradigms, as they overcome the requirement for large-scale annotated data. In this paper, we propose RAZE, a region-guided self-supervised gaze representation learning framework that exploits non-annotated facial image data. RAZE learns a gaze representation via auxiliary supervision, namely pseudo gaze-zone classification, where the objective is to classify the visual field into different gaze zones (left, right, and centre) by leveraging the relative position of the pupil centres. We thereby automatically annotate pseudo gaze-zone labels for 154K web-crawled images and learn feature representations via an 'Ize-Net' framework. 'Ize-Net' is a capsule-layer-based CNN architecture that can efficiently capture rich eye representations. The discriminative power of the learned representation is evaluated on four benchmark datasets: CAVE, TabletGaze, MPII, and RT-GENE. We further evaluate the generalizability of the proposed network on two downstream tasks, driver gaze estimation and visual attention estimation, demonstrating the effectiveness of the learned eye gaze representation.
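Deriving a pseudo gaze-zone label from the pupil's relative position can be sketched as a simple rule over the pupil centre's offset within the eye opening. The thresholds, zone encoding, and values below are illustrative assumptions, not RAZE's actual labelling procedure.

```python
def gaze_zone(pupil_x, inner_corner_x, outer_corner_x, margin=0.15):
    """Pseudo gaze-zone label from the pupil centre's relative horizontal
    position between the eye corners: 0 -> left, 1 -> centre, 2 -> right."""
    lo, hi = sorted((inner_corner_x, outer_corner_x))
    rel = (pupil_x - lo) / (hi - lo)     # 0.0 at one corner, 1.0 at the other
    if rel < 0.5 - margin:
        return 0
    if rel > 0.5 + margin:
        return 2
    return 1

# Pupil near one corner, centred, and near the other corner.
labels = [gaze_zone(px, 100.0, 140.0) for px in (108.0, 120.0, 135.0)]
```

Labels produced this way are noisy, which is why they serve only as auxiliary supervision for representation learning rather than as final gaze targets.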
Developing innovative informatics approaches aimed at enhancing fetal monitoring is a burgeoning field of study in reproductive medicine. Several reviews have been conducted on artificial intelligence (AI) techniques for improving pregnancy outcomes, but they are limited by their focus on specific data, such as the mother's care during pregnancy. This systematic survey aims to explore how AI can assist fetal growth monitoring via ultrasound (US) images. We used eight medical and computer science bibliographic databases, including PubMed, Embase, PsycINFO, ScienceDirect, IEEE Xplore, the ACM Library, Google Scholar, and the Web of Science, retrieving studies published between 2010 and 2021. Data extracted from the studies were synthesised using a narrative approach. Of the 1269 retrieved studies, we included 107 distinct studies relevant to the survey's queries on the topic. We found that 2D ultrasound images were more popular (n=88) than 3D and 4D ultrasound images (n=19). Classification was the most commonly used method (n=42), followed by segmentation (n=31), classification integrated with segmentation (n=16), and other miscellaneous methods such as object detection, regression, and reinforcement learning (n=18). The most common anatomical regions in the pregnancy domain were the fetal head (n=43), followed by the fetal body (n=31), fetal heart (n=13), fetal abdomen (n=10), and finally the fetal face (n=10). Recent studies primarily used deep learning techniques (n=81), followed by machine learning (n=16), artificial neural networks (n=7), and reinforcement learning (n=2). AI techniques play a crucial role in predicting fetal diseases and identifying fetal anatomical structures during pregnancy. More studies are needed to validate this technology from physicians' perspectives, such as pilot studies and randomised controlled trials on AI and its applications in hospital settings.