Human and hand pose estimation is a fundamental problem in computer vision, and learning-based solutions require large amounts of annotated data. Given a limited annotation budget, a common approach to increasing label efficiency is active learning (AL), which selects the examples of highest value to annotate, but the selection strategy is typically kept fixed. In this work, we improve active learning for the problem of 3D pose estimation in the multi-view setting, which is of increasing importance in many application scenarios. We develop a framework that allows us to efficiently extend existing single-view strategies, and then propose two novel AL strategies that make full use of multi-view geometry. Furthermore, we demonstrate additional performance gains by incorporating predicted pseudo-labels, a form of self-training. Our system significantly outperforms baselines for 3D body and hand pose estimation on three large-scale benchmarks, including CMU Panoptic Studio and InterHand2.6M. Notably, on CMU Panoptic Studio we are able to match the performance of a fully supervised model using only 20% of the labeled training data.
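A minimal sketch of one way multi-view geometry could drive sample selection, in the spirit of the strategies described above (not the authors' code): unlabeled multi-view frames are scored by how strongly the per-view 2D predictions disagree once triangulated and re-projected. The DLT triangulation, camera format, and ranking heuristic are illustrative assumptions.

```python
# Hedged sketch: rank unlabeled multi-view frames by cross-view disagreement.
import numpy as np

def triangulate_dlt(points_2d, projections):
    """points_2d: (V, 2) pixel coords of one joint in V views; projections: (V, 3, 4)."""
    A = []
    for (x, y), P in zip(points_2d, projections):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, vh = np.linalg.svd(np.asarray(A))
    X = vh[-1]
    return X[:3] / X[3]                       # homogeneous -> Euclidean

def reprojection_residual(points_2d, projections):
    """Mean pixel error between 2D detections and the re-projection of their triangulation."""
    X = np.append(triangulate_dlt(points_2d, projections), 1.0)
    err = 0.0
    for (x, y), P in zip(points_2d, projections):
        proj = P @ X
        err += np.hypot(proj[0] / proj[2] - x, proj[1] / proj[2] - y)
    return err / len(points_2d)

def multiview_al_ranking(pred_2d, projections):
    """pred_2d: (N, V, J, 2) predicted joints for N unlabeled frames.
    Returns frame indices ranked by cross-view disagreement (most informative first)."""
    scores = []
    for frame in pred_2d:                      # (V, J, 2)
        per_joint = [reprojection_residual(frame[:, j], projections) for j in range(frame.shape[1])]
        scores.append(np.mean(per_joint))
    return np.argsort(scores)[::-1]
```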
We propose LiDAL, a novel active learning method for 3D LiDAR semantic segmentation by exploiting inter-frame uncertainty among LiDAR frames. Our core idea is that a well-trained model should generate robust results irrespective of viewpoints for scene scanning and thus the inconsistencies in model predictions across frames provide a very reliable measure of uncertainty for active sample selection. To implement this uncertainty measure, we introduce new inter-frame divergence and entropy formulations, which serve as the metrics for active selection. Moreover, we demonstrate additional performance gains by predicting and incorporating pseudo-labels, which are also selected using the proposed inter-frame uncertainty measure. Experimental results validate the effectiveness of LiDAL: we achieve 95% of the performance of fully supervised learning with less than 5% of annotations on the SemanticKITTI and nuScenes datasets, outperforming state-of-the-art active learning methods. Code release: https://github.com/hzykent/LiDAL.
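The following is an illustrative sketch (not the official LiDAL implementation) of an inter-frame uncertainty measure: given softmax predictions for the same 3D points observed in two registered LiDAR frames, each point is scored by the symmetric KL divergence between the two predictive distributions and by the entropy of their mean. The association of points across frames is assumed to be given.

```python
# Hedged sketch of an inter-frame divergence / entropy score for active selection.
import numpy as np

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=-1)

def kl_div(p, q, eps=1e-12):
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def inter_frame_scores(probs_a, probs_b):
    """probs_a, probs_b: (N, C) softmax outputs for the same N points seen in two frames."""
    divergence = 0.5 * (kl_div(probs_a, probs_b) + kl_div(probs_b, probs_a))
    mean_entropy = entropy(0.5 * (probs_a + probs_b))
    return divergence, mean_entropy            # both high -> unreliable, good AL candidates
```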
As an important data selection schema, active learning emerges as an essential component when iterating an Artificial Intelligence (AI) model. It becomes even more critical given the dominance in applications of deep neural network based models, which comprise a large number of parameters and are data hungry. Despite its indispensable role in developing AI models, research on active learning is not as intensive as other research directions. In this paper, we present a review of active learning through deep active learning approaches from the following perspectives: 1) technical advancements in active learning, 2) applications of active learning in computer vision, 3) industrial systems leveraging or with the potential to leverage active learning for data iteration, and 4) current limitations and future research directions. We expect this paper to clarify the significance of active learning in a modern AI model manufacturing process and to bring additional research attention to active learning. By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate the democratization of AI technologies by boosting model production at scale.
Supervised approaches to 3D pose estimation from a single image are highly effective when labeled data is abundant. However, because acquiring ground-truth 3D labels is labor intensive and time consuming, recent attention has shifted towards semi- and weakly-supervised learning. Generating an effective form of supervision with barely any annotation still poses major challenges in crowded scenes. In this paper, we propose to impose multi-view geometric constraints by means of a weighted differentiable triangulation and to use it as a form of self-supervision when no labels are available. We therefore train a 2D pose estimator such that its predictions correspond to the re-projection of the triangulated 3D pose, and train an auxiliary network on top of it to produce the final 3D poses. We complement the triangulation with a weighting mechanism that mitigates the impact of noisy predictions caused by self-occlusion or occlusion by other subjects. We demonstrate the effectiveness of our semi-supervised approach on the Human3.6M and MPI-INF-3DHP datasets, as well as on a new multi-view multi-person dataset featuring occlusion.
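A hedged sketch of the core mechanism described above, assuming known camera matrices and per-joint confidences from the 2D estimator: a confidence-weighted, differentiable DLT triangulation whose re-projection serves as a self-supervised loss for the 2D pose network. Details differ from the authors' implementation.

```python
# Hedged sketch: weighted differentiable triangulation + re-projection self-supervision.
import torch

def weighted_dlt(pts_2d, conf, P):
    """pts_2d: (V, 2), conf: (V,), P: (V, 3, 4) -> 3D point (3,). Fully differentiable."""
    rows = []
    for v in range(P.shape[0]):
        x, y = pts_2d[v]
        rows.append(conf[v] * (x * P[v, 2] - P[v, 0]))
        rows.append(conf[v] * (y * P[v, 2] - P[v, 1]))
    A = torch.stack(rows)                              # (2V, 4)
    _, _, vh = torch.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]

def reprojection_loss(pts_2d, conf, P):
    """Self-supervised loss: 2D predictions should agree with the re-projected triangulation."""
    X_h = torch.cat([weighted_dlt(pts_2d, conf, P), pts_2d.new_ones(1)])
    proj = torch.einsum('vij,j->vi', P, X_h)           # (V, 3)
    proj_2d = proj[:, :2] / proj[:, 2:3]
    return (conf * torch.linalg.norm(proj_2d - pts_2d, dim=-1)).mean()
```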
Although pose estimation is an important computer vision task, it requires expensive annotation and suffers from domain shift. In this paper, we investigate the problem of domain adaptive 2D pose estimation, which transfers knowledge learned on a synthetic source domain to a target domain without supervision. While several domain adaptive pose estimation models have been proposed recently, they are not generic but focus on either human or animal pose estimation, so their effectiveness is somewhat limited to specific scenarios. In this work, we propose a unified framework that generalizes well to a variety of domain adaptive pose estimation problems. We propose to align representations using both input-level and output-level cues (pixels and pose labels, respectively), which facilitates knowledge transfer from the source domain to the unlabeled target domain. Our experiments show that our method achieves state-of-the-art performance under various domain shifts. It outperforms existing pose estimation baselines by up to 4.5 percentage points (pp), by up to 7.4 pp for hand pose estimation, by up to 4.8 pp for animal pose estimation on dogs, and by 3.3 pp for sheep. These results indicate that our method is able to mitigate domain shift across various tasks and even across unseen domains and objects (e.g., trained on horses and tested on dogs). Our code will be publicly available at: https://github.com/visionlearninggroup/uda_poseestimation.
This paper investigates the task of 2D human whole-body pose estimation, which aims to localize keypoints across the entire human body, including the body, feet, face, and hands. We propose a single-network approach called ZoomNet to take into account the hierarchical structure of the full human body and to address the scale variation of different body parts. We further propose a neural architecture search framework called ZoomNAS to promote both the accuracy and the efficiency of whole-body pose estimation. ZoomNAS jointly searches the model architecture and the connections between different sub-modules, and automatically allocates computational complexity to the searched sub-modules. To train and evaluate ZoomNAS, we introduce the first large-scale 2D human whole-body dataset, COCO-WholeBody V1.0, which annotates 133 keypoints for in-the-wild images. Extensive experiments demonstrate the effectiveness of ZoomNAS and the significance of COCO-WholeBody V1.0.
We present TemPCLR, a new time-coherent contrastive learning approach for the structured regression task of 3D hand reconstruction. Unlike previous contrastive methods for hand pose estimation, our framework considers temporal consistency in its augmentation scheme and accounts for the differences between hand poses along the temporal direction. Our data-driven method leverages unlabeled videos and a standard CNN, without relying on synthetic data, pseudo-labels, or specialized architectures. It improves the performance of fully supervised hand reconstruction methods by 15.9% and 7.6% on the HO-3D and FreiHAND datasets respectively, thereby establishing new state-of-the-art performance. Finally, we show that our approach produces smoother hand reconstructions over time and is more robust to heavy occlusions than previous state-of-the-art works, which we demonstrate both quantitatively and qualitatively. Our code and models will be available at https://eth-ait.github.io/tempclr.
Its numerous applications make multi-human 3D pose estimation a remarkably impactful area of research. Nevertheless, assuming a multiple-view system composed of several regular RGB cameras, 3D multi-pose estimation presents several challenges. First of all, each person must be uniquely identified in the different views to separate the 2D information provided by the cameras. Secondly, the 3D pose estimation process from the multi-view 2D information of each person must be robust against noise and potential occlusions in the scenario. In this work, we address these two challenges with the help of deep learning. Specifically, we present a model based on Graph Neural Networks capable of predicting the cross-view correspondence of the people in the scenario along with a Multilayer Perceptron that takes the 2D points to yield the 3D poses of each person. These two models are trained in a self-supervised manner, thus avoiding the need for large datasets with 3D annotations.
Accurate whole-body multi-person pose estimation and tracking is an important yet challenging topic in computer vision. To capture the subtle actions of humans for complex behavior analysis, whole-body pose estimation including the face, body, hand and foot is essential over conventional body-only pose estimation. In this paper, we present AlphaPose, a system that can perform accurate whole-body pose estimation and tracking jointly while running in realtime. To this end, we propose several new techniques: Symmetric Integral Keypoint Regression (SIKR) for fast and fine localization, Parametric Pose Non-Maximum-Suppression (P-NMS) for eliminating redundant human detections and Pose Aware Identity Embedding for joint pose estimation and tracking. During training, we resort to Part-Guided Proposal Generator (PGPG) and multi-domain knowledge distillation to further improve the accuracy. Our method is able to localize whole-body keypoints accurately and track humans simultaneously given inaccurate bounding boxes and redundant detections. We show a significant improvement over current state-of-the-art methods in both speed and accuracy on COCO-wholebody, COCO, PoseTrack, and our proposed Halpe-FullBody pose estimation dataset. Our model, source codes and dataset are made publicly available at https://github.com/MVIG-SJTU/AlphaPose.
3D gaze estimation is most often tackled as learning a direct mapping between input images and the gaze vector or its spherical coordinates. Recently, it has been shown that pose estimation of the face, body and hands benefits from revising the learning target from few pose parameters to dense 3D coordinates. In this work, we leverage this observation and propose to tackle 3D gaze estimation as regression of 3D eye meshes. We overcome the absence of compatible ground truth by fitting a rigid 3D eyeball template on existing gaze datasets and propose to improve generalization by making use of widely available in-the-wild face images. To this end, we propose an automatic pipeline to retrieve robust gaze pseudo-labels from arbitrary face images and design a multi-view supervision framework to balance their effect during training. In our experiments, our method achieves improvement of 30% compared to state-of-the-art in cross-dataset gaze estimation, when no ground truth data are available for training, and 7% when they are. We make our project publicly available at https://github.com/Vagver/dense3Deyes.
Recognition of human poses and actions is crucial if autonomous systems are to interact smoothly with people. However, cameras generally capture human poses in 2D as images and videos, which can exhibit significant appearance variation across viewpoints, making recognition tasks challenging. To address this, we explore recognizing similarity in 3D human body poses from 2D information, which has not been well studied in existing works. Here, we propose an approach for learning a compact view-invariant embedding space from 2D body joint keypoints, without explicitly predicting 3D poses. The input ambiguity of 2D poses caused by projection and occlusion is difficult to represent with a deterministic mapping, so we adopt a probabilistic formulation for our embedding space. Experimental results show that our embedding model achieves higher accuracy than 3D pose estimation models when retrieving similar poses across different camera views. We also show that by training a simple temporal embedding model we achieve superior performance on pose sequence retrieval and substantially reduce the embedding dimension compared to stacking frame-based embeddings, enabling efficient large-scale retrieval. Moreover, to allow our embeddings to be used with partially visible input, we further investigate different keypoint occlusion augmentation strategies during training. We demonstrate that these occlusion augmentations significantly improve retrieval performance on partial 2D input poses. Results on action recognition and video alignment show that using our embeddings, without any additional training, achieves competitive performance with respect to other models trained specifically for each task.
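As an illustration of the probabilistic formulation mentioned above, the sketch below maps 2D keypoints to a Gaussian embedding and draws reparameterized samples; the layer sizes and sampling scheme are assumptions, not the authors' exact architecture.

```python
# Hedged sketch: probabilistic (Gaussian) embedding head over 2D keypoints.
import torch
import torch.nn as nn

class ProbabilisticPoseEmbedding(nn.Module):
    def __init__(self, num_joints=13, embed_dim=16, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(num_joints * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, embed_dim)
        self.logvar_head = nn.Linear(hidden, embed_dim)

    def forward(self, keypoints_2d, num_samples=20):
        """keypoints_2d: (B, J, 2) -> mean (B, D), logvar (B, D), samples (B, S, D)."""
        h = self.backbone(keypoints_2d.flatten(1))
        mean, logvar = self.mean_head(h), self.logvar_head(h)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn(mean.shape[0], num_samples, mean.shape[1], device=mean.device)
        samples = mean.unsqueeze(1) + eps * std.unsqueeze(1)   # reparameterized samples
        return mean, logvar, samples
```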
In the era of deep learning, human pose estimation from multiple cameras with unknown calibration has received little attention to date. We show how to train a neural model to perform this task with high precision and minimal latency overhead. The proposed model accounts for joint location uncertainty due to occlusion across the multiple views, and requires only 2D keypoint data for training. Our method outperforms both classical bundle adjustment and weakly-supervised monocular 3D baselines on the well-established Human3.6M dataset, as well as on the more challenging in-the-wild Ski-Pose PTZ dataset.
Event cameras are an emerging class of bio-inspired vision sensors that asynchronously report per-pixel brightness changes. They offer the distinct advantages of high dynamic range, high-speed response, and a low power budget, which enables them to best capture local motions in uncontrolled environments. This motivates us to unlock the potential of event cameras for human pose estimation, which has rarely been explored. However, owing to the novel paradigm shift away from conventional frame-based cameras, the event signals within a time interval contain very limited information, since event cameras can only capture the moving body parts and ignore the static ones, causing some parts to become incomplete or even to disappear within the interval. This paper proposes a novel densely connected recurrent architecture to address the problem of incomplete information. With this recurrent architecture, we can explicitly model the sequential geometric consistency across time steps, accumulating information from previous frames to recover the entire human body and thereby achieving stable and accurate human pose estimation from event data. Moreover, to better evaluate our model, we collect a large-scale multimodal event-based dataset with human pose annotations, which is, to the best of our knowledge, the most challenging dataset of its kind to date. Experimental results on two public datasets and on our own dataset demonstrate the effectiveness and strength of our approach. Code will be made available online to facilitate future research.
Deep neural networks achieve high accuracy for object detection, but their success hinges on large amounts of labeled data. To reduce this dependence on labels, various active learning strategies have been proposed, typically based on the confidence of the detector. However, these methods are biased towards high-performing classes and can lead to acquired datasets that are poor representatives of the test data. In this work, we propose a unified active learning framework that considers both the uncertainty and the robustness of the detector, ensuring that the network performs well across all classes. Moreover, our method leverages auto-labeling to suppress potential distribution drift while improving the performance of the model. Experiments on PASCAL VOC07+12 and MS-COCO show that our method consistently outperforms a wide range of active learning methods, yielding up to a 7.7% gain in mAP, or up to an 82% reduction in labeling cost. The code will be released upon acceptance of the paper.
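A hedged sketch of a class-balanced acquisition score in the spirit of the framework above: per-detection entropy is up-weighted for classes the detector currently handles poorly, and highly confident detections on non-queried images are kept as pseudo-labels. This illustrates the idea rather than the paper's exact formulation.

```python
# Hedged sketch: class-balanced uncertainty-based acquisition with auto-labeling.
import numpy as np

def acquisition_score(class_probs, class_ap):
    """class_probs: (D, C) softmax scores for D detections in one image;
    class_ap: (C,) per-class validation AP used to counter bias toward strong classes."""
    if len(class_probs) == 0:
        return 0.0
    entropy = -np.sum(class_probs * np.log(class_probs + 1e-12), axis=1)     # (D,)
    weights = 1.0 - class_ap[np.argmax(class_probs, axis=1)]                 # weak classes count more
    return float(np.mean(entropy * weights))

def split_pool(pool_probs, class_ap, query_budget, pseudo_label_thresh=0.9):
    """Rank unlabeled images for annotation; auto-label very confident detections elsewhere."""
    scores = [acquisition_score(p, class_ap) for p in pool_probs]
    query_ids = np.argsort(scores)[::-1][:query_budget]
    pseudo = {i: np.where(p.max(axis=1) > pseudo_label_thresh)[0]
              for i, p in enumerate(pool_probs) if i not in set(query_ids)}
    return query_ids, pseudo
```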
Self-training has greatly facilitated domain adaptive semantic segmentation, which iteratively generates pseudo-labels on the target domain and retrains the network. However, since realistic segmentation datasets are highly imbalanced, target pseudo-labels are typically biased towards the majority classes and fundamentally noisy, leading to error-prone and suboptimal models. To address this problem, we propose a region-based active learning approach for semantic segmentation under domain shift, aiming to automatically query a small partition of image regions to be labeled while maximizing segmentation performance. Our algorithm, Active Learning via Region Impurity and Prediction Uncertainty (AL-RIPU), introduces a new acquisition strategy characterizing the spatial adjacency of image regions along with the prediction confidence. We show that the proposed region-based selection strategy makes more efficient use of a limited budget than image-based or point-based counterparts. Meanwhile, we enforce local prediction consistency between a pixel and its nearest neighbors on the source image. In addition, we devise a negative learning loss to enhance discriminative representation learning on the target domain. Extensive experiments show that our method requires only very few annotations to almost reach fully supervised performance and substantially outperforms state-of-the-art methods.
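A minimal sketch of a region-impurity-times-uncertainty acquisition map in the spirit of AL-RIPU (not the official code): pixel uncertainty is the softmax entropy, and region impurity is the entropy of the class histogram inside a k x k neighborhood of the hard predictions; the highest-scoring regions would be queried for annotation.

```python
# Hedged sketch: region impurity x prediction uncertainty acquisition map.
import numpy as np
from scipy.ndimage import uniform_filter

def acquisition_map(softmax_probs, k=17, eps=1e-12):
    """softmax_probs: (C, H, W) per-pixel class probabilities -> (H, W) acquisition scores."""
    uncertainty = -np.sum(softmax_probs * np.log(softmax_probs + eps), axis=0)   # (H, W)

    hard_pred = np.argmax(softmax_probs, axis=0)                                  # (H, W)
    num_classes = softmax_probs.shape[0]
    class_fraction = np.stack([
        uniform_filter((hard_pred == c).astype(np.float32), size=k)              # share of class c
        for c in range(num_classes)
    ])
    impurity = -np.sum(class_fraction * np.log(class_fraction + eps), axis=0)    # (H, W)

    return impurity * uncertainty
```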
The performance of deep neural networks improves with more annotated data. The problem is that the budget for annotation is limited. One solution to this is active learning, where a model asks a human to annotate data that it perceives as uncertain. A variety of recent methods have been proposed to apply active learning to deep networks, but most of them are either designed specifically for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple but task-agnostic, and works efficiently with deep networks. We attach a small parametric module, named the "loss prediction module," to a target network, and train it to predict target losses of unlabeled inputs. This module can then suggest data on which the target model is likely to produce a wrong prediction. The method is task-agnostic because networks are learned from a single loss regardless of target tasks. We rigorously validate our method through image classification, object detection, and human pose estimation with recent network architectures. The results demonstrate that our method consistently outperforms previous methods across these tasks.
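The loss prediction module and its training signal can be sketched as follows; the multi-level feature pooling and the pairwise margin ranking objective follow the common formulation of this method, but the feature shapes and sizes here are illustrative assumptions.

```python
# Hedged sketch: a loss prediction module attached to intermediate features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LossPredictionModule(nn.Module):
    def __init__(self, feature_channels=(128, 256, 512), hidden=128):
        super().__init__()
        self.fcs = nn.ModuleList([nn.Linear(c, hidden) for c in feature_channels])
        self.out = nn.Linear(hidden * len(feature_channels), 1)

    def forward(self, feature_maps):
        """feature_maps: list of (B, C_i, H_i, W_i) intermediate features -> (B,) predicted loss."""
        pooled = [F.relu(fc(fm.mean(dim=(2, 3)))) for fc, fm in zip(self.fcs, feature_maps)]
        return self.out(torch.cat(pooled, dim=1)).squeeze(1)

def ranking_loss(pred_loss, target_loss, margin=1.0):
    """Pairwise margin loss: predicted losses should preserve the ordering of the true losses."""
    mid = pred_loss.shape[0] // 2
    p_i, p_j = pred_loss[:mid], pred_loss[mid:2 * mid]
    t_i, t_j = target_loss[:mid].detach(), target_loss[mid:2 * mid].detach()
    sign = torch.sign(t_i - t_j)
    return torch.clamp(margin - sign * (p_i - p_j), min=0).mean()
```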
3D human whole-body pose estimation aims to localize precise 3D keypoints on the entire human body, including the face, hands, body, and feet. Due to the lack of a large-scale fully annotated 3D whole-body dataset, a common approach has been to train several deep networks separately on datasets dedicated to specific body parts, and combine them during inference. This approach suffers from complex training and inference pipelines because of the different biases in each dataset used. It also lacks a common benchmark which makes it difficult to compare different methods. To address these issues, we introduce Human3.6M 3D WholeBody (H3WB) which provides whole-body annotations for the Human3.6M dataset using the COCO Wholebody layout. H3WB is a large scale dataset with 133 whole-body keypoint annotations on 100K images, made possible by our new multi-view pipeline. Along with H3WB, we propose 3 tasks: i) 3D whole-body pose lifting from 2D complete whole-body pose, ii) 3D whole-body pose lifting from 2D incomplete whole-body pose, iii) 3D whole-body pose estimation from a single RGB image. We also report several baselines from popular methods for these tasks. The dataset is publicly available at \url{https://github.com/wholebody3d/wholebody3d}.
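As a pointer to what task (i) above involves, here is a minimal 2D-to-3D whole-body lifting baseline assuming the 133-keypoint COCO-WholeBody layout; the architecture is illustrative and not taken from the benchmark's reported baselines.

```python
# Hedged sketch: a simple whole-body 2D-to-3D lifting network.
import torch
import torch.nn as nn

class WholeBodyLifter(nn.Module):
    def __init__(self, num_joints=133, hidden=1024):
        super().__init__()
        self.num_joints = num_joints
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, hidden), nn.ReLU(), nn.Dropout(0.25),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.25),
            nn.Linear(hidden, num_joints * 3),
        )

    def forward(self, pose_2d):
        """pose_2d: (B, J, 2) -> (B, J, 3) root-relative 3D whole-body pose."""
        return self.net(pose_2d.flatten(1)).view(-1, self.num_joints, 3)

# Usage: lifter = WholeBodyLifter(); pose_3d = lifter(torch.randn(8, 133, 2))
```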
In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints. In the supervised setting, our fully-convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11%, and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semi-supervised settings where labeled data is scarce. Code and models are available at https://github.com/facebookresearch/VideoPose3D.
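The back-projection idea can be sketched as follows under simplifying assumptions (a pinhole camera with known intrinsics and camera-space 3D predictions): 3D poses predicted for unlabeled frames are projected back to the image plane and penalized against the 2D keypoints that served as input. This is not the reference implementation.

```python
# Hedged sketch: back-projection loss for semi-supervised 3D pose training.
import torch

def project(points_3d, intrinsics):
    """points_3d: (B, J, 3) in camera coordinates; intrinsics: (3, 3) -> (B, J, 2) pixels."""
    proj = torch.einsum('ij,bkj->bki', intrinsics, points_3d)
    return proj[..., :2] / proj[..., 2:3]

def back_projection_loss(pred_3d, input_2d, intrinsics):
    """Semi-supervised loss on unlabeled video: re-projected 3D should match the 2D input."""
    return torch.linalg.norm(project(pred_3d, intrinsics) - input_2d, dim=-1).mean()
```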
Recently, unsupervised domain adaptation in satellite pose estimation has gained increasing attention, aiming at alleviating the annotation cost for training deep models. To this end, we propose a self-training framework based on the domain-agnostic geometrical constraints. Specifically, we train a neural network to predict the 2D keypoints of a satellite and then use PnP to estimate the pose. The poses of target samples are regarded as latent variables to formulate the task as a minimization problem. Furthermore, we leverage fine-grained segmentation to tackle the information loss issue caused by abstracting the satellite as sparse keypoints. Finally, we iteratively solve the minimization problem in two steps: pseudo-label generation and network training. Experimental results show that our method adapts well to the target domain. Moreover, our method won the 1st place on the sunlamp task of the second international Satellite Pose Estimation Competition.
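A minimal sketch of the keypoint-then-PnP step described above: predicted 2D keypoints and the known 3D keypoint model of the satellite are passed to OpenCV's PnP solver to recover the 6-DoF pose. The pseudo-label generation and retraining loop is omitted, and the keypoint layout and camera matrix are placeholders.

```python
# Hedged sketch: 6-DoF pose from predicted 2D keypoints via PnP.
import cv2
import numpy as np

def estimate_pose(keypoints_2d, model_points_3d, camera_matrix, dist_coeffs=None):
    """keypoints_2d: (K, 2) predicted pixels; model_points_3d: (K, 3) satellite model points."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(
        model_points_3d.astype(np.float64),
        keypoints_2d.astype(np.float64),
        camera_matrix.astype(np.float64),
        dist_coeffs,
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 rotation matrix
    return rotation, tvec.reshape(3)
```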
Compared to unsupervised training, supervised training of optical flow predictors generally yields better accuracy. However, the improved performance typically comes at a high annotation cost. Semi-supervised training trades off accuracy against annotation cost. We use a simple yet effective semi-supervised training method to show that even a small fraction of labels can improve flow accuracy over unsupervised training. In addition, we propose active learning methods based on simple heuristics to further reduce the number of labels required to achieve the same target accuracy. Our experiments on both synthetic and real optical flow datasets show that our semi-supervised networks generally need around 50% of the labels to achieve close to full-label accuracy, and only around 20% with active learning on Sintel. We also analyze and offer insights into the factors that may influence active learning performance. Code is available at https://github.com/duke-vision/optical-flow-active-learning-release.
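A hedged sketch of a semi-supervised objective of the kind described above: end-point error on the labeled subset plus a photometric warping loss on the unlabeled subset, mixed by a weight. The photometric term and weighting are standard choices and not necessarily the paper's exact formulation.

```python
# Hedged sketch: semi-supervised optical flow loss (supervised EPE + unsupervised photometric term).
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Backward-warp image (B, C, H, W) by flow (B, 2, H, W) given in pixels."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    grid = torch.stack((xs, ys), dim=0).float().to(image.device)            # (2, H, W)
    new = grid.unsqueeze(0) + flow
    new_x = 2.0 * new[:, 0] / (w - 1) - 1.0                                  # normalize to [-1, 1]
    new_y = 2.0 * new[:, 1] / (h - 1) - 1.0
    return F.grid_sample(image, torch.stack((new_x, new_y), dim=-1), align_corners=True)

def semi_supervised_flow_loss(pred_l, gt_l, pred_u, frame1_u, frame2_u, unsup_weight=1.0):
    epe = torch.linalg.norm(pred_l - gt_l, dim=1).mean()                    # supervised end-point error
    photometric = (warp(frame2_u, pred_u) - frame1_u).abs().mean()          # unsupervised term
    return epe + unsup_weight * photometric
```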