We propose a deep learning approach for finding dense correspondences between 3D scans of people. Our method requires only partial geometric information in the form of two depth maps or partial reconstructed surfaces, works for humans in arbitrary poses and wearing any clothing, does not require the two people to be scanned from similar viewpoints, and runs in real time. We use a deep convolutional neural network to train a feature descriptor on depth map pixels, but crucially, rather than training the network to solve the shape correspondence problem directly, we train it to solve a body region classification problem, modified to increase the smoothness of the learned descriptors near region boundaries. This approach ensures that nearby points on the human body are nearby in feature space, and vice versa, rendering the feature descriptor suitable for computing dense correspondences between the scans. We validate our method on real and synthetic data for both clothed and unclothed humans, and show that our correspondences are more robust than is possible with state-of-the-art unsupervised methods, and more accurate than those found using methods that require full watertight 3D geometry.
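As a minimal illustration of the scheme the abstract describes, the sketch below (our own illustration, not the authors' implementation) trains a small CNN on depth-map patches to classify body regions, then reuses the learned feature layer as a per-pixel descriptor and matches pixels between two scans by nearest neighbor in feature space. The network shape, the region count, and the names `DepthDescriptorNet` and `match` are assumptions for exposition only.

```python
# Sketch only: body-region classification as a proxy task for learning
# a dense correspondence descriptor on depth maps. All hyperparameters
# (region count, feature dimension, architecture) are illustrative.
import torch
import torch.nn as nn

NUM_REGIONS = 500  # assumed number of body-region labels

class DepthDescriptorNet(nn.Module):
    def __init__(self, feat_dim=16):
        super().__init__()
        # Feature extractor over a depth patch centered on a pixel.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Classification head used only during training; at test time
        # the descriptor alone is kept.
        self.classifier = nn.Linear(feat_dim, NUM_REGIONS)

    def forward(self, patch):
        f = self.features(patch)          # per-pixel descriptor
        return f, self.classifier(f)      # descriptor + region logits

def match(descr_a, descr_b):
    """For each descriptor from scan A, return the index of its
    nearest neighbor among scan B's descriptors."""
    d = torch.cdist(descr_a, descr_b)     # pairwise feature distances
    return d.argmin(dim=1)

# Training minimizes cross-entropy on region labels, e.g.:
#   _, logits = net(patches)
#   loss = nn.functional.cross_entropy(logits, labels)
```

Because classification forces patches from the same body region toward the same logits, nearby body points end up nearby in the descriptor space, which is what makes the nearest-neighbor matching step meaningful; the paper's boundary-smoothness modification refines this further but is omitted from the sketch.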