This work provides an architecture to enable robotic grasp planning via shape completion. Shape completion is accomplished through the use of a 3D convolutional neural network (CNN). The network is trained on our own new open-source dataset of over 440,000 3D exemplars captured from varying viewpoints. At runtime, a 2.5D point cloud captured from a single point of view is fed into the CNN, which fills in the occluded regions of the scene, allowing grasps to be planned and executed on the completed object. Runtime shape completion is very rapid because most of the computational cost of shape completion is borne during offline training. We explore how the quality of completions varies based on several factors, including whether or not the object being completed existed in the training data and how many object models were used to train the network. We also examine the network's ability to generalize to novel objects, allowing the system to complete previously unseen objects at runtime. Finally, experiments are conducted both in simulation and on actual robotic hardware to explore the relationship between completion quality and the utility of the completed mesh model for grasping.
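The runtime pipeline described above begins by converting the single-view 2.5D point cloud into a voxel occupancy grid that the 3D CNN can consume. The following is a minimal illustrative sketch of that voxelization step; the grid resolution, function names, and the synthetic partial-view data are assumptions for illustration, not the paper's exact implementation, and the trained network itself is represented only by a comment.

```python
import numpy as np

def voxelize(points, grid_size=40):
    """Map a 2.5D point cloud (N x 3 array) into a binary occupancy grid.

    Observed surface points are scaled to fit the grid and marked as
    occupied; voxels in occluded regions stay zero and would be filled
    in by the trained 3D CNN at runtime. grid_size=40 is an assumed
    resolution for this sketch.
    """
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    # Uniform scale so the cloud fits inside the grid along every axis.
    scale = (grid_size - 1) / (maxs - mins).max()
    idx = ((points - mins) * scale).astype(int)
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid

# Synthetic example: a partial (single-viewpoint) scan of a unit sphere.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts = pts[pts[:, 2] > 0]  # keep only the hemisphere visible to the camera

grid = voxelize(pts)
# At runtime this grid would be passed through the trained network,
# e.g. completed = model(grid), and the output meshed for grasp planning.
```

Because the expensive learning happens offline, this per-frame preprocessing plus a single forward pass is all the runtime cost, which is what makes the completion step fast.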