We present a practical way of introducing convolutional structure into Gaussian processes, making them more suited to high-dimensional inputs like images. The main contribution of our work is the construction of an inter-domain inducing point approximation that is well-tailored to the convolutional kernel. This allows us to gain the generalisation benefit of a convolutional kernel, together with fast but accurate posterior inference. We investigate several variations of the convolutional kernel, and apply it to MNIST and CIFAR-10, which have both been known to be challenging for Gaussian processes. We also show how the marginal likelihood can be used to find an optimal weighting between convolutional and RBF kernels to further improve performance. We hope that this illustration of the usefulness of the marginal likelihood will help automate discovering architectures in larger models.