We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling. We derive an efficient approximate parameter estimation method based on the minimum description length (MDL) principle, which can be seen as maximising a variational lower bound on the log-likelihood, with a feedforward neural network implementing approximate inference. We demonstrate state-of-the-art generative performance on a number of classic data sets: several UCI data sets, MNIST and Atari 2600 games.
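The exact ancestral sampling mentioned above follows from the autoregressive structure: each stochastic unit within a layer is conditioned only on the units preceding it, so a single left-to-right pass produces an exact sample. The following is a minimal sketch of this idea for one layer of binary units, assuming a strictly lower-triangular weight matrix `W` and sigmoid conditionals (illustrative names and parameterisation, not the paper's exact model).

```python
import numpy as np

def sample_autoregressive_layer(W, b, rng):
    """Ancestral sampling of binary units with autoregressive connections.

    Unit j depends only on units 0..j-1, so one ordered pass gives an
    exact sample from the joint distribution.

    W : (n, n) strictly lower-triangular weights (assumed structure)
    b : (n,) biases
    """
    n = b.shape[0]
    h = np.zeros(n)
    for j in range(n):
        # Conditional Bernoulli probability given previously sampled units.
        p = 1.0 / (1.0 + np.exp(-(W[j, :j] @ h[:j] + b[j])))
        h[j] = float(rng.random() < p)
    return h

rng = np.random.default_rng(0)
n = 8
W = np.tril(rng.normal(size=(n, n)), k=-1)  # strict lower triangle
b = rng.normal(size=n)
sample = sample_autoregressive_layer(W, b, rng)
```

Stacking such layers and sampling them top-down yields the fast, exact ancestral sampling the abstract refers to.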