Uncertainty Estimates in Deep Generative Models using Gaussian Processes

EasyChair Preprint 4448, 12 pages. Date: October 20, 2020

Abstract

We propose a new framework for estimating the uncertainty of deep generative models. In real-world applications, uncertainty allows us to evaluate the reliability of the outcome of machine learning systems. Gaussian processes are widely known as a machine learning method that provides estimates of uncertainty. Moreover, Gaussian processes have been shown to be equivalent to deep neural networks with infinitely wide layers. This equivalence suggests that Gaussian process regression can be used to perform Bayesian prediction with deep neural networks. However, existing Bayesian treatments of neural networks via Gaussian processes have so far only been applied to supervised learning; we are not aware of any work that uses neural networks and Gaussian processes for unsupervised learning. We extend the Bayesian Gaussian process latent variable model, an unsupervised learning method based on Gaussian processes, and propose a Bayesian deep generative model by approximating the expectations of complex kernels. Through a series of experiments, we validate that our method provides uncertainty estimates by examining the relationship between the predictive variance and the quality of the generated outputs.

Keyphrases: Bayesian learning, Gaussian Process Latent Variable Model, Gaussian process, deep learning, neural network
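As a rough, self-contained sketch of the idea the abstract builds on (not the authors' implementation), the snippet below performs Gaussian process regression with the arc-cosine kernel of an infinitely wide single-hidden-layer ReLU network (Cho & Saul, 2009) and reports the posterior variance as the uncertainty estimate. The kernel choice, function names, noise level, and toy data are all illustrative assumptions.

```python
import numpy as np

def arccos_kernel(X1, X2, eps=1e-12):
    """Arc-cosine kernel of order 1: the kernel of an infinitely wide
    single-hidden-layer ReLU network (illustrative choice)."""
    n1 = np.linalg.norm(X1, axis=1, keepdims=True) + eps   # (n1, 1)
    n2 = np.linalg.norm(X2, axis=1, keepdims=True) + eps   # (n2, 1)
    cos_t = np.clip(X1 @ X2.T / (n1 * n2.T), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return (n1 * n2.T / np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    """GP regression posterior mean and variance under the kernel above."""
    K = arccos_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = arccos_kernel(X_test, X_train)
    K_ss = arccos_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha                                  # Bayesian prediction
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)          # uncertainty estimate
    return mean, var

# Toy usage: the predictive variance grows away from the training inputs,
# which is the kind of reliability signal the abstract refers to.
rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(20, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(20)
X_test = np.linspace(-4, 4, 100).reshape(-1, 1)
mean, var = gp_predict(X_train, y_train, X_test)
```

The paper itself works with the (unsupervised) Bayesian Gaussian process latent variable model rather than plain supervised regression; this sketch only illustrates how a GP posterior variance serves as an uncertainty estimate.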