Vanilla Probabilistic Autoencoder
11 pages • Published: November 2, 2021

Abstract
The autoencoder, a well-known neural network model, is usually fitted using a mean squared error loss or a cross-entropy loss. Both losses have a probabilistic interpretation: they are equivalent to maximizing the likelihood of the dataset under a normal distribution or a categorical distribution, respectively. We trained autoencoders on image datasets using different distributions and observed differences from the standard autoencoder: if a mixture of distributions is used, the quality of the reconstructed images may increase and the dataset can be augmented; one can often visualize the reconstructed image along with the variances corresponding to each pixel. The code which implements this method can be found at https://github.com/aciobanusebi/vanilla-probabilistic-ae.

Keyphrases: autoencoder, deep learning, machine learning, matrix normal distribution, normal distribution, probabilistic distributions

In: Yan Shi, Gongzhu Hu, Quan Yuan and Takaaki Goto (editors). Proceedings of ISCA 34th International Conference on Computer Applications in Industry and Engineering, vol 79, pages 71-81.
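The abstract's claim that the MSE loss corresponds to a Gaussian likelihood can be sketched numerically. The following is a minimal NumPy illustration (not the paper's implementation; the function name `gaussian_nll` and the toy data are assumptions): with a fixed unit variance, the per-pixel Gaussian negative log-likelihood equals half the mean squared error plus a constant that does not depend on the decoder output.

```python
import numpy as np

def gaussian_nll(x, mu, var):
    """Per-pixel Gaussian negative log-likelihood, averaged over pixels."""
    return np.mean(0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var))

rng = np.random.default_rng(0)
x = rng.random((4, 8))    # a tiny batch of flattened "images"
mu = rng.random((4, 8))   # decoder means, i.e. the reconstructions

mse = np.mean((x - mu) ** 2)
nll = gaussian_nll(x, mu, var=1.0)
const = 0.5 * np.log(2 * np.pi)  # offset independent of mu

# Minimizing the NLL over mu is therefore equivalent to minimizing the MSE.
assert np.isclose(nll, 0.5 * mse + const)
```

Letting the network also predict `var` per pixel is what enables the variance visualizations mentioned in the abstract, since the variance then becomes a learned output rather than a fixed constant.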