Colloquium: "Revisiting latent variable models from a deep learning perspective"

Speaker: Xiao Wang, Purdue University

Abstract: Latent variable models are an indispensable and powerful tool for uncovering the hidden structure of observed data in image analysis. Well-known latent variable models in statistics include linear mixed models and Gaussian mixture models, and powerful generative models in machine learning, such as GANs and VAEs, also belong to this class. Existing methods often face two major challenges in practice: (a) a proper latent variable distribution is difficult to specify; (b) exact likelihood inference is formidable because the required computation is intractable. We propose a novel framework for inference in latent variable models that overcomes these two limitations. The new framework allows a fully data-driven latent variable distribution via deep neural networks, and the proposed stochastic gradient method, combined with the Langevin algorithm, is efficient and well suited to complex models and big data. We provide theoretical results for the Langevin algorithm and establish a convergence analysis for the optimization method. The framework demonstrates superior practical performance in simulation studies and a real data analysis.
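The abstract does not spell out the proposed method, but as background for attendees, here is a minimal sketch of the unadjusted Langevin algorithm that such samplers build on. The target distribution (a standard normal), the step size, and the function names are illustrative assumptions, not the speaker's implementation:

```python
import numpy as np

def ula_sample(grad_log_p, x0, step=0.05, n_steps=20000, rng=None):
    """Unadjusted Langevin algorithm (illustrative sketch).

    Iterates x_{t+1} = x_t + (step/2) * grad_log_p(x_t) + sqrt(step) * noise,
    where noise is standard Gaussian. For small step sizes the chain's
    stationary distribution approximates the target p. Returns all iterates.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_steps,) + x.shape)
    for t in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + 0.5 * step * grad_log_p(x) + np.sqrt(step) * noise
        chain[t] = x
    return chain

# Illustrative target: standard normal, log p(x) = -x^2/2, so grad log p(x) = -x.
chain = ula_sample(lambda x: -x, x0=np.array([5.0]))
samples = chain[5000:]  # discard burn-in; remaining draws approximate N(0, 1)
print(samples.mean(), samples.std())
```

After burn-in, the sample mean is near 0 and the standard deviation near 1, up to a discretization bias of order the step size; in practice this gradient step is what gets combined with stochastic (minibatch) gradients for big-data settings.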

Bio: Dr. Xiao Wang is Professor of Statistics at Purdue University. He obtained his Ph.D. from the University of Michigan. His research interests focus on machine learning, nonparametric statistics, and functional data analysis. He has published in top journals, including AOS, JASA, JRSSB, and Biometrika, and in top conferences such as NeurIPS and ICLR. Dr. Wang is an elected fellow of the Institute of Mathematical Statistics (IMS) and of the American Statistical Association (ASA). He currently serves as Associate Editor of JASA, Technometrics, and Lifetime Data Analysis.

Host: Jose Figueroa-Lopez