Computational Mathematics & Statistics Seminar by Sifan Liu: Quasi-Monte Carlo Quasi-Newton for Variational Bayes

Time:

-

Location:

Online event

Speaker:

Sifan Liu, PhD, Department of Statistics, Stanford University

Title:

Quasi-Monte Carlo Quasi-Newton for Variational Bayes

Abstract: 

Many machine learning problems optimize an objective that can only be estimated with noise. The predominant approach is first-order stochastic gradient descent (SGD) using one or more Monte Carlo (MC) samples at each step. One problem is that the MC error in the gradient estimator translates directly into error in the final solution. Moreover, SGD can converge slowly when the Hessian is ill-conditioned, in which case quasi-Newton methods such as L-BFGS are more effective. This talk discusses how to use randomized quasi-Monte Carlo (RQMC) in place of MC sampling in L-BFGS. Where MC sampling has a root mean squared error (RMSE) of $O(n^{-1/2})$, RQMC has an RMSE of $o(n^{-1/2})$ that can be close to $O(n^{-3/2})$ in favorable settings. We prove that this improved sampling accuracy translates into improved optimization accuracy. We present empirical investigations for variational Bayes, where using RQMC with stochastic L-BFGS greatly speeds up the optimization and sometimes finds a better parameter value than MC does.
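To give a concrete feel for the MC-versus-RQMC gap mentioned above, the following minimal sketch (not from the talk) compares the two estimators on a reparameterized gradient of the kind used in variational Bayes. It uses SciPy's scrambled Sobol' generator for RQMC; the toy objective f(z) = z^2, the Gaussian variational family, and all variable names are illustrative assumptions rather than the speaker's implementation.

import numpy as np
from scipy.stats import norm, qmc

# Toy reparameterized gradient: for f(z) = z**2 with z = mu + sigma * eps,
# eps ~ N(0, 1), the gradient of E[f(z)] with respect to mu is
# E[2 * (mu + sigma * eps)] = 2 * mu, giving a known ground truth.
def grad_mu(eps, mu, sigma):
    return np.mean(2.0 * (mu + sigma * eps))

mu, sigma, n, reps = 1.5, 0.7, 256, 500  # n a power of 2, as Sobol' prefers
rng = np.random.default_rng(0)
mc_err, rqmc_err = [], []

for _ in range(reps):
    # Plain MC: i.i.d. Gaussian draws, RMSE O(n^{-1/2}).
    eps = rng.standard_normal(n)
    mc_err.append(grad_mu(eps, mu, sigma) - 2.0 * mu)

    # RQMC: scrambled Sobol' points pushed through the Gaussian inverse
    # CDF; for smooth integrands the RMSE is o(n^{-1/2}).
    u = qmc.Sobol(d=1, scramble=True, seed=rng).random(n).ravel()
    rqmc_err.append(grad_mu(norm.ppf(u), mu, sigma) - 2.0 * mu)

print("MC   RMSE:", np.sqrt(np.mean(np.square(mc_err))))
print("RQMC RMSE:", np.sqrt(np.mean(np.square(rqmc_err))))

The inverse-CDF transform is a standard way to map low-discrepancy points in the unit cube to Gaussian samples; running the sketch shows the RQMC error decaying well below the MC error at the same sample size.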


 

Computational Mathematics & Statistics
 
