Shepherding Distributions for Parallel Markov Chain Monte Carlo
Jermaine, Christopher M.
Master of Science
One of the major concerns for Markov Chain Monte Carlo (MCMC) algorithms is that they can take a long time to converge to the desired stationary distribution. In practice, MCMC algorithms may take millions of iterations to converge to the target distribution, requiring a wall clock time measured in months. This thesis presents a general algorithmic framework for running MCMC algorithms in a parallel/distributed environment that can shorten the burn-in period needed to converge to the target distribution. Our framework, which we call the method of "shepherding distributions", relies on an auxiliary distribution, called a shepherding distribution, that couples several MCMC chains running in parallel. These chains collectively explore the space of samples, communicating via the shepherding distribution, to reach high-likelihood regions faster. We consider various scenarios where shepherding distributions can be used, including the case where several machines or CPU cores work on the same data in parallel (the so-called transition parallel application of the framework) and the case where a large data set itself is partitioned across several machines or CPU cores and the various chains work on subsets of the data (the so-called data parallel application of the framework). This latter application is particularly useful in solving "big data" machine learning problems. Experiments under both scenarios illustrate the effectiveness of our shepherding approach to MCMC parallelization.
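To make the idea of parallel chains coupled through an auxiliary distribution concrete, the following is a minimal, hypothetical sketch in Python. It is not the thesis's actual shepherding construction: the toy target, the coupling schedule, and the "jump toward the best state found so far" heuristic are all illustrative assumptions, meant only to show how independently running Metropolis chains might share information to reach high-likelihood regions faster.

```python
import math
import random

def log_target(x):
    # Toy bimodal target: equal mixture of Gaussians at -3 and +3.
    return math.log(0.5 * math.exp(-0.5 * (x - 3.0) ** 2)
                    + 0.5 * math.exp(-0.5 * (x + 3.0) ** 2))

def metropolis_step(x, log_p, rng, step=1.0):
    # Standard random-walk Metropolis-Hastings update.
    prop = x + rng.gauss(0.0, step)
    if math.log(rng.random()) < log_p(prop) - log_p(x):
        return prop
    return x

def shepherded_chains(n_chains=4, n_iters=2000, seed=0):
    # Each chain runs ordinary Metropolis steps; every 10 iterations a
    # "shepherding" move lets each chain propose a jump near the
    # highest-likelihood state found across all chains. This coupling is
    # a crude stand-in for communication via a shepherding distribution.
    rng = random.Random(seed)
    states = [rng.uniform(-10.0, 10.0) for _ in range(n_chains)]
    for it in range(n_iters):
        states = [metropolis_step(x, log_target, rng) for x in states]
        if it % 10 == 0:
            best = max(states, key=log_target)
            for i, x in enumerate(states):
                prop = best + rng.gauss(0.0, 0.5)
                if math.log(rng.random()) < log_target(prop) - log_target(x):
                    states[i] = prop
    return states

final_states = shepherded_chains()
```

After the run, all chains end up near the high-likelihood modes at roughly -3 and +3, rather than lingering in the low-probability regions where they started; the point of the coupling is precisely this faster migration toward high-likelihood regions.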
Markov Chain Monte Carlo, Shepherding, Parallel MCMC, Burn-in, Bayesian, Likelihood