
IDA Machine Learning Seminars - Fall 2019


Friday, September 27, 3.15 pm, 2019

Probabilistic machine learning for volatility
Martin Tegnér, Information Engineering, Dept. of Engineering Science, University of Oxford.
Abstract: This work is motivated by recent trends of rough volatility in finance. In place of these parametric rough-volatility models, we suggest using a non-parametric class based on linear filters on stationary processes, where the filter is randomised with a Gaussian process prior. We use variational methods to obtain a probabilistic representation of the filter that yields a distribution over the covariance function and its spectral content. We apply the approach to S&P 500 realised volatility data.
Location: Alan Turing (E-building)
Organizer: Mattias Villani
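To make the modelling idea in the abstract concrete, here is a minimal sketch in Python/NumPy (not the speaker's code): a filter is drawn from a Gaussian process prior and applied to white noise, giving a stationary process whose covariance function and spectral density are induced by, and hence inherit a distribution from, the random filter. The lag grid, squared-exponential kernel and all hyperparameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lag grid and squared-exponential GP prior on the filter h.
lags = np.linspace(0.0, 5.0, 100)
lengthscale, variance = 0.5, 1.0
K = variance * np.exp(-0.5 * (lags[:, None] - lags[None, :])**2 / lengthscale**2)
h = rng.multivariate_normal(np.zeros(len(lags)), K + 1e-8 * np.eye(len(lags)))

# Filtering white noise gives a stationary process x_t = sum_k h_k * eps_{t-k}.
eps = rng.standard_normal(2000)
x = np.convolve(eps, h, mode="valid")

# The induced covariance function is the autocorrelation of the filter and the
# spectral density is |H(f)|^2; both are random because h is drawn from the GP.
cov = np.correlate(h, h, mode="full")[len(h) - 1:]
spectrum = np.abs(np.fft.rfft(h, n=1024))**2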


Wednesday, October 16, 3.15 pm, 2019

Scaling and Generalizing Approximate Bayesian Inference
David Blei, Dept. of Computer Science, Columbia University
Abstract: A core problem in statistics and machine learning is to approximate difficult-to-compute probability distributions. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation about a conditional distribution. In this talk I review and discuss innovations in variational inference (VI), a method that approximates probability distributions through optimization. VI has been used in myriad applications in machine learning and Bayesian statistics. It tends to be faster than more traditional methods, such as Markov chain Monte Carlo sampling. After quickly reviewing the basics, I will discuss our recent research on VI. I first describe stochastic variational inference, an approximate inference algorithm for handling massive data sets, and demonstrate its application to probabilistic topic models of millions of articles. Then I discuss black box variational inference, a generic algorithm for approximating the posterior. Black box inference easily applies to many models and requires minimal mathematical work to implement. I will demonstrate black box inference on deep exponential families---a method for Bayesian deep learning---and describe how it enables powerful tools for probabilistic programming.
Location: Ada Lovelace (Visionen)
Organizer: Mattias Villani
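As a toy illustration of the black-box idea (a sketch, not Blei's implementation), the Python snippet below fits a Gaussian variational approximation to a one-dimensional conjugate model using only samples from q and evaluations of the log joint, via the score-function gradient estimator. The model, data, sample sizes and step size are arbitrary assumptions, and the actual method adds Rao-Blackwellization and control variates for stronger variance reduction.

import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=50)          # toy observations (assumed)

def log_joint(z):
    # log p(z) + log p(data | z): standard-normal prior, unit-variance likelihood.
    return -0.5 * z**2 + np.sum(-0.5 * (data - z)**2)

mu, log_sigma, lr = 0.0, 0.0, 0.005           # variational parameters of q = N(mu, sigma^2)
for step in range(3000):
    sigma = np.exp(log_sigma)
    z = rng.normal(mu, sigma, size=64)        # Monte Carlo samples from q
    log_q = -0.5 * ((z - mu) / sigma)**2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
    elbo_terms = np.array([log_joint(zi) for zi in z]) - log_q
    centred = elbo_terms - elbo_terms.mean()  # simple baseline to reduce variance
    # Score-function ("black box") gradients: E_q[ grad log q * (log p - log q) ].
    grad_mu = np.mean((z - mu) / sigma**2 * centred)
    grad_log_sigma = np.mean((((z - mu) / sigma)**2 - 1.0) * centred)
    mu += lr * grad_mu
    log_sigma += lr * grad_log_sigma

# For this conjugate model, (mu, exp(log_sigma)) should approach the exact Gaussian posterior.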


Wednesday, November 6, 3.15 pm, 2019

Deep Generative Models and Missing Data
Jes Frellsen, IT University of Copenhagen
Abstract: Deep latent variable models (DLVMs) combine the approximation abilities of deep neural networks and the statistical foundations of generative models. In this talk, we first discuss how these models are estimated: variational methods are commonly used for inference; however, the exact likelihood of these models has been largely overlooked. We show that most unconstrained models used for continuous data have an unbounded likelihood function and discuss how to ensure the existence of maximum likelihood estimates. Then we present a simple variational method, called MIWAE, for training DLVMs when the training set contains missing-at-random data. Finally, we present Monte Carlo algorithms for missing data imputation using the exact conditional likelihood of DLVMs: a Metropolis-within-Gibbs sampler for DLVMs trained on complete data sets and an importance sampler for DLVMs trained on incomplete data sets. For complete training sets, our algorithm consistently and significantly outperforms the usual imputation scheme used for DLVMs. For incomplete training sets, we show that MIWAE-trained models provide accurate single and multiple imputations and are highly competitive with state-of-the-art methods. This is joint work with Pierre-Alexandre Mattei.
Location: Ada Lovelace (Visionen)
Organizer: Fredrik Lindsten
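As a rough sketch of the Metropolis-within-Gibbs imputation idea described above (not the authors' code), the Python function below alternates a Metropolis step on the latent variable, proposed from the encoder, with a Gibbs draw of the missing entries from the decoder. The callables enc_sample, enc_logpdf, dec_loglik, dec_sample and prior_logpdf are hypothetical stand-ins for a DLVM that has already been trained.

import numpy as np

def metropolis_within_gibbs_impute(x, observed, enc_sample, enc_logpdf,
                                   dec_loglik, dec_sample, prior_logpdf,
                                   n_iters=500, seed=0):
    """Impute x[~observed] with a trained DLVM by alternating a Metropolis
    update of z (proposal q(z | x)) and a draw of x_mis from p(x_mis | z)."""
    rng = np.random.default_rng(seed)
    x = x.copy()
    x[~observed] = 0.0                          # crude initial imputation
    z = enc_sample(x)
    for _ in range(n_iters):
        z_prop = enc_sample(x)                  # propose a latent from the encoder
        log_alpha = (dec_loglik(x, z_prop) + prior_logpdf(z_prop) + enc_logpdf(z, x)
                     - dec_loglik(x, z) - prior_logpdf(z) - enc_logpdf(z_prop, x))
        if np.log(rng.uniform()) < log_alpha:   # accept/reject the proposed latent
            z = z_prop
        x_new = dec_sample(z)                   # generate a full observation from z
        x[~observed] = x_new[~observed]         # keep the observed entries fixed
    return x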





