IDA Machine Learning Seminars - Spring 2021

The IDA Machine Learning Seminars are a series of research presentations given by nationally and internationally recognized researchers in the field of machine learning.

• You can subscribe to the email list used for announcing upcoming seminars here.
• You can subscribe to the seminar series' calendar using this ics link.

Wednesday, February 24, 3.15 pm, 2021

Differentiating through Optimal Transport
Marco Cuturi, Google Brain and CREST - ENSAE, Institut Polytechnique de Paris, France

Abstract: Computing or approximating an optimal transport cost is rarely the sole goal when using OT in applications. In most cases, one relies instead on approximating the optimal transport plan (or its application to another vector) and on obtaining its derivatives with respect to its inputs. In this talk I will present recent applications that highlight this need, as well as possible algorithmic and programmatic solutions for handling such issues.

Location: You can join the seminar via this Zoom link: https://liu-se.zoom.us/j/69240032654
Passcode: 326937
Organizer: Fredrik Lindsten

Wednesday, March 24, 3.15 pm, 2021

Target Aware Bayesian Inference: How to Beat Optimal Conventional Estimators
Tom Rainforth, University of Oxford, UK

Abstract: Standard approaches for Bayesian inference focus solely on approximating the posterior distribution. Typically, this approximation is, in turn, used to calculate expectations for one or more target functions—a computational pipeline that is inefficient when the target function(s) are known upfront. We address this inefficiency by introducing a framework for target-aware Bayesian inference (TABI) that estimates these expectations directly. While conventional Monte Carlo estimators have a fundamental limit on the error they can achieve for a given sample size, our TABI framework is able to breach this limit; it can theoretically produce arbitrarily accurate estimators using only three samples, while we show empirically that it can also breach this limit in practice. We utilize our TABI framework by combining it with adaptive importance sampling approaches and show both theoretically and empirically that the resulting estimators are capable of converging faster than the standard O(1/N) Monte Carlo rate, potentially producing rates as fast as O(1/N^2). We further combine our TABI framework with amortized inference methods, to produce a method for amortizing the cost of calculating expectations. Finally, we show how TABI can be used to convert any marginal likelihood estimator into a target-aware inference scheme and demonstrate the substantial benefits this can yield.

Based on the paper of the same name by Rainforth, Golinski, Wood, and Zaidi, published in the Journal of Machine Learning Research 2020 and "Amortized Monte Carlo Integration" by Golinski, Wood, and Rainforth, ICML 2019 (Best Paper Honorable Mention).
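The intuition behind beating the conventional limit can be seen in an idealized toy (illustrative, not from the paper): if the importance proposal is exactly proportional to f times the posterior, every importance weight is constant, so the estimator has zero variance and a handful of samples suffice. The real TABI framework estimates the required normalizing constants with marginal-likelihood estimators rather than assuming them known, as done here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (illustrative): pi = N(0,1), f(x) = x^2, so E_pi[f] = 1 exactly.
def f(x):
    return x ** 2

def pi_pdf(x):
    return np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

# Conventional Monte Carlo: posterior samples only, MSE decays as O(1/N).
mc_est = f(rng.standard_normal(10_000)).mean()

# Target-aware ideal: importance-sample with q* proportional to f * pi.
# For this toy, f * pi happens to be the two-sided chi(3) density, so we
# can both sample it directly and evaluate it in closed form.
def q_pdf(x):
    return x ** 2 * np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

x = np.sqrt(rng.chisquare(3, size=3)) * rng.choice([-1.0, 1.0], size=3)
weights = f(x) * pi_pdf(x) / q_pdf(x)   # constant weights: zero variance
tabi_est = weights.mean()               # exact from only three samples
```

The three-sample figure in the abstract corresponds to handling the positive part, the negative part, and the posterior normalizer separately; the sketch above collapses to one term because f is non-negative and pi is already normalized.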

Location: TBA
Organizer: Fredrik Lindsten

Wednesday, April 21, 3.15 pm, 2021

Monte Carlo integration with repulsive point processes
Rémi Bardenet, CNRS & CRIStAL, Université de Lille, France

Abstract: Monte Carlo integration is the workhorse of Bayesian inference, but the mean square error of Monte Carlo estimators decreases slowly, typically as 1/N, where N is the number of integrand evaluations. This becomes a bottleneck in Bayesian applications where evaluating the integrand can take tens of seconds, like in the life sciences, where evaluating the likelihood often requires solving a large system of differential equations. I will present two approaches to faster Monte Carlo rates using interacting particle systems. First, I will show how results from random matrix theory lead to a stochastic version of Gaussian quadrature in any dimension d, with mean square error decreasing as 1/N^{1+1/d}. This quadrature is based on determinantal point processes, which can be argued to be the kernel machine of point processes. Second, I will show how to further take this error rate down assuming the integrand is smooth. In particular, I will give a tight error bound when the integrand belongs to any arbitrary reproducing kernel Hilbert space, using a mixture of determinantal point processes tailored to that space. This mixture is reminiscent of volume sampling, a randomized experimental design used in linear regression.

Joint work with Adrien Hardy, Ayoub Belhadji, and Pierre Chainais.
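The baseline 1/N mean-square-error rate that the abstract sets out to improve is easy to check numerically. A quick sketch (the integrand and sample sizes are illustrative, not from the talk): quadrupling the number of samples should cut the MSE of plain Monte Carlo by a factor of about four.

```python
import numpy as np

rng = np.random.default_rng(0)

# True value of E[f] for f(x) = x^2 under N(0,1) is 1.
def mse(N, reps=2000):
    """Empirical MSE of the plain Monte Carlo estimator with N samples."""
    samples = rng.standard_normal((reps, N))
    ests = (samples ** 2).mean(axis=1)     # reps independent MC estimates
    return np.mean((ests - 1.0) ** 2)

m_small, m_large = mse(500), mse(2000)
ratio = m_small / m_large                  # ~4: MSE scales as 1/N
```

Determinantal point processes improve on this by making the sample points repel each other, which is what yields the faster 1/N^{1+1/d} rate mentioned in the abstract.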

Passcode: 742124
Organizer: Fredrik Lindsten

Wednesday, May 19, 3.15 pm, 2021

TBA
Yura Malitsky, Linköping University

Abstract: TBA
Location: TBA
Organizer: Fredrik Lindsten

Past Seminars

Spring 2020   |   Fall 2019   |   Spring 2019   |   Fall 2018   |   Spring 2018   |   Fall 2017   |   Spring 2017    
Fall 2016   |   Spring 2016  |   Fall 2015   |   Spring 2015   |   Fall 2014

The seminars are typically held every fourth Wednesday at 15.15-16.15, currently over Zoom. For further information, or if you would like to be notified about the seminars by e-mail, please contact Fredrik Lindsten.

Page responsible: Fredrik Lindsten
Last updated: 2021-04-12