
IDA Machine Learning Seminars - Spring 2021


The IDA Machine Learning Seminars are a series of research presentations given by nationally and internationally recognized researchers in the field of machine learning.

• You can subscribe to the email list used for announcing upcoming seminars here.
• You can subscribe to the seminar series' calendar using this ics link.

Wednesday, February 24, 3.15 pm, 2021

Differentiating through Optimal Transport
Marco Cuturi, Google Brain and CREST - ENSAE, Institut Polytechnique de Paris, France

Abstract: Computing or approximating an optimal transport (OT) cost is rarely the sole goal when using OT in applications. In most cases, one relies instead on an approximation of the optimal transport plan (or its application to another vector) and on its differentiability w.r.t. its input. In this talk I will present recent applications that highlight this necessity, as well as possible algorithmic and programmatic solutions to handle such issues.
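As a toy illustration of the theme (my own sketch, not code from the talk): for entropy-regularized OT, the Sinkhorn algorithm produces a coupling P, and by the envelope (Danskin) theorem the gradient of the regularized OT value w.r.t. the cost matrix is exactly P. The cost matrix, marginals, and regularization strength below are arbitrary example values.

```python
import math

def sinkhorn(C, a, b, eps, iters=500):
    """Sinkhorn iterations for entropy-regularized OT.
    Returns the optimal coupling P for cost matrix C and marginals a, b."""
    n, m = len(a), len(b)
    K = [[math.exp(-C[i][j] / eps) for j in range(m)] for i in range(n)]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

def ot_eps(C, a, b, eps):
    """Value of the regularized problem: min_P <P, C> + eps * sum P log P."""
    P = sinkhorn(C, a, b, eps)
    return sum(P[i][j] * C[i][j] + eps * P[i][j] * math.log(P[i][j])
               for i in range(len(a)) for j in range(len(b)))

# Example problem (arbitrary values for illustration).
C = [[0.0, 1.0], [1.0, 0.0]]
a = b = [0.5, 0.5]
eps = 0.1
P = sinkhorn(C, a, b, eps)

# Envelope theorem: d(ot_eps)/dC[0][0] should equal P[0][0].
# Check it against a central finite difference.
h = 1e-5
Cp = [row[:] for row in C]; Cp[0][0] += h
Cm = [row[:] for row in C]; Cm[0][0] -= h
fd_grad = (ot_eps(Cp, a, b, eps) - ot_eps(Cm, a, b, eps)) / (2 * h)
```

Automatic differentiation frameworks exploit the same structure: once Sinkhorn has converged, the plan itself serves as the gradient of the regularized cost, so one can backpropagate through OT layers without differentiating through every iteration.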

Location: You can join the seminar via this Zoom link: https://liu-se.zoom.us/j/69240032654
Passcode: 326937
Organizer: Fredrik Lindsten

Wednesday, March 24, 3.15 pm, 2021

Target Aware Bayesian Inference: How to Beat Optimal Conventional Estimators
Tom Rainforth, University of Oxford, UK

Abstract: Standard approaches for Bayesian inference focus solely on approximating the posterior distribution. Typically, this approximation is, in turn, used to calculate expectations for one or more target functions—a computational pipeline that is inefficient when the target function(s) are known upfront. We address this inefficiency by introducing a framework for target-aware Bayesian inference (TABI) that estimates these expectations directly. While conventional Monte Carlo estimators have a fundamental limit on the error they can achieve for a given sample size, our TABI framework is able to breach this limit; it can theoretically produce arbitrarily accurate estimators using only three samples, while we show empirically that it can also breach this limit in practice. We utilize our TABI framework by combining it with adaptive importance sampling approaches and show both theoretically and empirically that the resulting estimators are capable of converging faster than the standard O(1/N) Monte Carlo rate, potentially producing rates as fast as O(1/N^2). We further combine our TABI framework with amortized inference methods, to produce a method for amortizing the cost of calculating expectations. Finally, we show how TABI can be used to convert any marginal likelihood estimator into a target-aware inference scheme and demonstrate the substantial benefits this can yield.

Based on the paper of the same name by Rainforth, Golinski, Wood, and Zaidi, published in the Journal of Machine Learning Research 2020 and "Amortized Monte Carlo Integration" by Golinski, Wood, and Rainforth, ICML 2019 (Best Paper Honorable Mention).
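As a toy sketch of the TABI idea (my own illustration, not code from the papers): rather than one self-normalized estimator, TABI breaks E[f] into three separately estimated integrals (positive part of f times the unnormalized posterior, negative part, and the marginal likelihood), each with an importance-sampling proposal tailored to its own integrand. The Gaussian model and all proposal parameters below are assumptions made for this example.

```python
import math
import random

def npdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def gamma(theta):
    """Unnormalized posterior: N(0, 1) prior times N(y=1 | theta, 1) likelihood.
    The exact posterior is N(0.5, 0.5), so the true E[theta | y] = 0.5."""
    return npdf(theta, 0.0, 1.0) * npdf(1.0, theta, 1.0)

def is_integral(integrand, mu_q, sigma_q, n=20000):
    """Importance-sampling estimate of the integral of `integrand`,
    using a N(mu_q, sigma_q^2) proposal."""
    total = 0.0
    for _ in range(n):
        th = random.gauss(mu_q, sigma_q)
        total += integrand(th) / npdf(th, mu_q, sigma_q)
    return total / n

random.seed(0)

# TABI for the target f(theta) = theta: E[f] = (Z1 - Z2) / Z3, with each
# integral given its own proposal, tailored to where its integrand has mass.
Z1 = is_integral(lambda th: max(th, 0.0) * gamma(th), 1.0, 1.0)    # positive part
Z2 = is_integral(lambda th: max(-th, 0.0) * gamma(th), -0.5, 1.0)  # negative part
Z3 = is_integral(gamma, 0.5, math.sqrt(0.5))  # marginal likelihood
tabi_estimate = (Z1 - Z2) / Z3  # close to the true posterior mean 0.5
```

Because the Z3 proposal here happens to equal the exact posterior, its importance weights are constant and that estimator has zero variance; the "three samples suffice in theory" claim in the abstract corresponds to choosing similarly optimal proposals for all three integrals.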

Location: TBA
Organizer: Fredrik Lindsten

Wednesday, April 21, 3.15 pm, 2021

TBA
Rémi Bardenet, CNRS & CRIStAL, Université de Lille, France

Abstract: TBA
Location: TBA
Organizer: Fredrik Lindsten

Wednesday, May 19, 3.15 pm, 2021

TBA
Yura Malitsky, Linköping University

Abstract: TBA
Location: TBA
Organizer: Fredrik Lindsten



Past Seminars

Spring 2020   |   Fall 2019   |   Spring 2019   |   Fall 2018   |   Spring 2018   |   Fall 2017   |   Spring 2017    
Fall 2016   |   Spring 2016  |   Fall 2015   |   Spring 2015   |   Fall 2014



The seminars are typically held every fourth Wednesday at 15.15-16.15, currently over Zoom. For further information, or if you would like to be notified about the seminars by e-mail, please contact Fredrik Lindsten.


Page responsible: Fredrik Lindsten
Last updated: 2021-02-22