IDA Machine Learning Seminars - Fall 2023


The IDA Machine Learning Seminars are a series of research presentations given by nationally and internationally recognized researchers in the field of machine learning.

• You can subscribe to the email list used for announcing upcoming seminars here.
• You can subscribe to the seminar series' calendar using this ics link.


Wednesday, September 13, 15:15, 2023

Title: Reparametrization invariance in representation learning
Speaker: Søren Hauberg, Professor, Section for Cognitive Systems, Technical University of Denmark

Abstract: Generative models learn a compressed representation of data that is often used for downstream tasks such as interpretation, visualization and prediction via transfer learning. Unfortunately, the learned representations are generally not statistically identifiable, leading to a high risk of arbitrariness in the downstream tasks. We propose to use differential geometry to construct representations that are invariant to reparametrizations. We demonstrate that the approach is deeply tied to the uncertainty of the representation, and that practical applications require high-quality uncertainty quantification. With the reparametrization problem solved, we show how the geometric representations reveal signals in biological data that were otherwise hidden, and how the representations support applications in robotics.
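A standard way to obtain reparametrization-invariant quantities in representation learning is to pull back the data-space metric through the decoder. The following is a minimal numerical sketch of that construction; the toy decoder, latent dimension, and finite-difference Jacobian are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def decoder(z):
    # Hypothetical smooth decoder mapping a 2-D latent code to 3-D data space.
    return np.array([z[0], z[1], z[0]**2 + z[1]**2])

def jacobian(f, z, eps=1e-6):
    # Forward finite-difference Jacobian of f at z.
    base = f(z)
    J = np.zeros((base.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - base) / eps
    return J

def pullback_metric(f, z):
    # Riemannian metric induced in latent space: G(z) = J(z)^T J(z).
    # Lengths and angles measured with G do not change if the latent
    # space is reparametrized, which is the invariance exploited above.
    J = jacobian(f, z)
    return J.T @ J

G = pullback_metric(decoder, np.array([1.0, 0.0]))
```

Distances computed with G (rather than the Euclidean latent metric) are what stays stable across equally good but differently parametrized models.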

Location: Ada Lovelace


Wednesday, November 8, 15:15, 2023

Title: Moment matching denoising Gibbs sampling
Speaker: Brooks Paige, Associate Professor, University College London

Abstract: Energy-based models offer a versatile framework for modeling complex data distributions. However, training and sampling from energy-based models poses significant challenges. The widely-used denoising score matching method for training energy-based models suffers from inconsistency issues, causing the energy model to learn a 'noisy' data distribution. In this talk I will describe a proposal for an alternative sampling framework, (pseudo)-Gibbs sampling with moment matching, which enables effective sampling from the underlying clean model when given a 'noisy' model trained via denoising score matching. We explore the benefits of the approach compared to related methods, how to scale to high-dimensional data, and connections to diffusion models.
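The inconsistency the abstract refers to can be seen in a one-line computation: for Gaussian data, the denoising score matching objective is minimized by the score of the noise-smoothed distribution, not of the clean one. A minimal sketch with a linear score model fitted in closed form (all numbers illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.5

# Clean data from N(0, 1); the sigma-smoothed density is N(0, 1 + sigma^2).
x = rng.normal(0.0, 1.0, size=100_000)
x_noisy = x + sigma * rng.normal(size=x.shape)

# Denoising score matching regresses s(x_noisy) onto -(x_noisy - x) / sigma^2.
# With a linear model s(u) = a * u, the least-squares fit is closed form.
target = -(x_noisy - x) / sigma**2
a = np.sum(x_noisy * target) / np.sum(x_noisy**2)

# The fit recovers the score slope of the SMOOTHED density, -1 / (1 + sigma^2),
# rather than the clean score slope -1: the 'noisy' model the talk starts from.
```

Sampling from the clean distribution given only this 'noisy' model is exactly the gap the moment-matching Gibbs scheme is meant to close.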

Location: Ada Lovelace


Wednesday, November 15, 15:15, 2023

Title: Deep learning-based estimation of time-dependent parameters in Markov models with application to SDEs - machine learning aspects and experiment details
Speaker: Pawel Morkisz, Professor, AGH University of Krakow

Abstract: We present a novel method for estimating time-dependent unknown parameters from discrete samples of Markov processes using deep learning techniques. Neural networks have enabled a wide variety of applications, usually in supervised machine learning, where a computer learns to perform a task by analyzing training examples with direct access to the predicted values. In this work, we employ the deep learning framework to approximate time-dependent parameters from the actual data. The idea is to recast this approximation task as an optimization problem using the maximum likelihood approach, which yields a loss function that can be used to train neural networks. We demonstrate the effectiveness of our approach through a series of numerical experiments using the deep learning framework TensorFlow. We focus on estimating parameters in multivariate regression and stochastic differential equations (SDEs). Moreover, we support this approach with theoretical results in the SDE case: we prove that, under certain conditions, the solution process of the underlying SDE with the actual parameter function is close to that of the SDE with the parameter function obtained from our neural network.
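The maximum-likelihood reduction described above can be sketched on a toy SDE with additive noise. Under an Euler-Maruyama discretization the transition density is Gaussian, so the likelihood-based loss has a closed-form minimizer per time step; the drift, step sizes, and estimator below are illustrative assumptions standing in for the neural network the talk trains on the same loss:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt, sigma = 2000, 50, 0.02, 0.3
t = np.arange(n_steps) * dt
theta_true = np.sin(2 * np.pi * t)  # hypothetical time-dependent drift

# Simulate dX = theta(t) dt + sigma dW with Euler-Maruyama.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
dX = theta_true * dt + sigma * dW

# Euler-Maruyama transition: dX_k ~ N(theta(t_k) dt, sigma^2 dt), so the
# negative log-likelihood is quadratic in theta(t_k) and the MLE is the
# sample mean of the increments divided by dt. A neural network theta(t)
# would be trained by minimizing the same likelihood loss by gradient descent.
theta_hat = dX.mean(axis=0) / dt
```

The closed-form estimator here plays the role of the network output; the theoretical result quoted in the abstract bounds how far the SDE driven by such an estimate can drift from the true solution process.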

Location: Alan Turing


Wednesday, December 6, 15:15, 2023

Title: Pure Exploration in Bandits with Linear Constraints
Speaker: Devdatt Dubhashi, Professor, Chalmers University of Technology

Abstract: We address the problem of identifying the optimal policy with fixed confidence in a multi-armed bandit setup, when the arms are subject to linear constraints. Unlike the standard best-arm identification problem which is well studied, the optimal policy in this case may not be deterministic and could mix between several arms. This changes the geometry of the problem which we characterize via an information-theoretic lower bound. We introduce two asymptotically optimal algorithms for this setting, one based on the Track-and-Stop method and the other based on a game-theoretic approach. Both these algorithms try to track an optimal allocation based on the lower bound and computed by a weighted projection onto the boundary of a normal cone. Finally, we provide empirical results that validate our bounds and visualize how constraints change the hardness of the problem.
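For known arm means, the constrained optimal policy the abstract describes is the solution of a small linear program over the simplex, and it can indeed mix between arms. A hedged sketch on a hypothetical three-arm instance (means, costs, and budget are made up for illustration; this is the planning step, not the identification algorithms from the talk):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: 3 arms with mean rewards mu and per-arm costs,
# plus a budget constraint cost @ w <= 1 on the policy w, a distribution
# over arms.
mu = np.array([1.0, 0.6, 0.2])     # mean rewards
cost = np.array([2.0, 0.5, 0.1])   # per-arm costs

# Maximize mu @ w  subject to  cost @ w <= 1,  sum(w) = 1,  w >= 0.
res = linprog(c=-mu,
              A_ub=cost[None, :], b_ub=[1.0],
              A_eq=np.ones((1, 3)), b_eq=[1.0],
              bounds=[(0.0, 1.0)] * 3)
w_star = res.x  # optimal policy: mixes arms 1 and 2, unlike best-arm ID
```

The best single arm (arm 1) is infeasible on its own here, so the optimum mixes it with a cheaper arm; this change in the solution's geometry is what the lower bound and the normal-cone projection in the talk account for.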

Location: Ada Lovelace




The seminars are typically held every fourth Wednesday at 15:15-16:15 in Ada Lovelace.
For further information, or if you want to be notified about the seminars by e-mail, please contact Fredrik Lindsten or Sourabh Balgi.


Page responsible: Fredrik Lindsten
Last updated: 2024-01-12