
IDA Machine Learning Seminars - Spring 2024

The IDA Machine Learning Seminars are a series of research presentations given by nationally and internationally recognized researchers in the field of machine learning.

• You can subscribe to the email list used for announcing upcoming seminars here.
• You can subscribe to the seminar series' calendar using this ics link.

Friday, March 15, 10:30, 2024

Learning Paradigms for Timeseries (TS) and SpatioTemporal (ST) Data (and Tasks):
Towards Generative AI for TS and ST
Flora Salim, Professor in the School of Computer Science and Engineering, University of New South Wales (UNSW)

Abstract: The initial release of ChatGPT saw worldwide uptake of more than 100 million users in just two months after its launch in November 2022. Several key milestones led to the development of the foundation models that underpin the ChatGPT technology, including the introduction of the Transformer architecture and the self-supervised learning paradigm. How have these underpinning technologies been applied in the pervasive computing domain, for example in human behaviour modelling and in traffic and weather forecasting? Access to annotated human behaviour data has been expensive and often infeasible. This demands new ways of modelling behaviour at scale, moving away from discriminative, fully-supervised learning approaches and from narrow tasks. The heterogeneity of both the data sources and the downstream tasks, as well as the lack of annotations, makes self-supervised learning a compelling choice, as it requires no labelled data and can produce compact, generalisable models. I will present our self-supervised learning (SSL) pretraining approaches for multimodal sensor data, as well as recent work on multimodal self-supervision. I will show why the Transformer architecture, designed for sequence-to-sequence modelling with a multi-head attention mechanism, is a natural fit for time-series data. When combined with a graph structure, it becomes a powerful combination for spatiotemporal prediction tasks. I will also present our work on leveraging Large Language Models (LLMs) for time-series modelling, such as traffic forecasting and energy demand forecasting, using natural language prompts. Finally, I will discuss open issues around these models, including fairness and explainability, and present our ongoing projects to address them.
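The core reason self-attention suits time series, as the abstract argues, is that every timestep can attend directly to every other timestep of the sequence. A minimal illustrative sketch of a single attention head on a toy multivariate time series (random stand-in weights, not the speaker's models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multivariate time series: T timesteps, d features per step.
T, d = 8, 4
x = rng.normal(size=(T, d))

# One head of scaled dot-product self-attention. The projection
# matrices are random stand-ins for learned parameters.
Wq, Wk, Wv = (rng.normal(scale=0.5, size=(d, d)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d)                  # (T, T): every timestep scores every other
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over timesteps
out = weights @ V                              # (T, d): context-mixed representation

print(out.shape)  # (8, 4)
```

Multi-head attention runs several such heads in parallel and concatenates their outputs; for spatiotemporal tasks the attention pattern can additionally be constrained by a graph over locations.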

Bio: Flora Salim is a Professor in the School of Computer Science and Engineering (CSE), the inaugural Cisco Chair of Digital Transport & AI, University of New South Wales (UNSW) Sydney, and the Deputy Director (Engagement) of the UNSW AI Institute. Her research is on machine learning for time-series and multimodal sensor data and on trustworthy AI. She has received several prestigious fellowships, including the Humboldt-Bayer Fellowship, the Humboldt Fellowship, the Victoria Fellowship, and an ARC Australian Postdoctoral (Industry) Fellowship. She was a recipient of the Women in AI Awards 2022 Australia and New Zealand (Defence and Intelligence Category). She has worked with many industry and government partners and managed large-scale research and innovation projects, leading to several patents and deployed systems. She is a member of the Australian Research Council (ARC) College of Experts. She has served as a Senior Area Chair / Area Chair of AAAI, WWW, NeurIPS, and many other top-tier conferences in AI and ubiquitous computing. She is an Editor of IMWUT, Associate Editor-in-Chief of IEEE Pervasive Computing, Associate Editor of ACM Transactions on Spatial Algorithms and Systems, a Steering Committee member of ACM UbiComp, and an Associate of ELLIS Alicante.

Location: Alan Turing

Wednesday, March 20, 15:15, 2024

Can neural ODEs forecast global weather?
Markus Heinonen, Department of Computer Science, Aalto University

Abstract: Neural ODEs have surfaced in the last decade as a new perspective on modelling dynamics: the time-derivative that drives the system evolution forward is learned as a neural network. While successful on systems of limited complexity, neural ODEs have lacked large-scale demonstrations. Recently, large autoregressive transformer models have achieved breakthrough global weather forecasting performance, but often with little consideration of the underlying dynamics. We consider global weather as a continuous-time PDE with mass-preserving dynamics, and show how simple convolutional networks can achieve state-of-the-art weather prediction performance with just a few million parameters. This talk is based on the paper ClimODE: Climate Forecasting With Physics-informed Neural ODEs, accepted for an oral presentation at ICLR 2024.
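The neural-ODE idea in the abstract, learning the time-derivative as a network and integrating it forward, can be sketched in a few lines. This is a toy illustration with random weights and a plain Euler solver, not the ClimODE model (which uses convolutional networks, a proper ODE solver, and trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny MLP standing in for the learned time-derivative f(x).
# In a real neural ODE its weights are trained by backpropagating
# through the solver; here they are random, purely for illustration.
W1 = rng.normal(scale=0.1, size=(2, 16))
W2 = rng.normal(scale=0.1, size=(16, 2))

def f(x):
    """dx/dt modelled as a small neural network."""
    return np.tanh(x @ W1) @ W2

def euler_integrate(x0, dt=0.1, steps=50):
    """Roll the state forward by repeatedly adding f(x) * dt."""
    x = x0
    traj = [x]
    for _ in range(steps):
        x = x + dt * f(x)
        traj.append(x)
    return np.stack(traj)

traj = euler_integrate(np.array([1.0, 0.0]))
print(traj.shape)  # (51, 2): the initial state plus 50 Euler steps
```

Forecasting then amounts to integrating the learned dynamics from the current observed state; physics-informed variants additionally constrain f so that conserved quantities (such as mass) are preserved along the trajectory.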

Bio: Markus Heinonen is an Academy Research Fellow at Aalto University, Finland, with a PhD from the University of Helsinki (2013). His research interests centre on probabilistic machine learning, with an emphasis on understanding the uncertainty of deep learning through Bayesian perspectives on neural networks. In addition, he has worked on learning ODEs and PDEs with neural networks and Gaussian processes.

Location: Ada Lovelace

Wednesday, April 17, 15:15, 2024

Uncertainty Estimation for Independent and Non-Independent Data
Bertrand Charpentier, Cofounder & Chief Scientist at Pruna AI

Abstract: Both practical and theoretical reasons justify the need for uncertainty estimation when building reliable machine learning models. While uncertainty estimation is expected to provide trust, safety, and fairness, and to facilitate maintenance in real-world applications, it is also required to represent the real physical world, which is inherently non-deterministic and only partially observable. Furthermore, machine learning models must deal with various types of input data (e.g. tabular, image, graph, or sequential data) and output data (classes, real values, counts, time events), which can be assumed either independent or non-independent. While the independence assumption is convenient for representing various data types, the non-independence assumption is particularly useful for representing complex data types with network effects or time effects. In this talk, based on my dissertation, we look at uncertainty estimation for both independent and non-independent data. To this end, we elaborate on three main aspects. (1) We propose desiderata capturing the desired behavior of uncertainty estimation. These desiderata cover both aleatoric and epistemic uncertainty in the presence of perturbations (in particular adversarial perturbations), network effects, or time effects. Further, we analyze the desired behavior of uncertainty estimates at both training and testing time. (2) We present a large family of new Bayesian models which provide uncertainty estimates at low practical cost. These models demonstrate strong empirical performance and have theoretical guarantees for different data types. (3) We develop extensive metrics to evaluate uncertainty estimates in practical tasks. These experimental setups cover correct/wrong prediction detection, Out-Of-Distribution (OOD) detection, dataset shifts, and calibration metrics in the presence of (adversarial) perturbations, network effects, or time effects. Finally, we analyze the benefit of using uncertainty estimates to achieve a good exploration/exploitation trade-off with high sample efficiency.
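One common baseline behind tasks like the correct/wrong prediction detection and OOD detection mentioned above is to score inputs by the entropy of the model's predictive distribution. This is a generic illustrative sketch, not the specific Bayesian models of the talk:

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a categorical predictive distribution; higher = more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

# A confident in-distribution-style prediction vs. a maximally
# uncertain (uniform) one over three classes.
confident = np.array([0.97, 0.01, 0.02])
uncertain = np.array([1/3, 1/3, 1/3])

print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
```

Inputs whose predictive entropy exceeds a threshold can then be flagged as likely OOD or likely misclassified; richer Bayesian models refine this by separating aleatoric from epistemic contributions to the uncertainty.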

Location: Alan Turing

Wednesday, May 15, 15:15, 2024

Towards Causal Deep Generative Models for Sequential Data
Yingzhen Li, Department of Computing, Imperial College London

Abstract: One of my research dreams is to build a high-resolution video generation model that enables fine-grained control over, e.g., the scene appearance and the interactions between objects. I tried, and then realised that the deep learning tricks I had to invent for this goal were needed because of the non-identifiability of my sequential deep generative models. In this talk I will discuss our research towards developing identifiable deep generative models for sequence modelling, and share some recent and ongoing work on switching dynamic models. Throughout the talk I will highlight the balance between the causality "Theorist" and the deep learning "Alchemist", and discuss my opinions on the future of causal deep generative modelling research.

Location: Alan Turing

Future Seminars

Fall 2024

Past Seminars

Fall 2023   |   Spring 2023   |   Fall 2022   |   Spring 2022   |   Spring 2021   |   Spring 2020   |   Fall 2019   |   Spring 2019   |   Fall 2018   |   Spring 2018   |   Fall 2017   |   Spring 2017   |   Fall 2016   |   Spring 2016  |   Fall 2015   |   Spring 2015   |   Fall 2014

The seminars are typically held every fourth Wednesday at 15.15-16.15 in Alan Turing.
For further information, or if you want to be notified about the seminars by e-mail, please contact Fredrik Lindsten.

Page responsible: Fredrik Lindsten
Last updated: 2024-05-08