
The LiU Seminar Series in Statistics and Mathematical Statistics



Tuesday, October 22, 3.15 pm, 2024. Seminar in Statistics.

Gaussian Whittle–Matérn fields on metric graphs
Jonas Wallin, Department of Statistics, Lund University
Abstract: Random fields are popular models in statistics and machine learning for spatially dependent data on Euclidean domains. However, in many applications, data are observed on non-Euclidean domains such as street networks or river networks. In this case, it is much more difficult to construct valid random field models. In this talk, we discuss some recent approaches to modeling data in this setting, and in particular define a new class of Gaussian processes on compact metric graphs. The proposed models, the Whittle–Matérn fields, are defined via a stochastic partial differential equation on the compact metric graph and are a natural extension of Gaussian fields with Matérn covariance functions on Euclidean domains to the non-Euclidean metric graph setting. We discuss various properties of the models and show how to use them for statistical inference. Finally, we illustrate the model via an application to modeling traffic data. If time permits, we will also discuss how to modify the processes so that they can be applied to directional networks.
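For orientation, the Whittle–Matérn construction mentioned in the abstract can be summarized by the standard fractional SPDE (a sketch in the usual parametrization with range parameter κ, smoothness exponent α, and precision parameter τ; this notation is assumed, not taken from the talk):

```latex
% Whittle--Matérn field u on a compact metric graph \Gamma (sketch)
(\kappa^2 - \Delta)^{\alpha/2} (\tau u) = \mathcal{W} \quad \text{on } \Gamma,
```

where Δ is a Laplacian on the graph and 𝒲 is Gaussian white noise. On ℝ^d the same equation yields Gaussian fields with Matérn covariance of smoothness ν = α − d/2, which is what makes the metric-graph models a natural extension of the Euclidean case.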
Location: Alan Turing.

Tuesday, November 19, 3.15 pm, 2024. Seminar in Statistics.

Confidence distributions for the parameters in an autoregressive process
Rolf Larsson, Department of Mathematics; Statistics, AI and Data Science, Uppsala University
Abstract: The notion of confidence distributions is applied to inference about the parameters in autoregressive models, allowing the parameters to take values that make the models non-stationary. This makes it possible to compare with asymptotic approximations in the stationary and non-stationary cases at the same time. The main point, however, is to compare with a Bayesian analysis of the same problem. A non-informative prior for a parameter, in the sense of Jeffreys, is given as the ratio of the confidence density to the likelihood. In this way, the similarity between the confidence and non-informative Bayesian frameworks is exploited. It is shown that, in the stationary case, the prior so induced is asymptotically flat. In the order-one case, if a unit parameter value (a unit root) is allowed, the induced prior must have a spike of some size at one.
Simulation studies and empirical examples illustrate the ideas.
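The prior-construction step described in the abstract can be written compactly as follows (a sketch; the symbols c for the confidence density, L for the likelihood, and x for the data are assumed notation, not taken from the talk):

```latex
% Induced non-informative prior as the ratio of confidence density to likelihood
\pi(\theta) \;\propto\; \frac{c(\theta; x)}{L(\theta; x)},
```

so that multiplying this prior by the likelihood recovers the confidence density as the corresponding posterior, which is the sense in which the confidence and non-informative Bayesian frameworks coincide.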
Location: Alan Turing.

Tuesday, December 3, 3.15 pm, 2024. Seminar in Statistics.

Computationally efficient uncertainty quantification for sparse Bayesian learning
Jan Glaubitz, Department of Mathematics, Linköping University
Abstract: download
Location: Alan Turing.


Page responsible: Krzysztof Bartoszek
Last updated: 2025-02-25