The LiU Seminar Series in Statistics and Mathematical Statistics
Tuesday, September 9, 1.30 pm (NEW TIME), 2025. Seminar in Statistics.
The use of fossils for Bayesian phylogenetic inference
Joëlle Barido-Sottani, Institute of Biology, ENS
Abstract: Phylogenetic inference is used to reconstruct the evolutionary relationships between different species or groups, in the form of the Tree of Life. Phylogenetic trees are also used to study the diversification process that has led to the current biodiversity. In particular, we can use these trees to estimate speciation and extinction rates, and the influence of different factors (morphology, climatic changes, etc.) on these rates. While phylogenetic inference was originally focused mostly on living species, it is becoming increasingly clear that we need to include information from the fossil record to obtain an accurate picture of past evolutionary dynamics. In this talk, I will show how Bayesian phylogenetic inference has expanded to fully integrate fossil samples into analyses of the evolutionary process, and I will discuss some of the current challenges faced by this type of inference.
Location: Alan Turing.
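To illustrate the kind of process such analyses build on, here is a minimal Python sketch that simulates lineage and fossil counts under a constant-rate fossilized birth-death model. The rates lam, mu, psi, the horizon t_max, and the function name simulate_fbd are illustrative assumptions, not taken from the talk; a full analysis infers a dated tree, whereas this sketch only tracks counts.

    # Minimal Gillespie-style simulation of lineage counts under a constant-rate
    # fossilized birth-death process (rates below are illustrative only).
    import random

    def simulate_fbd(t_max=10.0, lam=0.9, mu=0.4, psi=0.2, seed=1):
        """Track the number of living lineages and fossil finds through time."""
        rng = random.Random(seed)
        t, n_lineages, n_fossils = 0.0, 1, 0
        while t < t_max and n_lineages > 0:
            total_rate = n_lineages * (lam + mu + psi)
            t += rng.expovariate(total_rate)   # waiting time to the next event
            if t >= t_max:
                break
            u = rng.random() * (lam + mu + psi)
            if u < lam:                         # speciation: one lineage splits
                n_lineages += 1
            elif u < lam + mu:                  # extinction: one lineage dies
                n_lineages -= 1
            else:                               # a fossil is sampled along a lineage
                n_fossils += 1
        return n_lineages, n_fossils

    if __name__ == "__main__":
        print(simulate_fbd())  # (extant lineage count, fossil count)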
Tuesday, October 28, 1.30 pm (NEW TIME), 2025. Seminar in Statistics.
A Framework for Detecting Structural Heterogeneity in Latent Variable Models
Gabriel Wallin, School of Mathematical Sciences, Lancaster University
Abstract: download
Location: Alan Turing.
Tuesday, November 25, 3.15 pm, 2025. Seminar in Mathematical Statistics.
Misclassification probability approximation for a quadratic classifier with repeated measurements
Jean de Dieu Niyigena, University of Rwanda
Abstract: In classification, any decision rule involves a risk of misclassification. We study this problem for two populations with unequal covariance matrices and repeated measurements. A quadratic classification rule is derived for the growth curve model, and the unknown parameters are estimated. Using these estimators, we compute the moments of the quadratic classifier and employ an Edgeworth-type expansion to approximate the misclassification probabilities. Numerical simulations compare the probabilities obtained using true and estimated mean parameters, as well as those computed via Monte Carlo simulations. The results show that misclassification decreases as the separation between the two groups grows, demonstrating improved classification accuracy for well-separated populations.
Location: Komp.
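As a rough companion to the abstract above, the following Python sketch sets up a quadratic discriminant rule for two normal populations with unequal covariance matrices and estimates its misclassification probability by Monte Carlo simulation. It is a simplified illustration that ignores the repeated-measurements and growth curve structure of the talk; all parameter values and the names quadratic_score and misclassification_rate are assumptions made for this example.

    # Illustrative sketch (not the speaker's code): quadratic classification rule for
    # two normal populations with unequal covariances, Monte Carlo error estimate.
    import numpy as np

    def quadratic_score(x, mean, cov):
        """Log-density-based discriminant score for one population."""
        diff = x - mean
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

    def misclassification_rate(mu1, S1, mu2, S2, n_sim=20000, seed=0):
        rng = np.random.default_rng(seed)
        errors = 0
        for pop, (mu, S) in enumerate([(mu1, S1), (mu2, S2)]):
            x = rng.multivariate_normal(mu, S, size=n_sim)
            # assign each draw to the population with the larger quadratic score
            assign_to_2 = np.array([quadratic_score(xi, mu2, S2) > quadratic_score(xi, mu1, S1)
                                    for xi in x])
            errors += np.sum(assign_to_2 != pop)  # pop=0 should be assigned to population 1
        return errors / (2 * n_sim)

    mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
    S1 = np.array([[1.0, 0.3], [0.3, 1.0]])
    S2 = np.array([[2.0, -0.2], [-0.2, 1.5]])
    print(misclassification_rate(mu1, S1, mu2, S2))

Increasing the separation between mu1 and mu2 in this toy setup lowers the estimated error, mirroring the qualitative finding described in the abstract.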
Tuesday, December 9, 1.30 pm (NEW TIME), 2025. Seminar in Statistics.
A comparative study of some gradient based and gradient free methods for constructing optimal approximate designs
Akram Mahmoudi, Department of Statistics, Stockholm University
Abstract: Motivated by the widespread application of optimal designs, we compare different methods of constructing approximate optimal designs. Several optimization algorithms are considered, some gradient-based and some gradient-free: a class of multiplicative algorithms, simulated annealing, and Nelder-Mead combined with the barrier method. The algorithms are explored through iterations for the deterministic methods and through simulations for the stochastic methods. They are investigated across various models, including the quadratic, cubic, and quartic models, a practical model from chemistry, and models with two and three design variables. We present a comprehensive behavioral analysis of the algorithms, focusing on the iterative and simulative approaches, and highlight the key findings. The strengths and weaknesses of the methods are analyzed and the methods are compared. Based on our results, the multiplicative algorithm is the best choice when the gradient is available, owing to its speed, accuracy, and ease of implementation compared with Nelder-Mead and simulated annealing. Gradient-free methods such as Nelder-Mead and simulated annealing should be used when the gradient is difficult or impossible to obtain; although simulated annealing is slower and sensitive to its components, it is more accurate. Which algorithm is preferable ultimately depends on the requirements of the optimization problem.
Location: Alan Turing.
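For readers unfamiliar with the multiplicative algorithm mentioned in the abstract above, here is a small Python sketch of a standard multiplicative update for a D-optimal approximate design on a candidate grid, using the quadratic model f(x) = (1, x, x^2) on [-1, 1] as an example. The grid size, iteration count, and tolerance are arbitrary choices, and the code is not drawn from the talk.

    # Illustrative sketch: multiplicative algorithm for a D-optimal approximate design
    # on a candidate grid, for the quadratic regression model on [-1, 1].
    import numpy as np

    x_grid = np.linspace(-1.0, 1.0, 201)                               # candidate design points
    F = np.column_stack([np.ones_like(x_grid), x_grid, x_grid**2])     # rows f(x) of the model matrix
    p = F.shape[1]                                                     # number of model parameters

    w = np.full(len(x_grid), 1.0 / len(x_grid))                        # start from the uniform design
    for _ in range(2000):
        M = F.T @ (w[:, None] * F)                                     # information matrix M(w)
        d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)           # variance function d(x_i, w)
        w_new = w * d / p                                              # multiplicative update, keeps sum(w) = 1
        converged = np.max(np.abs(w_new - w)) < 1e-10
        w = w_new
        if converged:
            break

    support = x_grid[w > 1e-3]
    print("support points:", support)
    print("weights:", w[w > 1e-3])

For this model the D-optimal design is known to place weight 1/3 on each of -1, 0, and 1, and the iteration concentrates the weights around those points.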
Past Seminars
Spring 2025 | Fall 2024 | Spring 2024 | Fall 2023 | Spring 2023 | Fall 2022 | Spring 2022 | Fall 2021 | Spring 2021 | Fall 2020 | Spring 2020 | Fall 2019 | Spring 2019 | Fall 2018 |
Spring 2018 | Fall 2017 | Spring 2017 | Fall 2016 | Spring 2016 | Fall 2015 | Spring 2015 |
Fall 2014 | Spring 2014 | Fall 2013 | Spring 2013 | Fall 2012 | Spring 2012 | Fall 2011
Page responsible: Krzysztof Bartoszek
Last updated: 2025-11-25
