Abstracts of Ph.D. theses

Linköping Studies in Statistics

No 13

Characterizing Temporal Change and Inter-Site Correlation in Daily and Sub-daily Precipitation Extremes

Agné Burauskaité-Harju

Information on weather extremes is essential for risk awareness in the planning of infrastructure and agriculture, and it may also play a key role in our ability to adapt to recurrent or more or less unique extreme events. This thesis reports new statistical methodologies that can aid climate risk assessment under conditions of climate change. Increasing access to data of high temporal resolution is a central factor in developing novel techniques for this purpose. In particular, a procedure is introduced for the analysis of long-term changes in daily and sub-daily records of observed or modelled weather extremes. Extreme value theory is employed to enhance the power of the proposed statistical procedure, and inter-site dependence is taken into account to enable regional analyses. Furthermore, new methods are presented to summarize and visualize spatial patterns in the temporal synchrony and dependence of weather events such as heavy precipitation at a network of meteorological stations. The work also demonstrates the significance of accounting for temporal synchrony in the diagnostics of inter-site asymptotic dependence.
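To make the extreme-value step concrete, the following is a minimal sketch, not the procedure developed in the thesis: it fits a generalized extreme value (GEV) distribution to simulated annual precipitation maxima with scipy and computes a 100-year return level. All data and parameter values are hypothetical.

```python
# Minimal GEV illustration (hypothetical data, not the thesis's method).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Stand-in for 50 years of annual maximum daily precipitation (mm).
annual_maxima = rng.gumbel(loc=40.0, scale=8.0, size=50)

# Maximum-likelihood fit of the GEV; note that scipy's shape parameter c
# equals minus the shape parameter xi common in climatology.
c, loc, scale = genextreme.fit(annual_maxima)

# 100-year return level: the level exceeded with probability 1/100 per year.
rl_100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(f"Estimated 100-year return level: {rl_100:.1f} mm")
```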

No 11

Monotonic regression for large multivariate datasets

Oleg Sysoev

Monotonic regression is a non-parametric statistical method that is designed especially for applications in which the expected value of a response variable increases or decreases in one or more explanatory variables. Such applications can be found in business, physics, biology, medicine, signal processing, and other areas. Inasmuch as many of the collected datasets can contain a very large number of multivariate observations, there is a strong need for efficient numerical algorithms. Here, we present new methods that make it feasible to fit monotonic functions to more than one hundred thousand data points. By simulation, we show that our algorithms have high accuracy and represent considerable improvements with respect to computational time and memory requirements. In particular, we demonstrate how segmentation of a large-scale problem can greatly improve the performance of existing algorithms. Moreover, we show how the uncertainty of a monotonic regression model can be estimated. One of the procedures we developed can be employed to estimate the variance of the random error present in the observed response. Other procedures are based on resampling techniques and can provide confidence intervals for the expected response at given levels of a set of predictors.
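For readers unfamiliar with the basic technique, the sketch below fits a univariate monotonic (isotonic) regression with scikit-learn's pool-adjacent-violators implementation on simulated data; the thesis itself targets the much harder multivariate, large-scale setting.

```python
# Univariate isotonic regression on simulated data (illustration only).
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, size=200))
y = np.log1p(x) + rng.normal(scale=0.2, size=200)  # noisy increasing signal

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)  # monotone least-squares fit via PAVA
```

The residuals y - y_fit could then feed an estimate of the error variance, in the spirit of the uncertainty procedures mentioned above.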

No 10

Roadmap for Trend Detection and Assessment of Data Quality

Karl Wahlin

Regular measurements of the state of the environment constitute a cornerstone of environmental management. Without the support of long time series of reliable data, we would know much less about changes that occur in the environment and their causes. The present research aimed to explore how improved techniques for data analysis can help reveal flawed data and extract more information from environmental monitoring programmes. Based on our results, we propose that the organization of such monitoring should be transformed from a system for measuring and collecting data to an information system in which resources have been reallocated to data analysis. More specifically, this thesis reports improved methods for joint analysis of trends in multiple time series and for detection of artificial level shifts in the presence of smooth trends. Furthermore, special consideration is given to methods that automatically detect and adapt to the interdependence of the collected data. The current work resulted in a roadmap describing how to proceed from a set of observed concentrations to conclusions about the quality of the data and the existence of trends therein. Improvements in existing software accompanied the development of new statistical procedures.
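As one elementary building block of trend analysis, the sketch below implements the classical Mann-Kendall test for a monotonic trend in a single series, using the normal approximation and ignoring ties and serial dependence; the joint, dependence-aware methods of the thesis go well beyond this simple case.

```python
# Classical Mann-Kendall trend test (single series, no tie correction).
import numpy as np
from scipy.stats import norm

def mann_kendall(y):
    y = np.asarray(y, dtype=float)
    n = len(y)
    # S: sum of signs of all pairwise differences y[j] - y[i], j > i.
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18  # variance of S under no trend
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))     # two-sided p-value
    return s, z, p
```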

No 9

Computer Experiments Designed to Explore and Approximate Complex Deterministic Models

Davood Shahsavan

Computer experiments are widely used to investigate how technical, economic, and ecological systems respond to changes in inputs or driving forces. This thesis focuses on computer experiments that can help us better understand computations based on complex numerical models that are defined only by computer code. The main part of our work was devoted to experiments involving the derivation and application of computationally cheap so-called surrogate models, which produce nearly the same results as the original model. We developed an adaptive sequential design algorithm that efficiently reveals non-linear responses to changes in model input, and we integrated this algorithm with methods for predicting the model's output for new input data. Compared with the methods currently in use, our algorithm has the advantage of requiring no prior information about the structure of the model. In particular, we noted that it performs satisfactorily even when different parts of the model's response surface have entirely different statistical properties. Variance-based sensitivity analysis is a well-established technique for elucidating model output, but it can incur high computational costs because it requires many model runs. Surrogate models can facilitate the analysis in such cases. Moreover, if our sequential design algorithm is used, important information can also be obtained about both linear and non-linear effects of changes in the model's input data. Experiments involving repeated runs of a model of the flow of water and nitrogen through a catchment showed that the essentials of large, complex models can be elucidated. In addition, our research showed that computationally cheap surrogate models offer an ideal basis for decision support and learning processes, since they can provide an almost immediate response to the data the user feeds into the model.
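To illustrate the general surrogate-model idea (not the algorithm developed in the thesis), the sketch below emulates a cheap stand-in simulator with a Gaussian-process regressor fitted to a dozen model runs, after which predictions for new inputs are nearly instantaneous.

```python
# Gaussian-process surrogate for a deterministic model (illustration only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):              # hypothetical stand-in for an expensive model
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0.0, 2.0, 12).reshape(-1, 1)  # a few "model runs"
y_train = simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

X_new = np.array([[0.7], [1.3]])
y_pred, y_std = gp.predict(X_new, return_std=True)  # near-instant surrogate
```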
