732A36 Theory of Statistics
This course covers the theoretical foundations of statistical inference. Statistical inference is the area of statistical science concerned with drawing conclusions about an underlying population from observations obtained from that population. The ideal case is a random sample of observations from the population, but in practice the observations do not always constitute a random sample in the strict sense.
Statistical inference may be divided into three main parts: point estimation, interval estimation and hypothesis testing. These parts naturally come together when the inferential procedure is synthesized.
Point estimation is about finding the best approximate value of one or more population parameters by means of so-called (point) estimators. In the search for an estimator that is optimal in some sense, concepts such as unbiasedness, consistency, efficiency, sufficiency and completeness need to be investigated. In addition, a number of more structured methods for finding point estimators will be covered, of which the method of maximum likelihood is the most important.
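As a small illustration (not part of the course material), the maximum-likelihood idea can be sketched in a few lines of Python for a simulated exponential sample, where the MLE of the rate parameter has a closed form:

```python
import random

random.seed(1)

# Simulated sample from an exponential distribution with true rate 2.0
# (illustrative data only).
sample = [random.expovariate(2.0) for _ in range(1000)]

# For the exponential distribution the log-likelihood is
#   l(lambda) = n*log(lambda) - lambda*sum(x_i),
# and setting its derivative to zero gives the MLE
#   lambda_hat = n / sum(x_i),
# i.e. the reciprocal of the sample mean.
n = len(sample)
lambda_hat = n / sum(sample)
print(round(lambda_hat, 2))  # close to the true rate 2.0
```

With 1000 observations the estimate lands close to the true rate, illustrating consistency of the MLE.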
Interval estimation is about finding an interval that contains a population parameter with a certain degree of confidence. To construct such intervals, properties of the corresponding point estimators and/or of the underlying population must be used.
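For concreteness, a standard confidence interval for a population mean can be sketched as follows (hypothetical data; a t-quantile rather than the normal 0.975-quantile 1.96 would be used for small samples in practice):

```python
import statistics
from math import sqrt

# Hypothetical sample, for illustration only.
x = [12.1, 11.4, 13.0, 12.6, 11.8, 12.3, 12.9, 11.9, 12.4, 12.7]

n = len(x)
mean = statistics.mean(x)
se = statistics.stdev(x) / sqrt(n)  # estimated standard error of the mean

# Approximate 95% confidence interval based on the normal approximation.
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(round(lo, 2), round(hi, 2))
```

The interval uses a property of the point estimator (its approximate sampling distribution), which is exactly the connection described above.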
Hypothesis testing is usually the final step of the inferential procedure, in which statements about the population and its parameters are tested for validity. There is in some respects a duality between interval estimation and hypothesis testing, but very often hypothesis testing is carried out without any connection to interval estimation. This is especially the case in nonparametric inference, where assumptions about the population are relaxed. Further, as for point estimation, there are a number of properties of a particular test that should be investigated in order to select the most appropriate test for a specific situation.
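A minimal nonparametric example of the kind mentioned above is the sign test, which tests a hypothesis about a median without distributional assumptions (hypothetical data, for illustration only):

```python
from math import comb

# Sign test of H0: the median of the paired differences is 0.
# Hypothetical paired differences, for illustration only.
diffs = [0.8, -0.3, 1.2, 0.5, 0.9, -0.1, 0.7, 1.1, 0.4, 0.6]

# Under H0 each nonzero difference is positive with probability 1/2,
# so the number of positive signs is Binomial(n, 1/2).
n = len(diffs)
k = sum(d > 0 for d in diffs)  # 8 positive signs here

# Two-sided p-value: twice the tail probability of the more extreme count.
p = 2 * sum(comb(n, j) for j in range(min(k, n - k) + 1)) / 2 ** n
print(round(p, 3))  # 0.109
```

The only assumption used is independence of the signs, illustrating how nonparametric tests relax assumptions about the population.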
In the historical development of statistical science, two main schools of statistical inference have emerged: the classical school and the Bayesian school. The difference between them lies mainly in the way prior assumptions about the population are included in the inferential procedure. This course deals with both schools.
The course will to a large extent be analytical and theoretical in its instruction and exercises. However, we will also treat more modern methods of inference for which analytical expressions for various estimation and hypothesis-testing problems are not feasible. These so-called computationally intensive methods comprise concepts such as the bootstrap, the jackknife, cross-validation and Gibbs sampling. The methods are no less theoretical in their structure, but the problems almost always require solutions by computer programming.
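To give a flavour of these computationally intensive methods, the nonparametric bootstrap can be sketched in a few lines of Python (hypothetical data; the statistic and resample count are illustrative choices):

```python
import random
import statistics

random.seed(2)

# A small observed sample (hypothetical data, for illustration only).
data = [4.1, 5.3, 2.8, 6.0, 4.7, 5.5, 3.9, 4.4, 5.1, 3.6]

# Nonparametric bootstrap: resample the data with replacement many times
# and recompute the statistic of interest (here the median) each time.
B = 2000
boot_medians = [
    statistics.median(random.choices(data, k=len(data))) for _ in range(B)
]

# The standard deviation of the bootstrap replicates estimates the
# standard error of the sample median, for which no simple analytical
# expression exists.
se_hat = statistics.stdev(boot_medians)
print(round(se_hat, 3))
```

The appeal of the method is that the same recipe works for almost any statistic, precisely in situations where analytical standard errors are not feasible.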
The teaching of this course will be in the form of lectures/tutorials (indicated as "lecture" in the timetable) and three problem seminars, at which course participants are expected to present and discuss solutions to exercises.
Examination consists of:
- A number of more complex assignments, to be solved with the help of computer programming.
- A written final exam.
Garthwaite, P.H., Jolliffe, I.T. and Jones, B. (2002). Statistical Inference, 2nd ed. Oxford University Press, Oxford. ISBN 0-19-857226-3.
November 11, 2011 at 08.15 in room John von Neumann
Course leader and tutor:
Anders Nordgaard, E-mail: Anders.Nordgaard@liu.se
The contents and teaching of the course take into account knowledge of probability theory corresponding to the contents of the course Probability Theory (6 credits) within the Master's Programme in Statistics, Data Analysis and Knowledge Discovery.
Page responsible: Anders Nordgaard
Last updated: 2011-10-18