Neural Networks with Applications to Vision and Language (HT 2016)
* Basics of Machine Learning
* Feed-Forward Networks
* Convolutional Networks
* Recurrent and Recursive Networks
* Applications in Natural Language Processing
* Applications in Computer Vision
The course is intended for doctoral students with an interest in neural networks and in how to apply them to their own research.
The course has not been given before.
On successful completion of the course the student will be able to:
* explain different network architectures and how these are used in current applications,
* implement, train, and evaluate neural networks using existing software libraries,
* present and critically assess current research on neural networks and their applications,
* relate the concepts and techniques introduced in the course to the student's own research,
* plan and carry out a research project on neural networks within given time limits.
Recommended prerequisites:
* basic calculus (derivatives)
* basic linear algebra (matrices, vectors)
* basic probability and statistics
* programming experience in Java, MATLAB, and/or Python
Gaps in these prerequisites may be filled by teacher-assisted self-study before the start of the course; contact the examiner for details.
Basics of machine learning (regression, classification, numerical optimisation). Feed-forward networks. Loss functions. Backpropagation training. Regularisation. Convolutional networks. Recurrent and recursive networks. Processing sequences, images, and hierarchical structures. Applications of neural networks in natural language processing and computer vision. Current areas of research.
The course is organised in three parts. The first part consists of lectures presenting basic concepts and methods in neural networks, as well as applications in natural language processing and computer vision. This part also includes a number of lab sessions that will give students practical experience in implementing, training, and evaluating neural networks using existing software libraries. The second part of the course is a series of seminars where students present and discuss one or more research articles. The third part is an individual project that the students choose based on their own research interests.
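To give a flavour of the lab work described above, the core concepts from the course contents (a feed-forward network, a loss function, backpropagation, and gradient descent) can be sketched in a few lines of plain NumPy. The network size, the toy regression data, and the learning rate below are arbitrary illustrative assumptions, not material from the course itself:

```python
import numpy as np

# A minimal one-hidden-layer feed-forward network trained with
# backpropagation on a toy regression problem (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # 100 examples, 3 features
y = np.sin(X.sum(axis=1, keepdims=True))  # toy regression target

W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.1      # learning rate for gradient descent
losses = []
for step in range(500):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append((err ** 2).mean())      # mean squared error

    # Backpropagation: gradients of the loss w.r.t. each parameter.
    g_pred = 2 * err / len(X)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T * (1 - h ** 2)    # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ g_h
    gb1 = g_h.sum(axis=0)

    # Gradient-descent parameter update.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

In the labs themselves, students would use existing software libraries rather than hand-written gradients; the point of a sketch like this is only to make the forward/backward structure of training concrete.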
* Christopher M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
* Simon O. Haykin. Neural Networks and Learning Machines. Third edition. Prentice Hall, 2008.
* Yoshua Bengio, Ian J. Goodfellow, and Aaron Courville. Deep Learning. Book in preparation for MIT Press, 2015.
* Yoav Goldberg. A Primer on Neural Network Models for Natural Language Processing. CoRR, abs/1510.00726, 2015.
Teachers:
* Marco Kuhlmann (IDA)
* Michael Felsberg (ISY)
Examiner: Marco Kuhlmann (IDA)
Examination (6 credits in total):
* lab assignments (1.5 credits)
* active participation in the research seminar (1.5 credits)
* individual project (3 credits)
Organised by the Natural Language Processing Laboratory (IDA) and the Computer Vision Laboratory (ISY).
This proposal has also been submitted to the Faculty of Science and Engineering for consideration as a faculty-level course.
Page responsible: Director of Graduate Studies
Last updated: 2012-05-03