Neural Networks and Deep Learning

PhD Course, 3 + 3 credits, Autumn 2019.
Instructors: Michael Felsberg (ISY), Marco Kuhlmann (IDA)

Over the past few years, neural networks have enjoyed a major resurgence in machine learning, and today yield state-of-the-art results in various fields. This course provides an introduction to deep neural network models, and surveys some of the applications of these models in areas where they have been particularly successful. The course covers feedforward networks, convolutional networks, recurrent and recursive networks, as well as general topics such as input encoding and training techniques. The course also provides acquaintance with some of the software libraries available for building and training deep neural networks.



Gaps in these prerequisites may be filled by teacher-assisted self-study before the start of the course; contact the instructors for details.

Intended learning outcomes

On successful completion of the course you should be able to:


Course content

Basics of machine learning (regression, classification, numerical optimisation). Feedforward networks. Loss functions. Back-propagation training. Regularisation. Convolutional networks. Recurrent and recursive networks. Processing sequences, images, and hierarchical structures. Applications of neural networks. Current areas of research.
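Several of these topics fit in a few lines of code. The sketch below is not course material; it is a minimal NumPy illustration of a feedforward network, a loss function, and back-propagation training, with arbitrary choices for the task (XOR), the network size, and the learning rate.

```python
import numpy as np

# A feedforward network with one hidden layer, trained by full-batch
# gradient descent with back-propagation on the XOR problem.
rng = np.random.default_rng(0)

# XOR data: 4 two-dimensional inputs, one binary target each
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: hidden layer with 8 units (an arbitrary choice)
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Loss function: binary cross-entropy, averaged over the batch
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Backward pass: gradients via the chain rule
    dlogits = (p - y) / len(X)           # d loss / d pre-sigmoid output
    dW2 = h.T @ dlogits; db2 = dlogits.sum(0)
    dh = dlogits @ W2.T * (1 - h ** 2)   # derivative of tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(loss, pred.ravel())  # the network typically recovers XOR
```

In the labs this bookkeeping is handled by a library (Keras), which computes the backward pass automatically.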

Course literature

The main book for the course is:

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.

Additional reading consists of excerpts from the following books:



Lectures

The lectures present basic concepts and methods in neural networks and survey applications. Each lecture connects to one or several chapters in the book by Goodfellow, Bengio, and Courville (GBC). The course schedule indicates which chapters you should have read before each lecture.


Labs

The labs give you practical experience in implementing, training, and evaluating neural networks using existing software libraries. There are a total of four labs:

Lab 0 is an optional preparatory lab that you do on-site. For each of the remaining labs there is an on-site introduction by an instructor, but most of the actual work is self-scheduled.

To do labs 1 and 2, you will need to bring your own computer. There are two options:

Using your own software stack. If you want to do labs 1 and 2 using your own software stack, you will need to install the following libraries: Python 3, Jupyter Notebook, NumPy, and Keras. We recommend installing this software in a virtual environment using Virtualenv or Anaconda. It is a good idea to try the installation before the start of the course. If you run into problems, you can get help with the installation during lab 0.
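One way to try the installation beforehand is to check that the listed libraries import cleanly. The snippet below is a small self-check script, not course-provided code; it only looks up the packages named above.

```python
# Self-check: confirm that the lab software stack is importable.
import importlib.util
import sys

# Importable module name -> human-readable name, for the stack listed above
required = {
    "notebook": "Jupyter Notebook",
    "numpy": "NumPy",
    "keras": "Keras",
}

missing = [name for mod, name in required.items()
           if importlib.util.find_spec(mod) is None]

print("Python", sys.version.split()[0])
if missing:
    print("Missing:", ", ".join(missing))
else:
    print("All lab libraries found.")
```

Run it inside your virtual environment; if anything is reported missing, install it there (for example with pip) or ask for help during lab 0.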

Using Azure Notebooks. If you do not want to or are unable to install the above software stack on your own computer, you can also do the labs on Azure Notebooks. Sign in to the service using your LiU-ID, create a new project, and upload the lab code that is provided together with the instructions (linked above). Please note that training networks on Azure’s Free Compute will most likely be slower than training on your own computer, at least if your computer is reasonably modern.

Lab 3 needs to be done on dedicated computers provided on campus.

Individual project

The second (optional) part of the course is an individual project that you formulate together with your PhD supervisor. We anticipate the typical project to be one where you apply neural networks to a concrete problem related to your thesis work. To be acceptable for the course, the project must contain a substantial amount of machine learning; in case of doubt about this, your supervisor is welcome to consult with us (the instructors).

Here is the process for the individual project:

To present your results, you must register for a seminar slot by sending an email to Marco. The email should contain:

The deadline for the registration of seminar slots is 6 December.


Examination

The examination for the course consists of two parts:

Depending on whether or not you choose to do the project, you will receive either 6 or 3 credits for this course.

Dates relevant to the examination:

Lab assignments submitted after the first due date will be graded after the deadline for late submissions in January.

To meet a due date or deadline, it suffices to submit the assignment before 08:00 the first working day after that date.


Schedule

The lectures, lab sessions, and seminars take place in the B-building and the E-building at Campus Valla.

The lectures will be videotaped, and edited videos will be made available via this link (requires login with LiU-ID).

Day | Before session | Session | Location
15/10 13:15–15 | Read GBC, chapters 1–5 | Lecture 0: Machine Learning Basics | Ada Lovelace
15/10 15:15–17 | Install software | Lab session 0 | John von Neumann
21/10 8:15–10 | Read GBC, chapters 6–8 | Lecture 1: Deep Feedforward Networks | Ada Lovelace
23/10 13:15–17 | Read instructions for lab 1 | Lab session 1 | John von Neumann
5/11 8:15–10 | Read GBC, chapter 10 | Lecture 2: Recurrent and Recursive Networks | Ada Lovelace
7/11 13:15–17 | Read instructions for lab 2 | Lab session 2 | John von Neumann
12/11 8:15–10 | Skim Goldberg | Lecture 3: Applications in NLP | Ada Lovelace
14/11 13:15–17 | – | Backup lab session | John von Neumann
21/11 13:15–15 | Read GBC, chapter 9 | Lecture 4: Convolutional Networks | Ada Lovelace
26/11 8:15–10 | – | Lecture 5: Applications in Computer Vision | Ada Lovelace
28/11 13:15–17 | Read instructions for lab 3 | Lab session 3 (group 1) | Olympen
2/12 8:15–12 | Read instructions for lab 3 | Lab session 3 (group 2) | Olympen
3/12 8:15–10 | – | Lecture 6: Adversarial Learning | Ada Lovelace
16/1 13:15–15 | – | Seminar session S1 | Allen Newell
16/1 15:15–17 | – | Seminar session S2 | Visionen (B-building)
17/1 10:15–12 | – | Seminar session S3A | Allen Newell
17/1 10:15–12 | – | Seminar session S3B | Visionen (B-building)