
TDDC78 mid-term evaluation 2016

The course was mid-term evaluated using the muddy card method in the middle of the third week, on Wednesday 20/4/2016, during the 9th lecture, two days after the first lab session.
30 students attended the lecture, and I received 23 cards.

Summary

By and large, the course seems to run very well.
Below I list some concrete issues and comment where appropriate.

Subject, general issues, NSC resources

The subject is generally perceived as interesting.
The excursion to NSC was appreciated (except for one card).
Students enjoy getting to use a real supercomputer.

Schedule

Several cards remarked that the labs start (too) late, i.e., after too many lectures at the beginning (while one card appreciated just that).
Comment: The lectures need to cover architectural concepts and the programming models before the programming labs can start. In next year's course, the Foster parallel program design method could possibly be moved to a later lecture so that MPI and OpenMP can be started earlier, although it logically belongs before the programming part.

Information, Handouts, Literature

The new lab discussion forum is appreciated by several cards.

Lectures

Lectures and the slide material are well appreciated by most students.
A few, however, remarked on the high pace of lecturing (while others state that it is just the right speed), too many slides, fast speaking, and low interactivity.

Comment: You can help reduce the speed and increase the interactivity by asking questions. It is absolutely OK to ask me to explain something again if it was unclear or too fast. If nobody asks, I assume that (almost) everybody could follow.

The live programming examples for OpenMP were appreciated; the same should ideally also be done for MPI.

One card suggested that the Hello-world example code be put on the course homepage; this has now been arranged (see hello_mpi.c for MPI and hello_openmp.c for OpenMP).
Another suggestion was that students should look at the code examples on their own to save lecturing time.
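
For reference, a minimal MPI "Hello world" along the lines of hello_mpi.c could look as follows (a generic sketch, not necessarily the exact code posted on the course page):

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char *argv[])
  {
      int rank, size;
      MPI_Init(&argc, &argv);               /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
      MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
      printf("Hello world from rank %d of %d\n", rank, size);
      MPI_Finalize();                       /* shut down MPI */
      return 0;
  }

Such a program is typically compiled with an MPI compiler wrapper (e.g. mpicc) and launched with mpirun or the cluster's batch system, one process per rank.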

One card complained that the Fibonacci example (for OpenMP tasks) is unrealistic.
Comment: I should have said this more clearly: Fibonacci, although easy to explain and expressible in very little code, is not a typical (HPC) application but rather a stress benchmark for a task-based runtime system, because the tasks are so numerous and so extremely lightweight (just one addition each) that the measured time reflects almost exclusively the overhead of dynamic task creation and scheduling.
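
For illustration, a task-parallel Fibonacci in OpenMP typically looks roughly like the sketch below (a generic version, not necessarily the exact code shown in the lecture): every recursive call is spawned as its own task, so the runtime must create and schedule an enormous number of tiny tasks.

  #include <stdio.h>

  /* Naive recursive Fibonacci: one OpenMP task per recursive call. */
  long fib(int n)
  {
      long a, b;
      if (n < 2)
          return n;
      #pragma omp task shared(a)
      a = fib(n - 1);
      #pragma omp task shared(b)
      b = fib(n - 2);
      #pragma omp taskwait      /* wait for both child tasks */
      return a + b;
  }

  int main(void)
  {
      long result;
      #pragma omp parallel
      {
          #pragma omp single    /* one thread spawns the root call */
          result = fib(30);
      }
      printf("fib(30) = %ld\n", result);
      return 0;
  }

Each task performs essentially one addition, so the measured run time is dominated by task management overhead rather than by useful work; a realistic code would at least add a cutoff and recurse sequentially below some problem size.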

Labs

Labs are considered good, useful, interesting and fun by most students, with an appropriate level of difficulty.
Labs are perceived to help with the understanding of the presented theory of OpenMP and MPI.
A few expressed concern that the labs might be more time-consuming than expected.
One card remarked that the given lab code skeletons are bad.
Several cards remarked that the lab compendium is not detailed enough, for instance regarding how to compile on Triolith, while another card explicitly wrote that it is OK.

Comment: We try to keep the lab documentation updated, but some issues might have been overlooked. We welcome concrete suggestions for improvements or bug fixes; please contact the course assistant directly.


Thanks for all comments!

Christoph Kessler, course leader TDDC78
