TDDB84 Design Patterns

Writing Papers

There are three main topic areas that you may choose from when writing your own papers. The papers will be used to set your grades in the course. For each main topic area, you are required to select a specific topic of your own that deals with specific patterns, application domains, and quality attributes that these patterns may or may not affect.

We will approach this much as you will approach writing a Master's thesis later on: by researching material (starting in seminar 3), doing your own studies with a clear research question (lab 3, seminar 5), and then writing about your findings in a well-structured manner and submitting them for peer review (seminar 6).

Your questions will all have the same format, both to help others understand what your paper is actually about and to help you write a focused, well-written report. Two of the topics are advanced, marked with an (A); the grading rubric for the papers uses this distinction to indicate how papers will be graded.

The format for all your questions will be

How does the implementation of X in Y affect Z as measured by W?

where Y is context dependent, but X will always be a design pattern, Z will be a software quality, and W an instrument or a metric to assess Z.

The topics are:

  • The relationship between software qualities and the use of design patterns and/or principles, in the game FreeCol used in the lab series, or in another domain. You will need to read about definitions of different forms of Software Quality, and research papers on how design patterns have been studied with respect to such qualities. This is a basic topic.

    Here, Y is a single or a couple of related applications in a certain domain, and Z is a software quality such as maintainability or defect density. A question may be How does the implementation of the Factory Method and Builder patterns in FreeCol affect the flexibility and extendibility of the application as measured by metric ___?

  • The relationship between programming paradigms and design patterns, e.g. comparing the Reactive Programming paradigm to the use of the Observer pattern (see Deprecating the Observer Pattern, also found in the reference list under "Design patterns, paradigms and languages"), or a study of design patterns in dynamic languages: implementing design patterns in a dynamic language such as Lisp, Ruby, or JavaScript. This is an advanced topic.

    Here, Y is a programming language or paradigm qualitatively different from C++/Java and procedural object-oriented programming, and Z is a software quality such as understandability. A question may be How does the implementation of an Interpreter in Ruby affect the extendibility of the application as measured by metrics ___?

  • The relationship between application frameworks, metaprogramming and the implementation of design patterns: describing implementations of design patterns such as Proxy, Abstract Factory, and others in application frameworks such as PostSharp, Spring, or other frameworks of your choice. This is an advanced topic.

    Here, Y is an application framework. A question may be How does the implementation of an Abstract Factory compare to a Dependency Injection Framework in terms of the understandability of the application as measured by user evaluations and metrics ____?
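To make the question format above concrete, here is a minimal Python sketch (hypothetical class names, not taken from FreeCol or any course material) of the kind of implementation difference such a question compares: creating objects directly versus through a Factory Method. A study along the lines of the first topic would apply the chosen metric to two such versions of the same functionality.

```python
from abc import ABC, abstractmethod

# Product hierarchy (hypothetical names, for illustration only).
class Unit(ABC):
    @abstractmethod
    def describe(self) -> str: ...

class Settler(Unit):
    def describe(self) -> str:
        return "settler"

class Soldier(Unit):
    def describe(self) -> str:
        return "soldier"

# Without the pattern: callers pick concrete classes directly, so
# adding a new unit type means editing every such call site.
def spawn_directly(kind: str) -> Unit:
    if kind == "settler":
        return Settler()
    return Soldier()

# With the Factory Method: subclasses decide which concrete class to
# instantiate; a new unit type only requires a new creator subclass.
class UnitCreator(ABC):
    @abstractmethod
    def create_unit(self) -> Unit: ...  # the "factory method" hook

    def spawn(self) -> Unit:
        return self.create_unit()

class SettlerCreator(UnitCreator):
    def create_unit(self) -> Unit:
        return Settler()

class SoldierCreator(UnitCreator):
    def create_unit(self) -> Unit:
        return Soldier()
```

In a paper, the two variants would play the role of X in Y, and Z and W would be the quality and metric you choose to compare them with.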

For information on metrics used to relate high-level software quality attributes to object-oriented design, see J. Bansiya and C. Davis, "A hierarchical model for object-oriented design quality assessment," IEEE Transactions on Software Engineering, 28(1):4–17, Jan 2002. For each of the topics above, you will need to read some research papers listed in the literature section, or found using Google Scholar, IEEE Xplore, or Unisearch @ Linköping University Library.
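As a simplified illustration of what such design-level metrics compute, the Python sketch below counts, for each class, how many other classes it references. This is only a toy coupling count that loosely resembles metrics such as DCC or CBO; it is not the exact definition from Bansiya and Davis, and real measurement tools extract the dependency data from source code with much more care.

```python
# Toy coupling count: for each class, the number of *other* classes it
# references. Loosely inspired by coupling metrics such as DCC/CBO,
# but deliberately simplified for illustration.

def coupling_counts(uses: dict[str, set[str]]) -> dict[str, int]:
    """uses maps a class name to the set of class names it references."""
    return {cls: len(refs - {cls}) for cls, refs in uses.items()}

# Hypothetical dependency data, as a parser might extract from source.
example = {
    "Game":   {"Player", "Map", "Unit"},
    "Player": {"Unit"},
    "Unit":   set(),
}
```

Here `coupling_counts(example)` would report that the hypothetical `Game` class is coupled to three others, `Player` to one, and `Unit` to none; a paper would compare such numbers before and after introducing a pattern.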

Grading rubric

See the following requirements document (grading rubric) for more detail on how your paper will be evaluated. These are the requirements that will be used in the peer review, as well as for determining your final grade in the course.


Your paper should be between 5 and 7 A4 pages in total, and you need to use the IEEE conference paper template.

Final submission

After the peer review during seminar 6, you will have additional time to revise your paper according to the feedback given (see the timetable). Your final grades will be set by the course staff, but the better the feedback you provide one another on your papers, the better your chances of receiving a high grade in the course.

Final submissions are sent via an Urkund e-mail address to olale55.liu@analys.urkund.se with subject line [TDDB84_HT2017].

You will need to include a short description of the improvements made to your final paper along with your final submission; this description may be appended at the end of your paper. All papers submitted should be in PDF format with the file name LIUID_TDDB84_UPG1_HT2017.pdf.

Frequently Asked Questions

Example papers

There are some example papers:
  1. a report on comparing the design of a design pattern in Python and C++, using specific metrics.
  2. a report on the relationship between the Factory Method pattern and the reusability and understandability of software.
  3. How Does the Implementation of the Observer Pattern in FreeCol Affect Reusability, Extendibility and Flexibility of the Application as Measured by CBO?

Review of paper 1

Here is how the first of these papers should be graded using the grading rubric above:

5 - The topic is advanced. The paper clearly describes how to implement the Proxy pattern in Python and C++, and justifies why this would affect flexibility as measured by four metrics (DAM, MOA, NOP, and DCC).
Information quality/evidence
4 - Claims are justified by the correct use of relevant and reliable sources. Peer-reviewed scientific publications are few. There is room for better sources on Flexibility.
4 - The report has a mostly logical development of ideas, the hypothesis is presented adequately and there are supporting subsections with counter-arguments to some extent.
Language and form
4 - The paper is fairly easy to follow and the text is for the most part without errors related to spelling, grammar, and consistent use of references and formatting. Some terms are introduced without explanation (e.g. DAM, MOA, NOP, and DCC).
5 - The evaluation is clearly related to the main subject, and is of general interest. The analysis combines a thorough literature overview with empirical comparison of two implementations of Proxy design pattern (in Python and C++).

Review of paper 2

Here is how the second paper should be graded using the grading rubric above:
4. It is hard to fit the submitted paper into the required format. It would have been useful to include a motivation for the chosen method of assessing the effects of a design pattern, as well as for why the specific qualities are measured by the chosen metrics. The author does, however, justify why the design pattern should have these effects in general, and section II mentions a particular study that has reached interesting (contradictory) results on the effects of the Factory Method. Alluding to such results earlier on would have strengthened the introduction.
Information quality/evidence
4. The author uses relevant references well, and uses specific results from the studies, even if there are some errors (e.g. section III.E has an incorrectly used reference, and the abbreviation "IEEE" is incorrectly written in lowercase as "ieee" in the list of references). The list of references is somewhat short, but the ones chosen are very relevant. It would have been useful to include papers on automatically recognizing design patterns in code and formalizing them, as the paper specifically touches upon the general features found in any code that implements the Factory Method pattern. Such studies are mentioned in the survey paper quoted, and it would have been interesting to refer to them.
5. Very well structured and presented overall, with a balanced presentation of the source material and example. Pages 4–5 contain formulas for Understandability (u) and Reusability (r) that are a bit hard to read when squeezed together with tables in a two-column layout. The same information is in a way presented in both tables and text, so I would have settled for the simplified, final expressions of when u and r decrease and increase, respectively.
Language and form
(5). In general, the text is very well written and reads like a scientific publication. The errors I have spotted are few, but for example: "This paper describes how Bansiya and Davis’s well cited [2002 paper on a] software quality suit[e] contradicts previous results regarding the effects the factory method design pattern has on understandability."

Bansiya and Davis do not introduce a "quality suite" but rather a model that describes how to weigh different design-level code metrics to obtain values that are assumed to measure software quality attributes.

5. It is a thorough and well conducted analysis of interesting differences between aggregations of code metrics, and how they relate to features of the Factory Method design pattern. It is generally interesting as a method of understanding the relationship between general properties of design patterns and how to measure their effects, and refers to appropriate previous studies as well. However, the analysis could have been improved by making better use of reference 4.

Review of paper 3

Here is how the third paper should be graded using the grading rubric above:

4. The suitability of the design pattern for the given problem (not given in the introduction, but explained later) is unclear, but the relevance of the concrete instrument (CBO) to the software qualities in question is well presented, as is its relevance for measuring the effect of the chosen design pattern. The topic is not advanced.
Information quality/evidence
5. The author uses multiple high-quality sources that support both the supporting arguments and the counter-arguments in her text. The argumentation is very clear, and demonstrates an ability to use relevant literature with precision and a critical eye.
4. The organization is clear, but the core question is introduced a little late and could be moved towards the introduction. Explanations of the type "this section will cover ..." could be integrated more naturally into the text, so as not to add meta-material; the development of ideas could be made clearer that way. Answering the questions "why study the Observer design pattern? Why use CBO?" very early would help the reader maintain interest. Some parts of the text are not directly motivated by previous sections (in section II.B: "A method called QMOOD presented by Bansiya et. al. [ ... ]") and could be better integrated into the text.
Language and form
(5). The text contains only a few language errors (e.g. "Coupling serves as a definition for all types of interactions between objects but in [12] but"), but it could explain that capital "X" and lower-case "x" are both meant to denote classes in the formula in section II.B.1. Other than that, the text is similar in style to that of a research paper, and terms are sufficiently explained.
5. The analysis clearly describes how the metric used (CBO) indicates the quality studied, and the claims made are supported by high-quality scientific references. The overall analysis demonstrates a mature approach to empirical analysis and an ability to provide a balanced analysis using both empirical data and a thorough review of the literature.

Page responsible: Ola Leifler
Last updated: 2017-08-30