729A27 Natural Language Processing


This page contains the instructions for the lab assignments, and specifies the central concepts and procedures that you are supposed to master after each assignment. For more information about how these contents are examined, see the page on Examination.

General information

Lab assignments should be done in pairs. Please contact the examiner in case you want to work on your own. Unfortunately, we do not generally have the resources necessary to tutor and give feedback on one-person labs.

Instructions: Submit your labs according to the instructions below. Please also read the general rules for hand-in assignments. Before you submit your first lab, you and your lab partner need to sign up in Webreg.

Format of the subject line: 729A27-2018 lab code your LiU-ID your partner’s LiU-ID your lab assistant’s LiU-ID

Example: 729A27-2018 L1 marjo123 erika456 fooba99

Lab assistants for this course:

  • Marco Kuhlmann: marku61
  • Robin Kurtz: robku08
  • Alice Reinaudo: alire41

Feedback: For each lab there are a number of scheduled hours where you can get oral feedback on your work from the lab assistants. If you submit in time for the first due date, you will also get written feedback. In addition to that, you can always get feedback from the examiner (office hours: Mondays, Wednesdays, and Thursdays 13-15 in Building E, Room 3G.476).

Information about notebooks

This course uses Jupyter notebooks for some of the lab assignments. Notebooks let you write and execute Python code in a web browser, and they make it very easy to mix code and text.

Lab environment. To work on a notebook, you need to be logged into one of IDA’s computers, either on-site or via ThinLinc. At the start of each lab session, you have to activate the course’s lab environment by writing the following at the terminal prompt:

source /home/729A27/labs/environment/bin/activate

Download and open the notebook. To start working on a notebook, say L1.ipynb, download the notebook file to your computer and issue the following command at the terminal prompt:

jupyter notebook L1.ipynb

This will show the notebook in your web browser.

Rename the notebook. One of the first things that you should do with a notebook is to rename it, so that we can link the file to your LiU-IDs. Click on the notebook name (next to the Jupyter logo at the top of the browser page) and add your LiU-IDs, like so:

L1-marjo123-erika456

How to work with a notebook. Each notebook consists of a number of so-called cells, which may contain code or text. During the lab you write your own code or text into the cells according to the instructions. When you ‘run’ a code cell (by pressing Shift+Enter), you execute the code in that cell. The output of the code will be shown immediately below the cell.

Check the notebook and submit it. When you are done with a notebook, you should click on Kernel > Restart & Run All to run the code in the notebook and verify that everything works as expected and there are no errors. After this check you can save the notebook and submit it according to the instructions below.

Topic 0: Text segmentation

Text segmentation is the task of segmenting a text into linguistically meaningful units, such as paragraphs, sentences, or words.

Level A

When the target units of text segmentation are words or word-like units, the process is called tokenisation. In this lab you will implement a simple tokeniser for text extracted from Wikipedia articles. The lab also gives you a chance to acquaint yourself with the general framework that we will be using for the remainder of the lab series.

Lab L0: Text segmentation (due 2018-01-19)

Contents

After this lab you should be able to explain and apply the following concepts:

  • tokenisation
  • undersegmentation, oversegmentation
  • precision, recall

After this lab you should be able to perform the following procedures:

  • segment text into tokens (words and word-like units) using regular expressions
  • compare an automatic tokenisation with a gold standard
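
As an illustration of these two procedures, the following sketch tokenises a text with a single regular expression and compares the result with a gold standard in terms of precision and recall. The pattern and the representation of tokens as character spans are illustrative choices for this example, not the ones prescribed in the lab.

import re

# A very simple tokeniser: sequences of word characters, or single
# non-space characters. The pattern is only an illustration.
TOKEN_PATTERN = re.compile(r"\w+|[^\w\s]")

def tokenise(text):
    """Return the tokens in `text` as a set of (start, end) character spans."""
    return {(m.start(), m.end()) for m in TOKEN_PATTERN.finditer(text)}

def precision_recall(predicted, gold):
    """Compare a predicted tokenisation with a gold-standard one.

    Both arguments are sets of character spans; a span counts as correct
    only if it matches a gold span exactly, so both oversegmentation and
    undersegmentation lower the scores.
    """
    correct = len(predicted & gold)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

text = "Wikipedia was launched in 2001."
gold = {(0, 9), (10, 13), (14, 22), (23, 25), (26, 30), (30, 31)}
print(precision_recall(tokenise(text), gold))   # (1.0, 1.0)

On this example sentence the tokeniser matches the gold standard exactly; an undersegmented output (for example, treating "2001." as a single token) would lower both scores.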

Topic 1: Text classification

Text classification is the task of categorising text documents into predefined classes.

Level A

In this lab you will implement two simple text classifiers: the Naive Bayes classifier and the averaged perceptron classifier. You will evaluate these classifiers using accuracy, and experiment with different document representations. The concrete task that you will be working with is to classify movie reviews as either positive or negative.

Lab L1: Text classification (due 2018-01-26)

Contents

After this lab you should be able to explain and apply the following concepts:

  • accuracy
  • Naive Bayes classifier
  • averaged perceptron classifier

After this lab you should be able to perform the following procedures:

  • evaluate a text classifier based on accuracy
  • learn a Naive Bayes classifier from data
  • implement an averaged perceptron classifier
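
As a rough illustration of the first two procedures, the sketch below learns a Naive Bayes classifier with add-one smoothing from tokenised documents and measures accuracy on predictions; all function and variable names are made up for this example and do not come from the lab materials.

from collections import Counter, defaultdict
import math

def train_naive_bayes(docs, labels):
    """Estimate log priors and add-one smoothed log likelihoods.

    `docs` is a list of token lists, `labels` the corresponding classes.
    """
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)          # class -> word -> count
    vocab = set()
    for tokens, label in zip(docs, labels):
        word_counts[label].update(tokens)
        vocab.update(tokens)
    log_prior = {c: math.log(n / len(labels)) for c, n in class_counts.items()}
    log_lik = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        log_lik[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                      for w in vocab}
    return log_prior, log_lik

def predict(tokens, log_prior, log_lik):
    """Return the class with the highest posterior log probability;
    words not seen during training are simply ignored."""
    def score(c):
        return log_prior[c] + sum(log_lik[c][w] for w in tokens if w in log_lik[c])
    return max(log_prior, key=score)

def accuracy(predictions, gold):
    """Fraction of documents that receive the correct class."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

docs = [["great", "movie"], ["boring", "plot"]]
labels = ["pos", "neg"]
log_prior, log_lik = train_naive_bayes(docs, labels)
print(predict(["great", "movie"], log_prior, log_lik))   # 'pos'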

Level B

In this lab you will implement the missing parts of a third text classifier, the maximum entropy classifier. Your main focus will be on the conversion of the data into the matrix format that is required by standard gradient search optimisers. You will evaluate your implementation on the same task as in the Level A lab.

Lab L1X: Maximum entropy classification (due 2018-01-26)

Contents

After this lab you should be able to explain and apply the following concepts:

  • accuracy
  • maximum entropy classifier (advanced)

After this lab you should be able to perform the following procedures:

  • evaluate a text classifier based on accuracy
  • implement the core parts of a maximum entropy classifier (advanced)
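
The sketch below illustrates the kind of conversion the description above refers to: building a vocabulary and turning tokenised documents into a document-term count matrix plus a label vector, which is the format most gradient-based optimisers expect. The array layout and helper names are assumptions made for this example, not the format prescribed by the lab.

import numpy as np

def build_vocab(docs):
    """Map each word in the training documents to a column index."""
    vocab = {}
    for tokens in docs:
        for token in tokens:
            vocab.setdefault(token, len(vocab))
    return vocab

def to_matrices(docs, labels, vocab, classes):
    """Convert documents and labels into numpy arrays.

    X[i, j] holds how often word j occurs in document i;
    y[i] holds the index of document i's class in `classes`.
    """
    X = np.zeros((len(docs), len(vocab)))
    y = np.zeros(len(docs), dtype=int)
    for i, (tokens, label) in enumerate(zip(docs, labels)):
        for token in tokens:
            if token in vocab:              # words unseen in training are dropped
                X[i, vocab[token]] += 1.0
        y[i] = classes.index(label)
    return X, y

For large document collections a sparse matrix would normally be preferred over the dense array used here.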

Topic 2: Language modelling

Language modelling is the task of building models of which words are more or less likely to occur in a language.

Level A

In this lab you will experiment with n-gram models. You will test various parameters that influence the quality of these models and estimate models using maximum likelihood estimation with additive smoothing. The data set that you will be working on is the set of Arthur Conan Doyle’s novels about Sherlock Holmes.

Lab L2: Language modelling (due 2018-02-02)

Contents

After this lab you should be able to explain and apply the following concepts:

  • n-gram model
  • entropy
  • additive smoothing

After this lab you should be able to perform the following procedures:

  • estimate n-gram probabilities using the maximum likelihood method
  • estimate n-gram probabilities using additive smoothing
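
For concreteness, a minimal sketch of both estimation methods for bigrams is given below, together with one common way of measuring entropy (the average negative log2 probability per bigram). The function names and the add-k formulation are illustrative; the lab may define things slightly differently.

from collections import Counter
import math

def bigram_counts(sentences):
    """Count bigrams and their history words over a list of token lists."""
    bigrams, histories = Counter(), Counter()
    for tokens in sentences:
        for w1, w2 in zip(tokens, tokens[1:]):
            bigrams[w1, w2] += 1
            histories[w1] += 1
    return bigrams, histories

def prob_mle(w1, w2, bigrams, histories):
    """Maximum likelihood estimate of P(w2 | w1).
    Unseen bigrams get probability zero, which is why smoothing is needed."""
    return bigrams[w1, w2] / histories[w1] if histories[w1] else 0.0

def prob_additive(w1, w2, bigrams, histories, vocab_size, k=1.0):
    """Additive (add-k) smoothing of P(w2 | w1)."""
    return (bigrams[w1, w2] + k) / (histories[w1] + k * vocab_size)

def entropy(sentences, prob):
    """Average negative log2 probability per bigram under the model `prob`."""
    log_sum, n = 0.0, 0
    for tokens in sentences:
        for w1, w2 in zip(tokens, tokens[1:]):
            log_sum -= math.log2(prob(w1, w2))
            n += 1
    return log_sum / n if n else 0.0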

Level C

Most younger users type Chinese on a standard QWERTY keyboard by entering the pronunciation of each character in Pinyin. This became possible with the advent of language models that automatically guess the intended character. In this assignment, you will write a program that reads in pronunciations (simulating what a user might type) and predicts the correct Chinese characters.

Lab L2X: Chinese character prediction (due 2018-02-02)

Contents

After this lab you should be able to explain and apply the following concepts:

  • n-gram model
  • Witten–Bell smoothing

After this lab you should be able to perform the following procedures:

  • implement an n-gram model with Witten–Bell smoothing
  • implement the core parts of an autocompletion algorithm
  • evaluate a language model in the context of an autocompletion application
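
A sketch of interpolated Witten–Bell smoothing for a bigram model is given below, together with a greedy prediction helper. The class name, the start marker, and the greedy strategy are assumptions made for this example; the lab may require a different interpolation order or a full search over candidate sequences.

from collections import Counter, defaultdict

class WittenBellBigram:
    """Interpolated Witten–Bell smoothing for a bigram model (a sketch).

    P(w | h) = lam(h) * P_ML(w | h) + (1 - lam(h)) * P_uni(w), where
    1 - lam(h) = T(h) / (c(h) + T(h)), T(h) is the number of distinct
    word types seen after the history h, and c(h) is the count of h
    as a history.
    """

    def __init__(self, sentences):
        self.bigrams = Counter()            # (h, w) -> count
        self.histories = Counter()          # h -> count as a history
        self.followers = defaultdict(set)   # h -> distinct next words
        self.words = Counter()              # w -> count (unigram part)
        for tokens in sentences:
            self.words.update(tokens)
            for h, w in zip(tokens, tokens[1:]):
                self.bigrams[h, w] += 1
                self.histories[h] += 1
                self.followers[h].add(w)
        self.n_tokens = sum(self.words.values())

    def prob(self, h, w):
        p_uni = self.words[w] / self.n_tokens
        c_h, t_h = self.histories[h], len(self.followers[h])
        if c_h == 0:                        # unseen history: back off entirely
            return p_uni
        lam = c_h / (c_h + t_h)
        return lam * self.bigrams[h, w] / c_h + (1 - lam) * p_uni

def predict_greedy(model, candidates_per_syllable, start="<s>"):
    """For each syllable, pick the candidate character with the highest
    smoothed probability given the previously chosen character. A full
    search (e.g. Viterbi) would generally give better results."""
    chosen = [start]
    for candidates in candidates_per_syllable:
        chosen.append(max(candidates, key=lambda w: model.prob(chosen[-1], w)))
    return chosen[1:]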

Topic 3: Part-of-speech tagging

Part-of-speech tagging is the task of labelling words (tokens) with parts of speech such as noun, adjective, and verb.

Level A

In this lab you will implement a part-of-speech tagger based on the averaged perceptron and evaluate it on the Stockholm Umeå Corpus (SUC), a Swedish corpus containing more than 74,000 sentences (1.1 million tokens), manually annotated with, among other things, parts of speech.

Lab L3: Part-of-speech tagging (due 2018-02-09)

Contents

After this lab you should be able to explain and apply the following concepts:

  • sequence labelling
  • averaged perceptron classifier

After this lab you should be able to perform the following procedures:

  • implement a part-of-speech tagger based on the averaged perceptron
  • evaluate a part-of-speech tagger based on accuracy
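
A sketch of the main ingredients is given below: a small feature function, greedy left-to-right tagging with a linear model, and the perceptron update that is applied when a prediction is wrong (the averaging of weights over updates is omitted). The feature templates and data structures are assumptions for this example, not the ones fixed by the lab.

def features(words, i, prev_tag):
    """A small illustrative feature set for the word at position i."""
    return [
        "word=" + words[i],
        "suffix3=" + words[i][-3:],
        "prev_tag=" + prev_tag,
    ]

def tag(words, weights, tagset):
    """Greedy left-to-right tagging with a linear model.

    `weights` maps (feature, tag) pairs to floats, as learned by
    the (averaged) perceptron.
    """
    tags, prev = [], "<s>"
    for i in range(len(words)):
        feats = features(words, i, prev)
        prev = max(tagset,
                   key=lambda t: sum(weights.get((f, t), 0.0) for f in feats))
        tags.append(prev)
    return tags

def perceptron_update(weights, feats, gold_tag, pred_tag):
    """Plain perceptron update on a wrong prediction; averaging the
    weight vectors over all updates is left out of this sketch."""
    if pred_tag != gold_tag:
        for f in feats:
            weights[f, gold_tag] = weights.get((f, gold_tag), 0.0) + 1.0
            weights[f, pred_tag] = weights.get((f, pred_tag), 0.0) - 1.0

Per-token accuracy is then simply the fraction of tokens that receive their gold-standard tag.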

Level B

In the advanced part of this lab, you will practice your skills in feature engineering, the task of identifying useful features for a machine learning system – in this case the part-of-speech tagger that you implemented in the Level A lab.

Lab L3X: Feature engineering for part-of-speech tagging (due 2018-02-09) (same file as for the Level A lab)

Contents

After this lab you should be able to explain and apply the following concepts:

  • averaged perceptron classifier
  • feature engineering (advanced)

After this lab you should be able to perform the following procedures:

  • improve a part-of-speech tagger using feature engineering (advanced)
  • evaluate a part-of-speech tagger based on accuracy
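
To make the idea concrete, the sketch below lists a few additional feature templates one might try on top of a basic word/previous-tag model; whether any of them actually improve the tagger is exactly what the feature-engineering task asks you to find out. The template names are made up for this example.

def extended_features(words, i, prev_tag):
    """Illustrative extra feature templates for the word at position i."""
    word = words[i]
    return [
        "word=" + word.lower(),
        "prev_tag=" + prev_tag,
        "suffix3=" + word[-3:],
        "prefix2=" + word[:2],
        "is_capitalised=" + str(word[:1].isupper()),
        "is_digit=" + str(word.isdigit()),
        "prev_word=" + (words[i - 1].lower() if i > 0 else "<s>"),
        "next_word=" + (words[i + 1].lower() if i + 1 < len(words) else "</s>"),
    ]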

Topic 4: Syntactic analysis

Syntactic analysis, also called syntactic parsing, is the task of mapping a sentence to a formal representation of its syntactic structure.

Level A

In this lab you will implement a simple transition-based dependency parser based on the averaged perceptron and evaluate it on the English Web Treebank from the Universal Dependencies Project.

Lab L4: Syntactic analysis (due 2018-02-16)

Contents

After this lab you should be able to explain and apply the following concepts:

  • averaged perceptron classifier
  • transition-based dependency parsing
  • accuracy, attachment score

After this lab you should be able to perform the following procedures:

  • implement a transition-based dependency parser based on the averaged perceptron
  • evaluate a dependency parser based on unlabelled attachment score
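
As an illustration, the sketch below shows how one common transition system (arc-standard style) manipulates a parser configuration, and how the unlabelled attachment score is computed. The transition names and the configuration representation are assumptions for this example and may differ from what the lab uses.

def apply_transition(transition, stack, buffer, heads):
    """Apply one transition to a configuration (arc-standard style).

    `stack` and `buffer` hold token positions; `heads[i]` records the
    head assigned to token i.
    """
    if transition == "SH":        # shift: push the next buffer word
        stack.append(buffer.pop(0))
    elif transition == "LA":      # left-arc: second-topmost gets top as head
        dependent = stack.pop(-2)
        heads[dependent] = stack[-1]
    elif transition == "RA":      # right-arc: topmost gets second-topmost as head
        dependent = stack.pop()
        heads[dependent] = stack[-1]
    return stack, buffer, heads

def uas(predicted_heads, gold_heads):
    """Unlabelled attachment score: the fraction of tokens whose
    predicted head matches the gold-standard head."""
    correct = sum(p == g for ps, gs in zip(predicted_heads, gold_heads)
                  for p, g in zip(ps, gs))
    total = sum(len(gs) for gs in gold_heads)
    return correct / total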

Level C

In this lab you will implement the Eisner algorithm for projective dependency parsing and use it to transform (possibly non-projective) dependency trees to projective trees. This transformation is necessary to be able to apply algorithms for projective dependency parsing to treebanks that may contain non-projective trees.

Lab L4X: Projectivisation (due 2018-02-16)

Contents

After this lab you should be able to explain and apply the following concepts:

  • Eisner algorithm
  • projectivisation, lifting (advanced)

After this lab you should be able to perform the following procedures:

  • implement the Eisner algorithm (advanced)
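
The Eisner algorithm itself is too long to sketch here, but the property that projectivisation restores can be illustrated: a dependency tree is projective if no two of its arcs cross. The head-array representation in the following check is an assumption made for this example.

def is_projective(heads):
    """Return True if the dependency tree given by `heads` is projective.

    `heads[d]` is the position of the head of token d; position 0 is an
    artificial root whose own entry is ignored. The tree is projective
    if no two arcs cross.
    """
    arcs = [(min(h, d), max(h, d)) for d, h in enumerate(heads) if d > 0]
    for (l1, r1) in arcs:
        for (l2, r2) in arcs:
            # two arcs cross if exactly one endpoint of the second arc
            # lies strictly inside the span of the first
            if l1 < l2 < r1 < r2:
                return False
    return True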

Topic 5: Semantic analysis

These labs focus on word space models and semantic similarity.

Level A

In this lab you will explore a word space model trained on the Swedish Wikipedia using Google’s word2vec tool. You will learn how to use the model to measure the semantic similarity between words and apply it to solve a simple word analogy task.

Lab L5: Semantic analysis (due 2018-02-23)

Contents

After this lab you should be able to explain and apply the following concepts:

  • word space model, cosine distance, semantic similarity
  • accuracy

After this lab you should be able to perform the following procedures:

  • use a pre-trained word space model to measure the semantic similarity between two words
  • use a pre-trained word space model to solve word analogy tasks
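
A minimal sketch of both procedures is given below, assuming that the model is available as a plain mapping from words to numpy vectors (for example, exported from the trained word2vec model). Cosine distance is simply one minus the cosine similarity computed here.

import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def most_similar(vectors, query, exclude=()):
    """Return the word whose vector is most similar to the query vector."""
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine_similarity(vectors[w], query))

def analogy(vectors, a, b, c):
    """Solve 'a is to b as c is to ?' by vector arithmetic: find the
    word closest to vec(b) - vec(a) + vec(c)."""
    query = vectors[b] - vectors[a] + vectors[c]
    return most_similar(vectors, query, exclude={a, b, c})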

Level B

In this lab you will use state-of-the-art NLP libraries to train word space models on text data and evaluate them on a standard task, the synonym part of the Test of English as a Foreign Language (TOEFL).

Lab L5X: Semantic analysis (due 2018-02-23)

After this lab you should be able to explain and apply the following concepts:

  • word space model, cosine distance, semantic similarity
  • accuracy

After this lab you should be able to perform the following procedures:

  • train a word space model on text data (advanced)
  • use a word space model to solve a synonym prediction task (advanced)
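
As a rough sketch, and assuming the gensim library (whose exact parameter names vary between versions), training and a TOEFL-style synonym evaluation could look as follows. The question format and the handling of out-of-vocabulary words are assumptions made for this example.

from gensim.models import Word2Vec

def train_model(sentences):
    """Train a word2vec model on a list of tokenised sentences,
    using gensim's default hyperparameters."""
    return Word2Vec(sentences)

def toefl_accuracy(model, questions):
    """Each question is a triple (target, candidates, answer); the model's
    guess is the candidate most similar to the target word. Words missing
    from the model's vocabulary would need extra handling here."""
    correct = 0
    for target, candidates, answer in questions:
        guess = max(candidates, key=lambda c: model.wv.similarity(target, c))
        correct += (guess == answer)
    return correct / len(questions)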

Reflection paper

After having completed all lab assignments, you are asked to write an individual reflection paper. The purpose of this assignment is to give you an opportunity to think about what you have learned from the labs. The paper should have three components:

  • your description of your work with the labs, with a focus on those aspects that you consider most important
  • your analysis of your experience based on concepts from the course
  • your conclusions regarding what you take away from this part of the course

For more detailed information, see the guide on Reflection papers.

Instructions: Write a paper according to the given specifications. The length of your paper should be around 1,000 words (approximately 2 pages). Submit your paper as a PDF document.

Due date: 2018-03-17

Format of the subject line: 729A27-2018 LR your LiU-ID marku61

Example: 729A27-2018 LR marjo123 marku61

Feedback: You will get written feedback on your paper from the examiner.


Page responsible: Marco Kuhlmann
Last updated: 2017-12-14