
729G78 Artificial Intelligence

Lab 1: Intelligent Agents

Purpose

The purpose of this lab is to get a concrete idea of what an agent is, and how intelligence can be viewed from an AI perspective. You will be required to familiarize yourself with the terminology used in the literature by discussing the task and your specific solution.

Description

In this assignment, your task is to program a Pacman agent that can clear all the food in a room of arbitrary size. All coding is done in Python 3 using Jupyter Notebook. Read the instructions carefully before starting.

The actions that can be performed by a Pacman agent are:

Action command    Effect
GoForward    Take one step forward
GoRight    Turn 90 degrees right and take one step forward
GoLeft    Turn 90 degrees left and take one step forward
GoBack    Turn 180 degrees and take one step forward
Stop    Shut down the agent
Any other command    No effect

Your Pacman agent cannot see what is ahead of it; it can only see what is at its current location. A location can contain food, a wall, or nothing. The agent receives percepts about its environment in the form of tuples like ('clear', 'bump'). The first element says whether the location contains food or not, and the second whether the agent's previous action resulted in a collision. In this lab, your Pacman agent automatically eats the food upon arrival, so the first element will always appear to be 'clear'. The second element has the value 'bump' if and only if the previous action resulted in a collision with a wall; otherwise it has the value None. Upon collision, the agent aborts its action, thus neither turning nor moving.
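As a concrete illustration of the percept format, the following sketch shows one way to read off the bump flag from a percept tuple. The helper name `bumped` is ours for illustration and is not part of the lab code:

```python
def bumped(percept):
    """Return True if the previous action collided with a wall.

    A percept is a tuple like ('clear', 'bump') or ('clear', None):
    the first element is the food status (always 'clear' in this lab),
    the second is 'bump' on a collision and None otherwise.
    """
    _, bump = percept
    return bump == 'bump'

print(bumped(('clear', 'bump')))  # True: the agent neither turned nor moved
print(bumped(('clear', None)))    # False: the previous action succeeded
```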

There are two example agents in the lab code. RandomAgent is the simplest of agents as it will always return a random action. ReflexAgentWithState is slightly more advanced since it can remember its previous action through a simple state.
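To give a feel for how little a random agent needs, here is a rough sketch of such an agent; the actual RandomAgent in the lab code may differ in names and structure:

```python
import random

# The action commands from the table above.
ACTIONS = ['GoForward', 'GoRight', 'GoLeft', 'GoBack']

class RandomAgentSketch:
    """Illustrative sketch: ignores percepts and never stops on its own."""

    def choose_action(self):
        # No state, no memory: just pick any action at random.
        return random.choice(ACTIONS)

agent = RandomAgentSketch()
print(agent.choose_action() in ACTIONS)  # True
```

Because it never returns Stop, a purely random agent can never complete the task, which is why some form of state is needed.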

The main task is to devise a strategy that will allow the agent to eat all the food in the room and to stop when all the food has been consumed. You will be implementing this strategy in the class AgentWithState. To solve the exercise, you will have to implement (at least) its three core methods:

  • update_state_with_percept The agent updates its state based on what it sees (the percept)
  • choose_action The agent returns the action that it wants to perform
  • update_state_with_action The agent updates the state based on the action it performed

These three methods run in order on every iteration of the main loop. Note that in choose_action the agent chooses an action to perform (possibly based on its internal state), but it must not update its internal state within this method. Keep in mind that an action may fail (for instance by bumping into a wall), and the agent does not find out until update_state_with_percept runs again with the next percept.
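The call order described above can be sketched as follows. The driver code here is an assumption based on the method names given in the instructions, not the lab's actual main program:

```python
class LoggingAgent:
    """Minimal agent that records the order in which its methods are called."""

    def __init__(self):
        self.calls = []

    def update_state_with_percept(self, percept):
        self.calls.append('percept')  # incorporate what was just observed

    def choose_action(self):
        self.calls.append('choose')   # decide only; no state changes here
        return 'Stop'

    def update_state_with_action(self, action):
        self.calls.append('action')   # record what the agent tried to do

agent = LoggingAgent()

# One iteration of the main loop: percept in, action out, state updated.
agent.update_state_with_percept(('clear', None))
action = agent.choose_action()
agent.update_state_with_action(action)

print(agent.calls)  # ['percept', 'choose', 'action']
```

Note that update_state_with_action runs before the agent knows whether the action succeeded; any correction for a bump has to happen in update_state_with_percept on the next iteration.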

You will have to decide what the agent needs to "remember" to complete its task by modifying the agent's nested class State. For example, consider the following:

  • Does it need to keep track of its direction?
  • Does it need to remember some representation of the entire environment?
  • Does it need to remember its previous action or actions?
  • Does it need to keep track of steps/turns/bumps?

You can assume that the agent's start position is the lower left corner, facing east.
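If you choose to track heading and position, the bookkeeping might look like the sketch below. It assumes the start position and orientation stated above; the class and attribute names are illustrative, not the lab's actual State class, and it leaves bump correction (which belongs in update_state_with_percept) as a simple flag:

```python
DIRECTIONS = ['north', 'east', 'south', 'west']          # clockwise order
TURN = {'GoForward': 0, 'GoRight': 1, 'GoBack': 2, 'GoLeft': 3}  # quarter turns clockwise
MOVE = {'north': (0, 1), 'east': (1, 0), 'south': (0, -1), 'west': (-1, 0)}

class State:
    """Illustrative state tracking heading and grid position."""

    def __init__(self):
        self.heading = 'east'    # the agent starts facing east
        self.position = (0, 0)   # lower-left corner of the room

    def apply(self, action, bumped=False):
        """Update heading/position, assuming the action succeeded unless bumped."""
        if bumped or action not in TURN:
            return  # on a bump the agent neither turns nor moves
        i = (DIRECTIONS.index(self.heading) + TURN[action]) % 4
        self.heading = DIRECTIONS[i]
        dx, dy = MOVE[self.heading]
        x, y = self.position
        self.position = (x + dx, y + dy)

s = State()
s.apply('GoForward')              # east to (1, 0)
s.apply('GoLeft')                 # turn to north, step to (1, 1)
print(s.heading, s.position)      # north (1, 1)
```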

Getting started

  1. Create a directory for this lab and navigate into it using the terminal
  2. Copy the lab files from the course directory:
    $ cp -r /courses/729G78/Lab1/* .
  3. Rename Lab1_LiU-ID-1_LiU-ID-2.ipynb to match your group's LiU IDs
  4. Rename the exercises document Lab1Exercises_LiU-ID-1_LiU-ID-2.odt to match your group's LiU IDs and open it
  5. Activate the lab environment. Note: The environment needs to be activated every time a new terminal is opened.
    $ source /courses/729G78/labs/environment/bin/activate
  6. Run the notebook:
    $ jupyter notebook Lab1_LiU-ID-1_LiU-ID-2.ipynb

Working on your own computer

  • Unix
    1. Create environment
      $ python3 -m venv /path/to/my/lab/environment
    2. Activate the environment
      $ source /path/to/my/lab/environment/bin/activate
    3. Confirm that the environment is active (should point to your environment)
      $ which python
    4. Install libs using pip
      $ pip install notebook pygame
  • Windows PowerShell
    1. Create environment
      $ python -m venv /path/to/my/lab/environment
    2. Activate the environment
      $ /path/to/my/lab/environment/Scripts/Activate.ps1
      On Windows it may be necessary to adjust the execution policy settings:
      $ Set-ExecutionPolicy Unrestricted -Scope Process
    3. Confirm that the environment is active (should point to your environment)
      $ pip -V
    4. Install libs using pip
      $ pip install notebook pygame
  • Download and unzip the lab files: Lab1.zip
  • Navigate into the unzipped directory and run the notebook:
    $ python3 -m notebook Lab1_LiU-ID-1_LiU-ID-2.ipynb
    or, in Windows PowerShell:
    $ python -m notebook Lab1_LiU-ID-1_LiU-ID-2.ipynb
  • Note: The environment will need to be activated again every time you close the terminal.

Hand-in

  1. Complete the exercises listed in the Jupyter Notebook. The implemented agent should work in a world of arbitrary size. It's a good idea to test all the levels in the layout directory.
  2. The discussion in the exercises must be relevant, relate to the course literature, and be sufficiently motivated.
  3. Before handing in: Check that your code is working for a few different layouts. Try to describe to your lab partner what, why, and how your agent keeps track of different aspects in the environment.
  4. Upload your modified notebook (the .ipynb file) and the exercises document (as a PDF) to Lisam.


Page manager: Robin Keskisärkkä