TDDC17 Artificial Intelligence
TDDC17 Lab 1
Aim
The "agent" paradigm has rapidly become one of the major conceptual frameworks for understanding and structuring research activities within Artificial Intelligence. In addition, many software frameworks are being introduced and used both in research and industry to control the complexity of sophisticated distributed software systems where the nodes exhibit behaviors often associated with intelligent behavior.
The purpose of this lab is to introduce the agent concept by allowing you to familiarize yourself with a software agent environment and world simulator. You will program an agent and implement its behaviour to provide cleaning capabilities in the vacuum cleaning world. The behaviour will then be compared to that of some simple reference agents.
Preparation
Read chapter 2 in the course book. Pay particular attention to section 2.3 (agent environments) and section 2.4 (different agent types). After this, read this lab description in detail.
Lab assignment
Your assignment is to construct a vacuum agent. The agent should be able to remove all the dirt in a rectangular world of varying size with a random distribution of dirt. When the world is cleaned the agent should return to the start position and shut down.
Lab system
The lab system consists of the environment simulator and the agent. You are given a simple example of a vacuum cleaner agent called MyVacuumAgent.java here. What you have to do is improve the code controlling the behaviour of the agent to make it perform better.
The description of how to set up and run the code is available towards the bottom of the page.
The environment simulator
You can test your agent in the simulator GUI program. The simulator loads an agent (yours will be implemented in MyVacuumAgent.class) and executes it in a generated environment. Agents execute actions in the environment depending on what they can observe (perceive). The actions influence the environment, which in turn generates new sensory impressions for the agents.
The simulator handles this interaction in a discrete manner; the world advances in steps, where each step can be described as follows:
- The agent perceives the world
- The agent decides on an action
- The action is executed
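In code form, a single step can be sketched roughly as follows. This is an illustrative sketch in the AIMA-Java style that the lab framework builds on, not the simulator's actual source; the method names on the environment object are assumptions:

// Illustrative sketch of the simulator's discrete step loop. The method
// names are assumptions in the AIMA-Java style, not the simulator's code.
while (!environment.isDone()) {
    Percept percept = environment.getPerceptSeenBy(agent); // 1. the agent perceives the world
    Action action = agent.execute(percept);                // 2. the agent decides on an action
    environment.executeAction(agent, action);              // 3. the action is executed
}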
The implementation of the simulator
The simulator supplied for this lab is based on the AIMA-Java framework, but it is not identical to the original version available on the web.
The interface
Basically, the agents may perceive dirt or obstacles in the squares, and they can act by moving between the squares and removing dirt.
Percepts
A percept for the vacuum agent consists of three values:
- bump: equal to true if the agent has bumped into an obstacle, else false.
- dirt: equal to true if the current square contains dirt, else false.
- home: equal to true if the agent is at the start position, else false.
These Boolean values are obtained with the following code, located in the execute method of an agent:
DynamicPercept p = (DynamicPercept) percept;
Boolean bump = (Boolean) p.getAttribute("bump");
Boolean dirt = (Boolean) p.getAttribute("dirt");
Boolean home = (Boolean) p.getAttribute("home");
Note that this is the only data that the agent gets from the simulator.
Output
The output from the agent's program (its return value) is one of the following actions:
- LIUVacuumEnvironment.ACTION_MOVE_FORWARD: moves the agent one step forward in its current direction.
- LIUVacuumEnvironment.ACTION_TURN_LEFT: makes the agent turn left.
- LIUVacuumEnvironment.ACTION_TURN_RIGHT: makes the agent turn right.
- LIUVacuumEnvironment.ACTION_SUCK: cleans the dirt from the agent's current location.
- NoOpAction.NO_OP: informs the simulator that the agent has finished cleaning.
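Putting percepts and actions together, the body of an execute method could look roughly like the simple reflex sketch below. It only uses identifiers introduced above, but the decision logic itself is just an illustration, not a solution to the assignment:

// Illustrative simple reflex behaviour, not a complete solution.
public Action execute(Percept percept) {
    DynamicPercept p = (DynamicPercept) percept;
    Boolean bump = (Boolean) p.getAttribute("bump");
    Boolean dirt = (Boolean) p.getAttribute("dirt");
    Boolean home = (Boolean) p.getAttribute("home"); // unused in this trivial sketch

    if (dirt) {
        return LIUVacuumEnvironment.ACTION_SUCK;       // always clean the current square first
    } else if (bump) {
        return LIUVacuumEnvironment.ACTION_TURN_RIGHT; // turn away after hitting a wall
    } else {
        return LIUVacuumEnvironment.ACTION_MOVE_FORWARD;
    }
}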
GUI
The provided environment simulator application is presented in Figure 1.
Figure 1: Example of an environment in the vacuum agents' world.
The main window of the application is divided into two parts: the environment visualization pane on the left and the text output pane, which displays the redirected System.out. Certain messages from the simulator are printed in this pane, and it is also useful for debugging your agent's code.
In the environment pane the black squares represent obstacles. Trying to move an agent into an obstacle results in perceiving bump equal to true. The grey squares represent dirt. Moving into such a square results in perceiving dirt equal to true. The graphical representation of the agent visualizes its current direction.
The following drop-down menus are available to set the simulation parameters (from left):
- Environment size: can be set to 5x5, 10x10, 15x15, 20x20, 5x10, and 10x5. Your agent should not be tailored to any specific size, though for convenience you can assume a maximum size of 20x20.
- Hinder probability: indirectly specifies the number of obstacles in the environment (in this lab it should be set to 0).
- Dirt probability: indirectly specifies the amount of dirt in the environment.
- Random environment: specifies whether a new environment should be generated for every run. Disabling it is useful for testing an agent in the same environment several times.
- Type of agent: lists the available agent type classes. The first one on the list is the agent you will modify.
- Delay: specifies the delay between each simulation step.
The following buttons are available (from left):
- Clear: clears the text output pane.
- Prepare: generates a random environment based on the selected parameters.
- Run: starts the simulation.
- Step: performs one step of the simulation at a time. Make sure to execute Prepare first.
- Cancel: terminates the simulation.
Program
The interesting part of the agent is the execute method. It is a function that takes a percept as input and returns an action as output.
The simulator computes what the agent can perceive (a percept), applies the function (your program) to the percept, and executes the resulting action. Note (again) that the agent is not allowed to change the world directly, for example via some global reassignment.
As you can see from the definitions in the source code, the RandomVacuumAgent and ReactiveVacuumAgent (here) do not remember what they have done or experienced. If we want the agent to have a memory, we have to store information "inside" the agent.
Figure 2: Agent with internal state
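As a hedged sketch of what storing information "inside" the agent can mean in practice: the agent class keeps fields that survive between calls to execute. The enum, fields and trivial logic below are our own illustration, not the code of the provided MyVacuumAgent:

// Illustrative sketch of internal state; the enum, fields and logic are
// examples only, not the provided MyVacuumAgent code.
private enum State { CLEANING, GOING_HOME }
private State state = State.CLEANING;
private int steps = 0; // memory that persists between calls to execute

public Action execute(Percept percept) {
    DynamicPercept p = (DynamicPercept) percept;
    Boolean dirt = (Boolean) p.getAttribute("dirt");
    Boolean home = (Boolean) p.getAttribute("home");

    steps++; // unlike the reactive agents, we remember how much we have done

    if (dirt) {
        return LIUVacuumEnvironment.ACTION_SUCK;
    }
    if (state == State.GOING_HOME && home) {
        return NoOpAction.NO_OP; // back at the start square: shut down
    }
    // A real agent would use its stored state here to cover the grid
    // systematically and to decide when to switch to GOING_HOME.
    return LIUVacuumEnvironment.ACTION_MOVE_FORWARD;
}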
Scoring
The performance of an agent is measured by a score. The initial score of the agent is -1000, and each action changes the score: sucking up dirt increases it by 100 points, shutting down at the starting square gives 1000 points, and all other actions decrease the score by one point.
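For example, an agent that sucks up 8 pieces of dirt, performs 200 movement and turning actions, and then shuts down on the start square ends with a score of -1000 + 8*100 - 200 + 1000 = 600.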
Running the Lab environment
You can choose either of the two following lab setups, depending on whether you want to use a pre-packaged Eclipse environment or your own favorite editor. If you are unsure, go with Eclipse.
Eclipse version
- Make sure that you have Java 1.6 (java -version). If not, add it with the following command: module add prog/j2sdk/1.6
- Add the Eclipse module with the following command:
module initadd prog/eclipse
Then log out and log in again.
- Prepare the local files e.g.:
mkdir TDDC17; cd TDDC17
cp -r /home/TDDC17/www-pub/sw/lab_packages/lab1_workspace/ .
- Start the Eclipse editor:
eclipse
- When the Eclipse editor asks for a path to the workspace, choose the lab1_workspace folder which you copied in step 3.
- Right-click on the project and select refresh.
- The Eclipse editor will compile the Java files automatically. You can start the lab by using the drop-down menu on the Run button (green circle with a white triangle/play symbol). You may have to select "LIUVacuumAgent" in the drop-down menu.
Console version
- Make sure that you have Java 1.6 (java -version). If not, add it with the following command: module add prog/j2sdk/1.6
- Prepare the local files e.g.:
mkdir TDDC17; cd TDDC17
mkdir lab1; cd lab1
cp -r /home/TDDC17/www-pub/sw/lab_packages/lab1_workspace/project/* .
- To compile the required local classes:
./compile
- Now you can run the simulation environment GUI:
./start
Running it on your personal computer
As the lab is self-contained, you can follow the directions above and simply copy the directory over to your personal computer. To make things easier we now also provide a .zip file with the workspace folder; just extract it and point Eclipse to it during start-up. If you do not have Eclipse installed, it is recommended that you download a version suitable for your platform here.
Detailed Lab Task Description
Play around with the GUI and try the different options and agents.
- Using MyVacuumAgent.java as a template, implement an agent with an internal state that helps the agent clean more effectively/efficiently than the reactive/random agents. Note that an example of an agent with a simple state representation is provided (very limited: it only moves forward and sucks dirt) that you may modify in any way you wish. The agent must be able to suck up all dirt in a rectangular world of unknown dimensions without obstacles and shut down on the starting position. You can use the already mentioned compile script to compile your changed agent (or use Run if you are using the Eclipse editor; files are compiled automatically when changed). Evaluate the performance of the agent. In what ways is it better or worse than the reactive agent?
- Write explanations of how your agent works: what is the main decision "loop", what data you store in your agent, and why you have chosen your solutions, if there were alternatives, etc. This should add up to a report of approx. 1-2 A4 pages. Commented or easily readable code is also required.
How to report your results
Demonstrate the lab to the lab assistant and hand in the code as well as the report from part 2 according to the submission instructions.
Page responsible: Fredrik Heintz
Last updated: 2014-08-29