TDDC17 Artificial Intelligence
TDDC17-Lab1
Aim
The "agent" paradigm has rapidly become one of the major conceptual frameworks for understanding and structuring research activities within Artificial Intelligence. In addition, many implementational frameworks are being introduced and used both in research and industry to control the complexity of sophisticated distributed software systems where the nodes exhibit behviors often associated with intelligent behavior.
The purpose with this lab is to introduce the agent concept and allow you to implement a search functionality which can then be evaluated relative to a vacuum cleaning agent application.
The lab consists of two parts. In the first you will familiarize yourselves with the software agent environment and world simulator. In the second part you will implement a search algorithm which can be used by the agent to provide cleaning capabilities in the vacuum cleaning world.
Preparation
Read chapter 2 in the course book. Pay particular attention to section 2.3 (agent environments) and section 2.4 (different agent types). After this, read this lab description in detail.
You may also want to consult a book about Lisp, such as Programmering i Lisp by Anders Haraldsson, or an online Lisp reference such as the ANSI and GNU Common Lisp documentation.
Lab assignment
Your assignment is to construct a vacuum agent. The agent should be able to remove all the dirt in a rectangular world of unknown size with a random distribution of dirt and obstacles. When the world is cleaned the agent should return to the start position and shut down.
Lab system
The lab system consists of code handling the environment and the agent's interaction with it. What you have to do is write the code for your own agent. The code you'll need is available in the folder http://www.ida.liu.se/~TDDC17/sw/aima/agents.
The environment simulator
You can test your agent in the environment simulator. The simulator consists of a world and agents. Agents execute actions in the environment depending on what they can observe. The actions influence the environment, which in turn generates new sensory impressions for the agents. The simulator handles this interaction in a discrete manner; the world advances in steps, where each step can be described as follows (see the conceptual sketch after the list):
- The agents perceive the world
- The agents decide on their actions
- The actions are executed
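Conceptually, one step of the simulator applies each agent's program to its percept and then executes the chosen actions. The sketch below only illustrates that contract and is not the actual basic-env.lisp code; simulate-one-step, env-percept and env-execute are made-up names, and agent-program is the accessor for the agent's program slot.

(defun simulate-one-step (env agents)
  ;; 1 and 2: each agent perceives the world and decides on an action.
  (let ((actions (mapcar #'(lambda (agent)
                             (funcall (agent-program agent)
                                      (env-percept env agent)))
                         agents)))
    ;; 3: all decided actions are executed in the environment.
    (mapcar #'(lambda (agent action)
                (env-execute env agent action))
            agents actions)))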
The implementation of the simulator
This lab only concerns the agents, and therefore you do not need to concern yourselves with the implementation details of the simulator, or the detailed representation of the environment.
The main part of the code for the simulator is in the file:
http://www.ida.liu.se/~TDDC17/sw/aima/agents/basic-env.lisp.
The file http://www.ida.liu.se/~TDDC17/sw/aima/agents/grid-env.lisp provides definitions for a two-dimensional environment consisting of squares. The agents start in square (1 1) facing east.
The interface
Basically, the agents may perceive dirt or obstacles in the squares, and they can act by moving between the squares and removing dirt.
Percepts
A percept for the vacuum agent is a list of three elements:
- bump: t if the agent has bumped into an obstacle, else nil.
- dirt: t if the current square contains dirt, else nil.
- home: t if the agent is at the start position, else nil.
Output
The output from the agent's program is a symbol, or a small list, that represents an action. For example, the output could be forward, which means that the agent will try to advance one square in its direction; suck, which means that the agent removes the dirt from its current square; or (turn right), which means that the agent turns 90 degrees to the right. (The simulator executes the actions by calling the functions with the same name.)
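To make the interface concrete, here is a minimal sketch of a program that maps a (bump dirt home) percept to one of these actions. It only illustrates the percept-to-action contract and is not a sensible agent; in particular, the shut-off action name should be checked against vacuum.lisp.

(defun sketch-program (percept)
  "Interface demo only: map a (bump dirt home) percept to an action."
  (destructuring-bind (bump dirt home) percept
    (cond (dirt 'suck)          ; remove dirt in the current square
          (bump '(turn right))  ; just bumped, so turn away
          (home 'shut-off)      ; shutdown action (fires at the start, too)
          (t 'forward))))       ; otherwise advance one square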
  !---!---!---!---!---!---!
5 ! # ! # ! # ! # ! # ! # !
  !---!---!---!---!---!---!
4 ! # !   !   !   !   ! # !
  !---!---!---!---!---!---!
3 ! # ! * ! # !   !   ! # !
  !---!---!---!---!---!---!
2 ! # ! * !   ! # !   ! # !
  !---!---!---!---!---!---!
1 ! # ! A ! # ! * !   ! # !
  !---!---!---!---!---!---!
0 ! # ! # ! # ! # ! # ! # !
  !---!---!---!---!---!---!
    0   1   2   3   4   5

Figure 1: Example of an environment in the vacuum-agent world; # represents obstacles, * is dirt and A is the agent.
Agents
At the end of the file http://www.ida.liu.se/~TDDC17/sw/aima/agents/vacuum.lisp two agents are defined, random-vacuum-agent and reactive-vacuum-agent. Random-vacuum-agent ignores the percepts it gets and chooses actions completely at random. The reactive agent uses the information from the percepts: it removes dirt and it turns when it has bumped into an obstacle. Generally the agent goes straight forward, but sometimes it turns. Since it has no memory it just wanders aimlessly around the world and may visit the same square many times. These agents do not behave in a particularly intelligent manner. Your agent ought to do a lot better.
Datatypes
Agents have their own datatype, agent, and are created with the constructor make-agent, which has the following arguments of interest:
(make-agent &key name program)
where name is the agent's name (a symbol) and program is a function coding the agent's behavior.
When you create your own agent, do not use this constructor directly. Look at the end of the file http://www.ida.liu.se/~TDDC17/sw/aima/agents/vacuum.lisp, where the reactive and random vacuum agents are defined, and reuse that code; a rough sketch of that pattern follows below.
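For illustration, the sketch below shows roughly what reusing that pattern could look like; check the exact defstructure syntax against vacuum.lisp before relying on it.

;; Rough sketch, modeled on the agent definitions at the end of
;; vacuum.lisp; verify the exact defstructure syntax there.
(defstructure (my-vacuum-agent
               (:include agent
                (program
                 #'(lambda (percept)
                     (declare (ignore percept))
                     'forward))))            ; placeholder behavior
  "A trivial agent that always tries to move forward.")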
Program
The interesting part of the agent is its program. The program is a function that takes a percept as input and returns an action as output. The simulator computes what the agent can perceive (a percept), applies the function (your program) to the percept, and executes the resulting action. Note (again) that the agent has no authorization to change the world directly, for example via some global reassignment.
As you can see from the definitions in the source code, the random and reactive vacuum agents do not remember what they have done and experienced. If we want the agent to have a memory we have to store information "inside" the agent.
Figure 2: Agent with internal state
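Below is a minimal sketch of such a stateful program, written as a lexical closure (the technique is described further in the lab task description below). The percept layout (bump dirt home) follows the interface section; a real solution would store something richer than a single flag, for example a map of visited squares.

(defun make-stateful-program ()
  "Return an agent program whose private state lives in a closure."
  (let ((left-home nil))  ; internal state: has the agent left the start square?
    #'(lambda (percept)
        (destructuring-bind (bump dirt home) percept
          (unless home (setf left-home t))
          (cond (dirt 'suck)                      ; always clean first
                ((and home left-home) 'shut-off)  ; back at start: shut down
                (bump '(turn right))              ; turn away from obstacles
                (t 'forward))))))

An agent running this program sweeps forward, turns at obstacles, and shuts down the first time it returns to the start square; remembering that it has left home at all is exactly the kind of behavior the internal state makes possible and the reactive agent cannot express.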
An example
When an agent is started in a world (for example with run-vacuum), the following happens.
The agent and the world are created and get their initial
values. The agent is situated at square (1 1) and is turned
towards the east. Dirt and obstacles are randomly distributed in
the world. The simulator gives the agent a percept, for example (nil nil t), which means that the agent has not just bumped into an obstacle, that there is no dirt in the current square, and that it is situated at the start position. The agent's program takes the percept as an argument, updates the agent's internal state if it has one, and returns an action, for example forward.
The simulator executes the action in the world so that the world's
state is changed.
The initial score of the agent is -1000, and each action changes the score. Sucking up dirt increases the score by 100 points, while shutting down at the starting square gives 1000 points. All other actions decrease the score by one point.
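As a check on the arithmetic, the sketch below encodes the scoring rules described above; final-score is a made-up helper, and the real bookkeeping is done by the simulator.

(defun final-score (n-sucks n-other-actions shut-down-at-home-p)
  "Score under the rules above: start at -1000, +100 per suck,
+1000 for shutting down at the start square, -1 per other action."
  (+ -1000
     (* 100 n-sucks)
     (if shut-down-at-home-p 1000 0)
     (- n-other-actions)))

;; Example: 5 squares sucked clean, 44 moves/turns, shut down at home:
;; (final-score 5 44 t) => 456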
Running the agent in the simulator
To run the simulator with your new agent you write
(run-vacuum :agent (my-agent 'A))
where my-agent is your function. If you want to see the simulation, just add the argument :display t.
You can vary the size of the world, the probability of dirt and the probability of obstacles with the arguments x, y, dirtprobability and hinderprobability.
To run your agent in an 8 x 8 world with a 25% chance of dirt and some obstacles you can use
(run-vacuum :agent (my-agent 'Bo) :display t :x 8 :y 8 :dirtprobability 0.25 :max-steps 200 :hinderprobability 0.25)
The max-steps parameter specifies how many actions the agent is allowed to take in the environment.
If you want to compare your agent with the other agents, just use the function vacuum-trials. It runs the agents in several worlds and returns the average results. The function is called in the following manner:
(vacuum-trials :agent-types '(reactive-vacuum-agent random-vacuum-agent my-agent))
Detailed Lab Task Description
Note! The basic rule is to avoid copying files from the Aima library. Write your definitions in a separate file that can be loaded when you run the aima system.
1. If you don't have Allegro installed, you need to add it with the command:
module add prog/allegro
2. Start Allegro from the background menu or from a shell with:
emacs-allegro-cl
Then write the following commands to load the aima system and the files needed to run the agents (note the initial colon):
:ld ~TDDC17/www-pub/sw/aima/aima
(aima-load 'agents)
If you want you can test-run one of the implemented agents, reactive-vacuum-agent, by writing:
(run-vacuum :display t)
If you want a graphical interface to your vacuum agent, execute the following command:
(load "/home/TDDC17/www-pub/sw/aima/agentgui/agent-gui.cl")
3. Implement an agent with an internal state that helps the agent clean more effectively/efficiently than the reactive/random agents. The agent must be able to suck up all dirt in a rectangular world of unknown dimensions (without obstacles) and shut down on the starting position. Evaluate the performance of the agent. In what ways is it better or worse than the reactive agent?
In order to implement an internal state you need a variable that is local to the agent (using a global variable is not a valid solution). The agent must update its own state, which can be implemented by a so-called lexical closure. A lexical closure is basically a function together with its lexical environment (e.g. its variables). For example,
(let ((v 1)) #'(lambda () (...)))
is a simple example of how one can create a lexical closure that keeps the value of the internal variable v; see also the stateful-program sketch after Figure 2 above.
(If you want to, you can skip to step 4 and implement one agent that can handle the requirements in both step 3 and step 4.)
4. Create an agent capable of cleaning a world with random obstacles. (You should choose a hinderprobability between 0.1 and 0.25.) You can also use the function vacuum-hinder-trials to see how your agent compares to, for example, the reactive agent. This function is similar to vacuum-trials but runs the agents in a world with obstacles. Your agent should be able to suck up all the reachable dirt and return to the starting position. You should implement a search algorithm (online, offline or both) as part of your agent; a sketch of such a search component follows after this list.
5. Write an explanation of how your agent works: what the main decision loop is, some notes about what data you store in your agent, and why you have chosen your solutions, if there were alternatives, etc. This should add up to a report of approximately 1-2 A4 pages. Commented or easily readable code is also required.
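As a starting point for the search component in step 4, here is a minimal sketch of breadth-first search over the agent's internal map. It assumes a particular representation that is not part of the AIMA code: the map is a hash table from (x y) lists to :free, :obstacle or :unknown, and bfs-path is a name of our own choosing.

(defun bfs-path (start goal-p map)
  "Breadth-first search from START to the nearest square satisfying
GOAL-P, moving only through squares MAP marks as :free.  Returns a
list of (x y) squares from START to the goal, or NIL if unreachable."
  (let ((frontier (list (list start)))            ; queue of reversed paths
        (visited  (make-hash-table :test #'equal)))
    (setf (gethash start visited) t)
    (loop while frontier do
      (let* ((path   (pop frontier))
             (square (first path)))
        (when (funcall goal-p square)
          (return-from bfs-path (reverse path)))
        (destructuring-bind (x y) square
          (dolist (next (list (list (1+ x) y) (list (1- x) y)
                              (list x (1+ y)) (list x (1- y))))
            ;; Expand only squares known to be free and not yet visited.
            (when (and (eq (gethash next map :unknown) :free)
                       (not (gethash next visited)))
              (setf (gethash next visited) t)
              (setf frontier (append frontier (list (cons next path)))))))))))

;; Example: plan a route home, where MAP is the agent's internal map:
;; (bfs-path current-square #'(lambda (sq) (equal sq '(1 1))) map)

The same search can find the nearest square with known dirt, or the nearest :unknown square while exploring; returning home at the end is then just a change of goal predicate.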
How to report your results
Demonstrate part 4 to the lab assistant and hand in the code for parts 3 and 4, as well as the report from part 5.
FAQ
How do I show the previously entered command in Emacs Allegro?
Press CTRL-C + CTRL-P at the prompt. It works like pressing the up-arrow in a UNIX shell terminal.
How do I remove all the printouts that are displayed after several errors?
Enter :res at the prompt.
How can I create an array that can grow and shrink?
Use the adjust-array function that is described in the Lisp book.
Page responsible: Fredrik Heintz
Last updated: 2014-08-29