TDDC17 Artificial Intelligence

TDDC17-Lab2


Aim

The aim of this lab is for you to understand how different search strategies work, and to gain experience with a domain-specific application of search algorithms. You will explore the strengths and weaknesses of different search strategies and acquire deeper hands-on knowledge to complement your knowledge about search theory.

Preparation

Read chapters 3 and 4 in the course book. Not all parts of chapters 3 and 4 are required for completing the lab, but it is recommended that you read them in any case.

About the lab

Your task in this lab is to study and compare different search algorithms, and to study what happens when they are applied in a specific domain. You will run the algorithms in the vacuum domain, and the results you obtain will be the basis for an analysis of their applicability in this domain. The analysis should include the time and space complexity of the algorithms, and whether they are optimal or complete. To help you in the analysis, there are a number of questions for you to answer. These lab exercises are more concerned with the analysis of algorithms and reasoning about their applicability than with implementation.

Lab system

The lab system consists of an environment simulator that can execute several search algorithms. The difference from the previous lab is that the agent now has complete knowledge about the world.

Given a problem description, a vacuum search agent searches for a sequence of actions that describes the movements it has to make to clean all dirty squares, return to the start position, and shut itself down. A prerequisite for solving this problem is that the vacuum agent has complete knowledge about the world, in particular about where the dirt is located. This implies that no percepts are needed while the found action sequence is executed, since complete information about the world is assumed and no dynamic changes occur other than those caused by the agent.
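
To make the problem formulation concrete, the sketch below shows what such a search state could look like in Java. All names are hypothetical and for illustration only; the lab's actual classes (from the aima-java code base) differ.

import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the state the vacuum agent searches over: its
// position, its heading, and the set of squares that are still dirty.
public class VacuumState {
    final int x, y;            // current grid position
    final int heading;         // 0 = up, 1 = right, 2 = down, 3 = left
    final Set<String> dirt;    // coordinates "x,y" of the remaining dirt

    VacuumState(int x, int y, int heading, Set<String> dirt) {
        this.x = x;
        this.y = y;
        this.heading = heading;
        this.dirt = dirt;
    }

    // Goal test: no dirt left and the agent has returned home.
    boolean isGoal(int homeX, int homeY) {
        return dirt.isEmpty() && x == homeX && y == homeY;
    }

    // Successor state for the Suck action: same square, its dirt removed.
    VacuumState suck() {
        Set<String> rest = new HashSet<>(dirt);
        rest.remove(x + "," + y);
        return new VacuumState(x, y, heading, rest);
    }
}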

GUI
The provided environment simulator application presented in Figure 1 is very similar to the one from the previous lab.

Figure 1: Example of an environment in the vacuum agents' world.

The main window of the application is divided into two parts: the environment visualization pane on the left and the text output pane on the right, which displays messages from the selected search algorithm.
An example output for the A* algorithm looks like this:

Running A* with H2 heuristic
Action[name==TurnRight]
Action[name==MoveForward]
Action[name==MoveForward]
Action[name==TurnLeft]
Action[name==MoveForward]
Action[name==Suck]
Action[name==MoveForward]
Action[name==TurnLeft]
Action[name==MoveForward]
Action[name==TurnLeft]
Action[name==Suck]
Action[name==MoveForward]
Action[name==MoveForward]
Action[name==TurnRight]
Action[name==MoveForward]
Action[name==NoOp]
pathCost : 16.0
nodesExpanded : 5831
queueSize : 12999
maxQueueSize : 13000
Search time = 4279 ms

The listing contains the sequence of actions found by the algorithm, the path cost, the number of expanded nodes, the queue size (see "open list" in section 3.5 of the course book), the maximum queue size, and the execution time.
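
As a rough illustration of where these numbers come from, the sketch below shows a best-first search loop that maintains an open list ordered by f(n) = g(n) + h(n) and records the statistics as it runs. The code is hypothetical, not the lab's implementation; the Node class and successor generation are left abstract.

import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of a best-first search loop recording the statistics printed in
// the output pane (hypothetical code, not the lab's implementation).
public class SearchStats {
    static class Node {
        double g;                  // path cost from the start node
        double h;                  // heuristic estimate to the goal
        double f() { return g + h; }
    }

    static void search(Node start) {
        PriorityQueue<Node> open =
                new PriorityQueue<>(Comparator.comparingDouble(Node::f));
        open.add(start);
        int nodesExpanded = 0;
        int maxQueueSize = open.size();

        while (!open.isEmpty()) {
            Node n = open.poll();  // take the node with the lowest f-value
            nodesExpanded++;       // "nodesExpanded" in the output
            // ... if n is a goal: report n.g as "pathCost" and return ...
            // ... otherwise: add n's successors to the open list ...
            maxQueueSize = Math.max(maxQueueSize, open.size());
        }
    }
}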

The following drop-down menus are available to set the parameters of the world and the search algorithms (from left):

  1. Environment size.
  2. Hinder probability indirectly specifies the number of obstacles in the environment.
  3. Dirt probability indirectly specifies the number of dirty squares in the environment.
  4. Random environment specifies whether a new environment should be generated for every run. Disabling it is useful for testing an agent in the same environment several times.
  5. Type of search algorithm lists the available algorithms.
  6. Depth limit for Depth-Limited and Iterative Deepening Search.
  7. Heuristic function for the applicable algorithms (see the sketch after this list):
    • H1: (number of dirty squares left)*2
    • H2: (number of dirty squares left)*2 + ((number of dirty squares left)+1)*(Manhattan distance to the closest dirt, or to home if no dirt is left)
  8. Node expansion limit specifies the number of nodes to expand before giving up the search.
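
As a concrete reading of the two heuristic formulas above, the following sketch computes H1 and H2 for a state reduced to the agent's position, the home position, and the remaining dirt. The names and the dirt representation are assumptions for illustration; this is not the lab's code.

import java.util.Set;

// Sketch of the two heuristic functions listed above (assumed names and
// state representation, not the lab's code).
public class VacuumHeuristics {
    // H1: two units of estimated cost per remaining dirty square.
    static int h1(Set<String> dirt) {
        return dirt.size() * 2;
    }

    // H2: H1 plus a weighted Manhattan distance to the nearest dirty
    // square, or the distance home once everything is clean.
    static int h2(int x, int y, int homeX, int homeY, Set<String> dirt) {
        if (dirt.isEmpty()) {
            return Math.abs(x - homeX) + Math.abs(y - homeY);
        }
        int closest = Integer.MAX_VALUE;
        for (String d : dirt) {
            String[] p = d.split(",");
            int dx = Math.abs(x - Integer.parseInt(p[0]));
            int dy = Math.abs(y - Integer.parseInt(p[1]));
            closest = Math.min(closest, dx + dy);
        }
        return dirt.size() * 2 + (dirt.size() + 1) * closest;
    }
}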

The following buttons are available (from left):

  1. Clear empties the text output pane.
  2. Prepare generates a random environment based on the selected parameters.
  3. Run starts the selected search algorithm.
  4. Step allows for stepping through the found list of actions.
  5. Cancel terminates the search algorithm.

Task

  1. Make sure that you have Java 1.6 (java -version). If not, add it with the following command:
    • module add prog/j2sdk/1.6.0_11
  2. Copy the GUI execution script:
    • cp /home/TDDC17/www-pub/sw/aima-java/start_search .
  3. Start the GUI:
    • ./start_search
  4. Test the available search algorithms with different parameters on different world sizes.

Questions and Tasks

In your report, you will have to answer all the questions and perform all the tasks in the following list:


1. In the vacuum cleaner domain, what are the states and actions? What is the branching factor?
2. What is the difference between Breadth First Search and Uniform Cost Search in a domain where the cost of each action is 1? Run the two algorithms and compare them. Explain your results.
3. If one were to use A* to search for a path to one specific square in the vacuum domain, what could the heuristic (h) be? What about the cost function (g)? Is your heuristic admissible?
4. What is the difference between using duplicate checking in breadth-first and in depth-first search? Based on your experimental evaluation of the two algorithms, is duplicate checking implemented in the provided algorithms?
5. Depending on the implementation, when a particular algorithm chooses the next applicable action to perform, it can do so in a fixed order (always the same) or randomly. Which search algorithms can be affected by this choice, and how?
6. Compare A* with Greedy Search empirically and explain the results for both the H1 and H2 heuristics.
7. Draw and explain. Choose your three favorite search algorithms and apply them to any problem domain (it might be a good idea to use a domain where you can identify a good heuristic function). Draw the search tree for each of them, and explain how they proceed in the search. Also include the memory usage. You may attach a hand-made drawing.
8. Look at all the offline search algorithms presented in chapter 3, plus A* search. Are they complete? Are they optimal? Explain why!
9. Assume that you had to go back and do lab 1 once more. How would you then use offline search to guide the agent's execution? If you already used search, would you change the search algorithm? Why, or why not?

Examination

Write a report that presents what has been done, includes answers to all the questions above, and presents the results of all the tasks.

Note!

  • When testing the algorithms, it is best to start with small worlds to maximize the chance of success!
