
Introduction

Preliminaries

A search algorithm takes a search problem as input and returns a solution, or an indication of failure. We consider algorithms that superimpose a search tree over the state-space graph, forming various paths from the initial state, trying to find a path that reaches a goal state. Each node in the search tree corresponds to a state in the state space and the edges in the search tree correspond to actions.

The state space describes the (possibly infinite) set of states in the world, and the actions that allow transitions from one state to another.

We can expand a node by considering the available ACTIONS for that state, using the RESULT function to see where those actions lead, and generating a new node (called a child node or successor node) for each of the resulting states.
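As a minimal sketch, expansion might look like the following in Python. The Node structure and the problem object's actions, result, and action_cost methods are illustrative names assumed here, not fixed by the text above.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                        # the state this node corresponds to
    parent: Optional["Node"] = None   # the node that generated this one
    action: Any = None                # the action applied to the parent's state
    path_cost: float = 0.0            # cost of the path from the initial state

def expand(problem, node):
    """Generate the child (successor) nodes of `node`."""
    s = node.state
    for action in problem.actions(s):        # available ACTIONS for this state
        s_next = problem.result(s, action)   # RESULT: where the action leads
        cost = node.path_cost + problem.action_cost(s, action, s_next)
        yield Node(state=s_next, parent=node, action=action, path_cost=cost)
```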

The frontier is the set of nodes that we could expand. Deciding which node to consider first is the essence of search.

We say that any state that has had a node generated for it has been reached (whether or not that node has been expanded).

Best-first search

A very general approach is called best-first search, in which we choose a node with the minimum value of some evaluation function f(n).

On each iteration we choose a node on the frontier with minimum f(n) value, return it if its state is a goal state, and otherwise apply EXPAND to generate child nodes.

Each child node is added to the frontier if it has not been reached before, or is re-added if it is now being reached with a path that has a lower path cost than any previous path.
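Putting these steps together, here is a hedged Python sketch of the loop, reusing the Node and expand helpers above and assuming the problem object also exposes initial and is_goal (again, illustrative names only) and that states are hashable:

```python
import heapq
import itertools

def best_first_search(problem, f):
    """Best-first search: repeatedly expand the frontier node with minimum f(n)."""
    node = Node(state=problem.initial)
    counter = itertools.count()                  # tie-breaker for equal f values
    frontier = [(f(node), next(counter), node)]  # priority queue ordered by f
    reached = {problem.initial: node}            # best node found so far for each state
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.is_goal(node.state):
            return node                          # solution: follow parent links for the path
        for child in expand(problem, node):
            s = child.state
            # add if the state is new, or re-add if reached by a cheaper path
            if s not in reached or child.path_cost < reached[s].path_cost:
                reached[s] = child
                heapq.heappush(frontier, (f(child), next(counter), child))
    return None                                  # failure: no solution exists
```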

Redundant paths

The search tree can contain repeated states, generated by redundant paths. A cycle is a special case of a redundant path.

Algorithms that cannot remember the past are doomed to repeat it

There are three approaches to this issue:

  1. We can remember all previously reached states. This is appropriate for state spaces with many redundant paths.
  2. We can avoid worrying about repeating the past. This is appropriate when it is rare or impossible for two paths to reach the same state.
  3. We can compromise and check for cycles, but not for redundant paths in general. Cycles can be detected with no additional memory by following the chain of parents upward to see whether the state at the end of the path appears earlier in the path (see the sketch after this list).
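Building on the Node sketch above, such a cycle check might look like this; it only walks the parent links of the candidate node, so no extra bookkeeping is required:

```python
def is_cycle(node):
    """Return True if `node`'s state already appears earlier on its own path.

    Walks up the chain of parent links, so no memory beyond the path
    itself is needed.
    """
    ancestor = node.parent
    while ancestor is not None:
        if ancestor.state == node.state:
            return True
        ancestor = ancestor.parent
    return False
```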

We call a search algorithm a graph search if it checks for redundant paths and a tree-like search if it does not check.

Measuring problem-solving performance

  • COMPLETENESS: Is the algorithm guaranteed to find a solution when there is one, and to correctly report failure when there is not?
  • COST OPTIMALITY: Does it find a solution with the lowest path cost of all solutions?
  • TIME COMPLEXITY: How long does it take to find a solution? This can be measured in seconds, or more abstractly by the number of states and actions considered.
  • SPACE COMPLEXITY: How much memory is needed to perform the search?

To be complete, a search algorithm must be systematic in the way it explores an infinite state space, making sure it can eventually reach any state that is connected to the initial state.

For an implicit state space, complexity can be measured in terms of d, the depth or number of actions in an optimal solution; m, the maximum number of actions in any path; and b, the branching factor, or number of successors of a node that need to be considered.
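As a rough worked illustration (assuming, for concreteness, a uniform tree with branching factor b searched tree-like down to depth d, a case not spelled out above), the number of generated nodes grows geometrically in b:

```latex
1 + b + b^{2} + \dots + b^{d} = \frac{b^{d+1} - 1}{b - 1} = O(b^{d})
```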