Posted on 21.08.2009

Lately I have been reading about searching in game trees. So a friend of mine sent me the “Escape from Zurg” paper, which talks about problem solving by tree searching with Haskell.

In a large class of problems one is given a start state and some predicate for a desired final state. Moreover, the rules that dictate how a successor state is generated from a previous one are also given. Take a board game for instance, like tic-tac-toe. There is the initial state (the empty board), the predicate for a desired state (a state in which three of my pieces are aligned), and rules to create one state from another (the rules of the game). Another example of such problems is the “Escape from Zurg” one, which is stated as follows:
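These three ingredients can be made concrete with a tiny sketch (in Python, for illustration; the toy “reach 10 from 1” problem is mine, not from the article):

```python
# Any search problem of this kind is fully described by three pieces:
# a start state, a successor function, and a goal predicate.
# Hypothetical toy problem: reach 10 starting from 1,
# where the rules allow adding 1 or doubling.
start = 1

def successors(n):
    # the rules of the "game": from n we may move to n+1 or n*2
    return [n + 1, n * 2]

def goal(n):
    # the predicate for a desired final state
    return n == 10

print(successors(4))  # → [5, 8]
```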

Buzz, Woody, Rex, and Hamm have to escape from Zurg. They merely have to cross one last bridge before they are free. However, the bridge is fragile and can hold at most two of them at the same time. Moreover, to cross the bridge a flashlight is needed to avoid traps and broken parts. The problem is that our friends have only one flashlight with one battery that lasts for only 60 minutes (this is not a typo: sixty). The toys need different times to cross the bridge (in either direction):

| Toy   | Time       |
| ----- | ---------- |
| Buzz  | 5 minutes  |
| Woody | 10 minutes |
| Rex   | 20 minutes |
| Hamm  | 25 minutes |

Since there can be only two toys on the bridge at the same time, they cannot cross the bridge all at once. Since they need the flashlight to cross the bridge, whenever two have crossed the bridge, somebody has to go back and bring the flashlight to those toys on the other side that still have to cross the bridge. The problem now is: In which order can the four toys cross the bridge in time (that is, in 60 minutes) to be saved from Zurg?

This class of problems is usually solved by searching. The initial state and all its successors ultimately build a DAG (directed acyclic graph) called the *search space*. Some search spaces are small, like those of tic-tac-toe or the “Escape from Zurg” puzzle. Others are large, like that of Reversi; even larger, like that of Chess; others are infinite. Needless to say, some ways of searching are better than others. This same puzzle, for instance, was solved in Scheme by Ben Simon using the cool `amb` operator. The `amb` operator uses first-class continuations to backtrack and take another path in case the first one fails. It was an elegant solution for this puzzle, but it is not adequate for general searching because it backtracks blindly. There are other ways to search blindly, for instance *depth-first search* and *breadth-first search*.
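Python has no first-class continuations, but the backtracking behaviour of `amb` can be sketched with plain recursion (my sketch, not Ben Simon's code): each call is a choice point, and a failed branch returns control to the most recent one.

```python
# One choice point per list of options; on failure we fall back to the
# previous choice point and try the next alternative (blind backtracking).
def amb_search(chosen, options, ok):
    if not options:
        # no choice points left: succeed or fail on the predicate
        return chosen if ok(chosen) else None
    for choice in options[0]:
        result = amb_search(chosen + [choice], options[1:], ok)
        if result is not None:
            return result  # first success wins
    return None  # all alternatives exhausted: backtrack further up

# find two digits a < b with a + b == 7
print(amb_search([], [range(10), range(10)],
                 lambda c: c[0] + c[1] == 7 and c[0] < c[1]))  # → [0, 7]
```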

In a depth-first search, we analyse the successors of a given state until there are no more successors to analyse, and then backtrack to a previous level of the tree. In a breadth-first search, we analyse the whole fringe of the tree (the *horizon nodes*) before expanding it one level further using the successor rules. This is good for infinite search spaces, because we will eventually find a solution if there is one. The implementation in Scheme is straightforward:

```
(define (depth-first states successors goal?)
  (if (null? states)
      #f
      (let ((state (car states)))
        (if (goal? state)
            state
            (depth-first (append (successors state)
                                 (cdr states))
                         successors
                         goal?)))))

(define (breadth-first states successors goal?)
  (if (null? states)
      #f
      (let ((state (car states)))
        (if (goal? state)
            state
            (breadth-first (append (cdr states)
                                   (successors state))
                           successors
                           goal?)))))
```
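The only difference between the two procedures is where new states enter the list: depth-first prepends the successors, breadth-first appends them. The same pair can be written iteratively (a Python transcription of the Scheme above, using a deque as the frontier; like the original, it keeps no visited set):

```python
from collections import deque

def search(start, successors, goal, depth_first=True):
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        if goal(state):
            return state
        if depth_first:
            # push successors at the front, preserving their order (stack)
            frontier.extendleft(reversed(successors(state)))
        else:
            # queue successors at the back (queue)
            frontier.extend(successors(state))
    return None
```

Swapping that one branch turns one search into the other.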

This is actually all we need for simple searches. Of course we need to supply the initial state, the procedure that generates successors and the goal predicate. These are specific for each kind of problem. Let’s see the Zurg problem procedures:

```
;; the type of the states found in our puzzle
(define-type state
  (previous unprintable:) ;; previous state
  near                    ;; toys on the near shore
  far                     ;; toys on the far shore
  flashlight              ;; location of the flashlight
  time)                   ;; time consumed so far

(define (start-state)
  (make-state #f
              '(buzz woody rex hamm)
              '()
              'near
              0))

(define (final-state? state)
  (and (null? (state-near state))
       (<= (state-time state) 60)))

;; at most two toys can travel across the bridge;
;; the flashlight must be used in all crossings
(define (successors state)
  (let ((orig (if (eqv? (state-flashlight state) 'near)
                  (state-near state)
                  (state-far state)))
        (dest (if (eqv? (state-flashlight state) 'near)
                  (state-far state)
                  (state-near state))))
    (map (lambda (toy-pair)
           (let ((toy1 (car toy-pair))
                 (toy2 (cdr toy-pair)))
             (make-next-state state
                              (remove toy2 (remove toy1 orig))
                              (ucons toy1 (cons toy2 dest))
                              (max (time-cost toy1)
                                   (time-cost toy2)))))
         (product orig orig))))
```

The previous procedures use some utilities:

```
(define (make-next-state state orig dest time)
  (if (eqv? (state-flashlight state) 'near)
      (make-state state orig dest 'far (+ (state-time state) time))
      (make-state state dest orig 'near (+ (state-time state) time))))

;; the time each toy takes to cross the bridge
(define time-cost
  (let ((costs '((buzz . 5)
                 (woody . 10)
                 (rex . 20)
                 (hamm . 25))))
    (lambda (toy)
      (let ((pair (assv toy costs)))
        (and pair
             (cdr pair))))))

;; cons a onto b unless it is already there
(define (ucons a b)
  (if (memv a b)
      b
      (cons a b)))

;; all unordered pairs of elements of lis1 and lis2
(define (product lis1 lis2)
  (let lp1 ((lis1 lis1)
            (res '()))
    (if (null? lis1)
        res
        (let ((i1 (car lis1)))
          (let lp2 ((lis2 lis2)
                    (res res))
            (if (null? lis2)
                (lp1 (cdr lis1) res)
                (let ((i2 (car lis2)))
                  (if (member (cons i2 i1) res)
                      (lp2 (cdr lis2) res)
                      (lp2 (cdr lis2)
                           (cons (cons i1 i2) res))))))))))
```
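It is worth noting what `product` returns when called with the same list twice: every unordered pair of toys, including pairs of a toy with itself, which is how a toy crossing alone is represented. In Python terms (my reading of the code, for illustration):

```python
from itertools import combinations_with_replacement

toys = ["buzz", "woody", "rex", "hamm"]
# unordered pairs with repetition: 4 "alone" pairs + 6 two-toy pairs
pairs = list(combinations_with_replacement(toys, 2))
print(len(pairs))                 # → 10
print(("buzz", "buzz") in pairs)  # → True, i.e. Buzz crossing alone
```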

Our `successors` procedure does not take time into account, so a branch of the tree can go on indefinitely. To guarantee that we arrive at a solution, we use a breadth-first search:

```
(define (print-path state)
  (let loop ((state state)
             (path '()))
    (if state
        (loop (state-previous state)
              (cons state path))
        (for-each println path))))

(print-path (breadth-first (list (start-state)) successors final-state?))
```

With the result:

```
#<state #14 near: (buzz woody rex hamm) far: () flashlight: near time: 0>
#<state #15 near: (rex hamm) far: (buzz woody) flashlight: far time: 10>
#<state #16 near: (woody rex hamm) far: (buzz) flashlight: near time: 20>
#<state #17 near: (woody) far: (rex hamm buzz) flashlight: far time: 45>
#<state #18 near: (buzz woody) far: (rex hamm) flashlight: near time: 50>
#<state #19 near: () far: (buzz woody rex hamm) flashlight: far time: 60>
```
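Each crossing costs the slower toy's time, so the schedule printed above can be checked by hand (a quick Python sanity check of the printed path):

```python
cost = {"buzz": 5, "woody": 10, "rex": 20, "hamm": 25}
crossings = [("buzz", "woody"),  # over:  max(5, 10) = 10
             ("woody",),         # back:  10
             ("rex", "hamm"),    # over:  max(20, 25) = 25
             ("buzz",),          # back:  5
             ("buzz", "woody")]  # over:  10
total = sum(max(cost[t] for t in group) for group in crossings)
print(total)  # → 60
```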

If we change our `successors` procedure a little, we can use a depth-first search:

```
(define (successors2 state)
  (let ((orig (if (eqv? (state-flashlight state) 'near)
                  (state-near state)
                  (state-far state)))
        (dest (if (eqv? (state-flashlight state) 'near)
                  (state-far state)
                  (state-near state))))
    (filter (lambda (s)
              (<= (state-time s) 60)) ;; filtering out bad states
            (map (lambda (toy-pair)
                   (let ((toy1 (car toy-pair))
                         (toy2 (cdr toy-pair)))
                     (make-next-state state
                                      (remove toy2 (remove toy1 orig))
                                      (ucons toy1 (cons toy2 dest))
                                      (max (time-cost toy1)
                                           (time-cost toy2)))))
                 (product orig orig)))))

(print-path (depth-first (list (start-state)) successors2 final-state?))
```

The advantage of changing the successors procedure is that depth-first search is usually faster than breadth-first search: in this case, 483 ms versus 24630 ms. It also uses less memory. So it is always a good idea to prune the search space as early as possible.