AI Search Algorithms: BFS and DFS Pseudocode, Iterative Version

Warm-up as you walk in
Write the pseudocode for breadth-first search and depth-first search
Iterative version, not recursive

class TreeNode
  TreeNode[] children()
  boolean isGoal()

BFS(TreeNode start)…
DFS(TreeNode start)…
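
One possible answer to the warm-up, as a minimal Python sketch. It assumes a TreeNode class exposing children() and is_goal() methods (a Python rendering of the Java-style interface above); the only difference between the two functions is whether the frontier behaves as a FIFO queue or a LIFO stack.

from collections import deque

def bfs(start):
    frontier = deque([start])              # FIFO queue
    while frontier:
        node = frontier.popleft()          # expand shallowest node first
        if node.is_goal():
            return node
        frontier.extend(node.children())
    return None                            # no goal in the tree

def dfs(start):
    frontier = [start]                     # LIFO stack
    while frontier:
        node = frontier.pop()              # expand deepest node first
        if node.is_goal():
            return node
        frontier.extend(node.children())
    return None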
Announcements
 
If you are not on Piazza, Gradescope, and Canvas
E-mail me: pvirtue@cmu.edu
 
No class next Mon 1/21, MLK Holiday
Recitation starting this Fri 3pm, GHC 4401 (recommended)
Bring laptop if you can (not required)
Start P0 before recitation to make sure Python 3.6 is working for you!
 
Reminder to be respectful of quiet areas in campus buildings
Announcements
Assignments:
HW1 (online)
Released at 4:30 pm today
Due Tue 1/22, 10 pm
P0: Python & Autograder Tutorial
Required, but worth zero points
Due Thu 1/24, 10 pm
No pairs, submit individually
Remaining programming assignments may be done in pairs
AI: Representation and Problem Solving
Agents and Search
Instructors: Pat Virtue & Stephanie Rosenthal
Slide credits: CMU AI, http://ai.berkeley.edu
Today
Agents and Environment
Search Problems
Uninformed Search Methods
Depth-First Search
Breadth-First Search
Uniform-Cost Search
Rationality, contd.

What is rational depends on:
Performance measure
Agent’s prior knowledge of environment
Actions available to agent
Percept sequence to date

Being rational means maximizing your expected utility
Rational Agents

Are rational agents omniscient?
No – they are limited by the available percepts
Are rational agents clairvoyant?
No – they may lack knowledge of the environment dynamics
Do rational agents explore and learn?
Yes – in unknown environments these are essential
So rational agents are not necessarily successful, but they are autonomous (i.e., transcend initial program)
Task Environment - PEAS
 
Performance measure
-1 per step; +10 food; +500 win; -500 die;
+200 hit scared ghost
Environment
Pacman dynamics (incl ghost behavior)
Actuators
North, South, East, West, (Stop)
Sensors
Entire state is visible
PEAS: Automated Taxi
 
Performance measure
Income, happy customer, vehicle costs, fines,
insurance premiums
Environment
US streets, other drivers, customers
Actuators
Steering, brake, gas, display/speaker
Sensors
Camera, radar, accelerometer, engine sensors,
microphone
Image: http://nypost.com/2014/06/21/how-google-might-put-taxi-drivers-out-of-business/
Environment Types
Dimensions for comparing Pacman and the automated taxi:
Fully or partially observable
Single-agent or multi-agent
Deterministic or stochastic
Static or dynamic
Discrete or continuous
Reflex Agents
Reflex agents:
Choose action based on current percept
(and maybe memory)
May have memory or a model of the
world’s current state
Do not consider the future consequences of
their actions
Consider how the world IS
Can a reflex agent be rational?
[Demo: reflex optimal (L2D1)]
[Demo: reflex optimal (L2D2)]
Demo Reflex Agent
[Demo: reflex optimal (L2D1)]
[Demo: reflex optimal (L2D2)]
Agents that Plan Ahead
Planning agents:
Decisions based on predicted consequences of actions
Must have a transition model: how the world evolves in response to actions
Must formulate a goal
Consider how the world WOULD BE
Spectrum of deliberativeness:
Generate complete, optimal plan offline, then execute
Generate a simple, greedy plan, start executing, replan when something goes wrong
Search Problems
Search Problems

A search problem consists of:
A state space
For each state, a set Actions(s) of allowable actions
A transition model Result(s,a)
A step cost function c(s,a,s’)
A start state and a goal test

A solution is a sequence of actions (a plan) which transforms the start state to a goal state
Search Problems Are Models
 
Example: Travelling in Romania
 
State space:
Cities
Actions:
Go to adjacent city
Transition model
Result(A, Go(B)) = B
Step cost
Distance along road link
Start state:
Arad
Goal test:
Is state == Bucharest?
Solution?
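
To make these pieces concrete, here is a rough Python sketch of the Romania problem as data. The function and variable names are illustrative, and only a handful of road links (with the usual AIMA distances) are included.

ROADS = {
    ('Arad', 'Zerind'): 75, ('Arad', 'Sibiu'): 140, ('Arad', 'Timisoara'): 118,
    ('Sibiu', 'Fagaras'): 99, ('Sibiu', 'Rimnicu Vilcea'): 80,
    ('Fagaras', 'Bucharest'): 211,
    ('Rimnicu Vilcea', 'Pitesti'): 97, ('Pitesti', 'Bucharest'): 101,
}
ROADS.update({(b, a): d for (a, b), d in list(ROADS.items())})  # roads go both ways

START = 'Arad'

def actions(state):
    # Actions(s): go to any adjacent city (the action is named by the destination)
    return [b for (a, b) in ROADS if a == state]

def result(state, action):
    # Transition model: Result(A, Go(B)) = B
    return action

def step_cost(state, action, next_state):
    # c(s, a, s'): distance along the road link
    return ROADS[(state, next_state)]

def is_goal(state):
    return state == 'Bucharest'

With this model, ['Sibiu', 'Fagaras', 'Bucharest'] is one valid plan from Arad (cost 450), and ['Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'] is a cheaper one (cost 418).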
 
What’s in a State Space?
 
Problem: Pathing
State representation: (x,y) location
Actions: NSEW
Transition model: update location
Goal test: is (x,y)=END
 
Problem: Eat-All-Dots
State representation: {(x,y), dot booleans}
Actions: NSEW
Transition model: update location and
possibly a dot boolean
Goal test: dots all false
The real world state includes every last detail of the environment
A search state abstracts away details not needed to solve the problem
State Space Sizes?

World state:
Agent positions: 120
Food count: 30
Ghost positions: 12
Agent facing: NSEW

How many
World states? 120 x (2^30) x (12^2) x 4
States for pathing? 120
States for eat-all-dots? 120 x (2^30)
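
As a quick sanity check on those counts (pure arithmetic, using the numbers from the slide):

agent_positions = 120      # possible (x, y) locations
food_subsets    = 2 ** 30  # each of 30 dots eaten or not
ghost_configs   = 12 ** 2  # two ghosts, 12 positions each
facings         = 4        # N, S, E, W

print(agent_positions * food_subsets * ghost_configs * facings)  # world states, ~7.4e13
print(agent_positions)                                           # pathing states: 120
print(agent_positions * food_subsets)                            # eat-all-dots, ~1.3e11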
Safe Passage
 
Problem: eat all dots while keeping the ghosts perma-scared
What does the state representation have to specify?
(agent position, dot booleans, power pellet booleans, remaining scared time)
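
One way to encode that state in Python is sketched below; the field names are illustrative. Making the state frozen (immutable and hashable) matters later, when states go into an explored set.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class SafePassageState:
    position: Tuple[int, int]      # agent (x, y)
    dots: Tuple[bool, ...]         # one boolean per dot still present
    pellets: Tuple[bool, ...]      # one boolean per power pellet still present
    scared_time_left: int          # remaining time the ghosts stay scared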
State Space Graphs and Search Trees
State Space Graphs
 
State space graph: A mathematical
representation of a search problem
Nodes are (abstracted) world configurations
Arcs represent transitions resulting from actions
The goal test is a set of goal nodes (maybe only one)
 
In a state space graph, each state occurs only
once!
 
We can rarely build this full graph in memory (it’s
too big), but it’s a useful idea
 
More Examples
More Examples
State Space Graphs vs. Search Trees
Consider this 4-state graph: S, a, b, G

How big is its search tree (from S)?

Important: Lots of repeated structure in the search tree!
 
Tree Search vs Graph Search
 
function TREE_SEARCH(problem) returns a solution, or failure
    initialize the frontier as a specific work list (stack, queue, priority queue)
    add initial state of problem to frontier
    loop do
        if the frontier is empty then
            return failure
        choose a node and remove it from the frontier
        if the node contains a goal state then
            return the corresponding solution
        for each resulting child from node
            add child to the frontier
function GRAPH_SEARCH(problem) returns a solution, or failure
    initialize the explored set to be empty
    initialize the frontier as a specific work list (stack, queue, priority queue)
    add initial state of problem to frontier
    loop do
        if the frontier is empty then
            return failure
        choose a node and remove it from the frontier
        if the node contains a goal state then
            return the corresponding solution
        add the node state to the explored set
        for each resulting child from node
            if the child state is not already in the frontier or explored set then
                add child to the frontier
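
The pseudocode above translates almost line for line into Python. The sketch below assumes a problem object with start, is_goal(state), and successors(state) returning (action, next_state) pairs; those names are placeholders, not a fixed API. It also uses a common simplification of the membership test: duplicates may enter the frontier and are skipped when popped.

from collections import deque

def graph_search(problem, use_stack=False):
    explored = set()
    frontier = deque([(problem.start, [])])    # (state, path of actions so far)
    while frontier:
        # LIFO pop gives DFS, FIFO pop gives BFS
        state, path = frontier.pop() if use_stack else frontier.popleft()
        if problem.is_goal(state):
            return path                        # the corresponding solution
        if state in explored:
            continue                           # stale duplicate entry
        explored.add(state)
        for action, child in problem.successors(state):
            if child not in explored:
                frontier.append((child, path + [action]))
    return None                                # failure

With this skeleton, graph_search(problem) behaves like BFS and graph_search(problem, use_stack=True) behaves like DFS.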
Piazza Poll
What is the relationship between these sets of states after each loop iteration in GRAPH_SEARCH?
(Loop invariants!!!)
Piazza Poll
function GRAPH-SEARCH(problem) returns a solution, or failure
    initialize the explored set to be empty
    initialize the frontier as a specific work list (stack, queue, priority queue)
    add initial state of problem to frontier
    loop do
        if the frontier is empty then
            return failure
        choose a node and remove it from the frontier
        if the node contains a goal state then
            return the corresponding solution
        add the node state to the explored set
        for each resulting child from node
            if the child state is not already in the frontier or explored set then
                add child to the frontier
Graph Search
This graph search algorithm overlays a tree on a graph
The frontier states separate the explored states from the never seen states
Images: AIMA, Figure 3.8, 3.9
BFS vs DFS
Piazza Poll
Is the following demo using BFS or DFS?
[Demo: dfs/bfs maze water (L2D6)]
A Note on Implementation
Nodes have state, parent, action, path-cost

A child of node by action a has
    state     = result(node.state, a)
    parent    = node
    action    = a
    path-cost = node.path_cost + step_cost(node.state, a, self.state)

Extract solution by tracing back parent pointers, collecting actions
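
A minimal Python version of that bookkeeping, assuming a problem object with result(s, a) and step_cost(s, a, s') as in the earlier slides (again, illustrative names):

class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.path_cost = path_cost

def child_node(problem, node, action):
    state = problem.result(node.state, action)
    cost = node.path_cost + problem.step_cost(node.state, action, state)
    return Node(state, parent=node, action=action, path_cost=cost)

def extract_solution(node):
    # Trace back parent pointers, collecting actions; reverse to get start-to-goal order
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))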
Walk-through DFS Graph Search
BFS vs DFS
When will BFS outperform DFS?
When will DFS outperform BFS?
Search Algorithm Properties
 
Search Algorithm Properties

Complete: Guaranteed to find a solution if one exists?
Optimal: Guaranteed to find the least cost path?
Time complexity?
Space complexity?

Cartoon of search tree:
b is the branching factor
m is the maximum depth
solutions at various depths

Number of nodes in entire tree?
1 + b + b^2 + … + b^m = O(b^m)

[Search-tree cartoon: 1 node, b nodes, b^2 nodes, …, b^m nodes; m tiers]
Piazza Poll
Are these the properties for BFS or DFS?
Takes O(b^m) time
Uses O(bm) space on frontier
Complete with graph search
Not optimal unless all goals are in the same level (and the same step cost everywhere)
Depth-First Search (DFS) Properties

What nodes does DFS expand?
Some left prefix of the tree. Could process the whole tree!
If m is finite, takes time O(b^m)

How much space does the frontier take?
Only has siblings on path to root, so O(bm)

Is it complete?
m could be infinite, so only if we prevent cycles (graph search)

Is it optimal?
No, it finds the “leftmost” solution, regardless of depth or cost
Breadth-First Search (BFS) Properties

What nodes does BFS expand?
Processes all nodes above shallowest solution
Let depth of shallowest solution be s
Search takes time O(b^s)

How much space does the frontier take?
Has roughly the last tier, so O(b^s)

Is it complete?
s must be finite if a solution exists, so yes!

Is it optimal?
Only if costs are all the same (more on costs later)

[Search-tree cartoon: 1 node, b nodes, b^2 nodes, …; s tiers with b^s nodes at the shallowest-solution level; b^m nodes at the bottom]
Iterative Deepening

Idea: get DFS’s space advantage with BFS’s time / shallow-solution advantages
Run a DFS with depth limit 1.  If no solution…
Run a DFS with depth limit 2.  If no solution…
Run a DFS with depth limit 3.  …

Isn’t that wastefully redundant?
Generally most work happens in the lowest level searched, so not so bad!
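
A short sketch of the idea, reusing the TreeNode interface from the warm-up (children(), is_goal()); the inner depth-limited search is written recursively here for brevity, and the max_depth cap is an arbitrary choice for the sketch.

def depth_limited_dfs(node, limit):
    if node.is_goal():
        return node
    if limit == 0:
        return None                    # hit the depth limit, give up on this branch
    for child in node.children():
        found = depth_limited_dfs(child, limit - 1)
        if found is not None:
            return found
    return None

def iterative_deepening(start, max_depth=50):
    # Run DFS with depth limit 1, then 2, then 3, ... until a goal turns up
    for limit in range(1, max_depth + 1):
        found = depth_limited_dfs(start, limit)
        if found is not None:
            return found
    return None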
Finding a Least-Cost Path
Depth-First (Tree) Search
Strategy: expand a deepest node first
Implementation: Frontier is a LIFO stack

Breadth-First (Tree) Search
Strategy: expand a shallowest node first
Implementation: Frontier is a FIFO queue

Uniform Cost (Tree) Search
Strategy: expand a cheapest node first
Implementation: Frontier is a priority queue (priority: cumulative cost)
[Figure: example search tree with cumulative-cost labels on nodes and cost contours]
Uniform Cost Search
function GRAPH_SEARCH(problem) returns a solution, or failure
    initialize the explored set to be empty
    initialize the frontier as a specific work list (stack, queue, priority queue)
    add initial state of problem to frontier
    loop do
        if the frontier is empty then
            return failure
        choose a node and remove it from the frontier
        if the node contains a goal state then
            return the corresponding solution
        add the node state to the explored set
        for each resulting child from node
            if the child state is not already in the frontier or explored set then
                add child to the frontier

function UNIFORM-COST-SEARCH(problem) returns a solution, or failure
    initialize the explored set to be empty
    initialize the frontier as a priority queue using node path_cost as the priority
    add initial state of problem to frontier with path_cost = 0
    loop do
        if the frontier is empty then
            return failure
        choose a node and remove it from the frontier
        if the node contains a goal state then
            return the corresponding solution
        add the node state to the explored set
        for each resulting child from node
            if the child state is not already in the frontier or explored set then
                add child to the frontier
            else if the child is already in the frontier with higher path_cost then
                replace that frontier node with child
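
A Python sketch of UNIFORM-COST-SEARCH using heapq. The problem object is assumed to expose start, is_goal(state), and successors(state) returning (action, next_state, step_cost) triples; these names are placeholders. Because heapq has no decrease-key operation, this version pushes a new, cheaper entry instead of replacing the old frontier node, and skips stale entries when they are popped.

import heapq
import itertools

def uniform_cost_search(problem):
    tie = itertools.count()            # tie-breaker so the heap never compares states
    frontier = [(0, next(tie), problem.start, [])]   # (path_cost, tie, state, actions)
    best_cost = {problem.start: 0}
    explored = set()
    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        if problem.is_goal(state):
            return path                # goal test on removal => cheapest path found
        if state in explored:
            continue                   # stale entry
        explored.add(state)
        for action, child, step in problem.successors(state):
            new_cost = cost + step
            if child not in explored and new_cost < best_cost.get(child, float('inf')):
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), child, path + [action]))
    return None                        # failure: frontier is empty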
Walk-through UCS
Uniform Cost Search (UCS) Properties

What nodes does UCS expand?
Processes all nodes with cost less than cheapest solution!
If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε
Takes time O(b^(C*/ε)) (exponential in effective depth)

How much space does the frontier take?
Has roughly the last tier, so O(b^(C*/ε))

Is it complete?
Assuming best solution has a finite cost and minimum arc cost is positive, yes!

Is it optimal?
Yes!  (Proof next lecture via A*)

[Figure: cost contours c ≤ 1, c ≤ 2, c ≤ 3; roughly C*/ε “tiers” of the search tree]
Uniform Cost Issues

Remember:
UCS explores increasing cost contours

The good:
UCS is complete and optimal!

The bad:
Explores options in every “direction”
No information about goal location

We’ll fix that soon!
[Figure: circular cost contours expanding from Start toward Goal]