Constraint Satisfaction Problems in AI

CHAPTER 6
OLIVER SCHULTE
SUMMER 2011
Constraint Satisfaction Problems
Midterm Announcements
Please bring ID to the exam.
You can bring a cheat sheet.
Chapters 1-6 covered, see Lecture Schedule.
CSP definition is covered, also arc consistency.
Not covered: CSP search methods (topic today).
No calculators, smartphones, textbook, notes.
Be on time.
Read the instructions ahead of time – posted on the web.
Talk Announcement
CS SEMINAR
---------------
June 23rd, 2:30 p.m., TASC1 9204W
Generalized Planning: Theory and Practice
Hector J. Levesque
ABSTRACT:
While most of the research in automated planning within AI has focussed on
methods for finding sequential (or straight-line) plans, there is growing
interest in a more general account, where the plans may need to contain
branches and loops.  In this talk, I will present some recent work in this
area, including some new theoretical results, as well as a description of a
new planning system called FSAPLANNER and some of the generalized planning
problems it can solve.
Outline
CSP examples
Backtracking search for CSPs
Problem structure and problem decomposition
Local search for CSPs
Fields Institute Summer Workshops
Environment Type Discussed In this Lecture
CMPT 310 - Blind Search
Static Environment: Fully Observable, Deterministic.

                                  Sequential?  Discrete?
Planning, heuristic search            yes         yes
Control, cybernetics                  yes         no
Vector Search: Constraint
Satisfaction                          no          yes
Continuous Function Optimization      no          no
Agent Architecture Discussed In this Lecture
A model is a structured representation of the world.
Graph-Based Search: State is a black box, no internal structure, atomic.
Factored Representation: State is a list or vector of facts.
CSP: a fact is of the form “Variable = value”.
Constraint satisfaction problems (CSPs)
CSP:
state is defined by variables Xi with values from domain Di
goal test is a set of constraints specifying allowable combinations of values for subsets of variables.
Allows useful general-purpose algorithms with more power than standard search algorithms.
Power close to simulating Turing Machines.
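As a concrete illustration of this definition, a map-coloring CSP (like the Australia example on the next slides) can be written down directly as variables, domains, and constraints. The following Python sketch is illustrative only; the `consistent` helper and the encoding are my own, not code from the lecture:

```python
# Australia map-coloring as a CSP: variables, domains, binary constraints.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}

# Neighboring regions must receive different colors.
neighbors = {
    ("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
    ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V"),
}

def consistent(assignment):
    """True if no two assigned neighbors share a color (goal test on a partial assignment)."""
    return all(assignment[a] != assignment[b]
               for a, b in neighbors
               if a in assignment and b in assignment)

print(consistent({"WA": "red", "NT": "green"}))  # True
print(consistent({"WA": "red", "NT": "red"}))    # False
```

Note that Tasmania (T) has no constraints at all, which is exactly the "independent subproblem" point made later.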
Example: Map-Coloring
CSPs (continued)
An assignment is complete when every variable is mentioned.
A solution to a CSP is a complete assignment that satisfies all constraints.
Some CSPs require a solution that maximizes an objective function.
Constraints with continuous variables are common.
Linear constraints → linear programming.
Examples of Applications:
Airline schedules
Final Exam Scheduling.
Cryptography
Sudoku, cross-words.
Example: Map-Coloring contd.
 
Varieties of constraints
Unary constraints involve a single variable, e.g., SA ≠ green.
Binary constraints involve pairs of variables, e.g., SA ≠ WA.
Higher-order constraints involve 3 or more variables.
Preferences (soft constraints), e.g., red is better than green:
often representable by a cost for each variable assignment → constrained optimization problems.
Constraint graph
Binary CSP: each constraint relates at most two variables.
Constraint graph: nodes are variables, arcs show constraints.
General-purpose CSP algorithms use the graph structure to speed up search. E.g., Tasmania is an independent subproblem!
Graphs and Factored Representations
UBC AI Space CSP
Graphs for variables (concepts, facts) capture local dependencies between variables (concepts, facts).
Absence of edges = independence.
AI systems try to reason locally as much as possible.
Potential Solution to the Relevance Problem:
How does the brain retrieve relevant facts in a given situation, out of
the million facts that it knows?
Answer: Direct links represent direct relevance.
Computation in general is a local process operating on a
factored state. (What is the state of a program run?)
Problem
Consider the constraint graph on the right (nodes a, b, c, d, e).
The domain for every variable is [1,2,3,4].
There are 2 unary constraints:
- variable “a” cannot take values 3 and 4.
- variable “b” cannot take value 4.
There are 8 binary constraints stating that variables connected by an edge cannot have the same value.
Example: 4-Queens Problem
5-Queens Problem
Arc consistency
An arc X → Y is consistent if for every value x of X there is some value y consistent with x.
(Note that this is a directed property.)
Consider the state of search after WA and Q are assigned:
SA → NSW is consistent if SA=blue and NSW=red.
Arc consistency
X → Y is consistent if for every value x of X there is some value y consistent with x.
NSW → SA is consistent if NSW=red and SA=blue,
but for NSW=blue, SA=???
Arc consistency
Can enforce arc-consistency:
the arc can be made consistent by removing blue from NSW.
Continue to propagate constraints…
Check V → NSW: not consistent for V = red, so remove red from V.
Arc consistency
 
Continue to propagate constraints…
SA → NT is not consistent, and cannot be made consistent.
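The propagation steps above can be automated. Here is a minimal AC-3-style sketch, assuming binary not-equal constraints (as in map coloring) and set-valued domains; the function and variable names are illustrative, not the lecture's code:

```python
from collections import deque

def revise(domains, x, y):
    """Remove each value of x that has no consistent value in y
    (not-equal constraint, as in map coloring)."""
    removed = False
    for vx in list(domains[x]):
        if all(vx == vy for vy in domains[y]):  # no support for vx in y's domain
            domains[x].remove(vx)
            removed = True
    return removed

def ac3(domains, arcs):
    """AC-3: propagate until every arc is consistent.
    Returns False if some domain is wiped out (no solution possible)."""
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y):
            if not domains[x]:
                return False
            # x lost a value, so re-check every arc pointing into x.
            queue.extend((z, w) for (z, w) in arcs if w == x and z != y)
    return True

# Tiny example mirroring the slide: NSW must differ from SA, SA fixed to blue.
doms = {"NSW": {"red", "blue"}, "SA": {"blue"}}
arcs = [("NSW", "SA"), ("SA", "NSW")]
ac3(doms, arcs)
print(doms["NSW"])  # {'red'}: blue was removed, as on the slide
```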
Standard search formulation (incremental)
Let’s formulate a state graph problem, then take
advantage of special structure later.
States are defined by the values assigned so far
Initial state: the empty assignment, { }
Successor function: assign a value to an unassigned variable that does not conflict with the current assignment.
Fail if no legal assignments (not fixable!).
Goal test: the current assignment is complete.
This is the same for all CSPs!
Standard search formulation (incremental)
Can we use breadth-first search?
Branching factor at the top level? nd: any of the d values can be assigned to any of the n variables.
Next level? (n-1)d.
We generate n!·d^n leaves even though there are only d^n complete assignments. Why?
Commutativity: the order of application of any given set of actions has no effect on the outcome.
Backtracking search
Variable assignments are commutative, i.e.,
[WA=red then NT=green] is the same as [NT=green then WA=red].
Only need to consider assignments to a single variable at each node:
b = d and there are d^n leaves.
Depth-first search for CSPs with single-variable assignments is called backtracking search.
Is this uninformed or informed?
Backtracking search is the basic uninformed algorithm for CSPs.
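Backtracking search as just described can be sketched in a few lines. This is an illustrative minimal version (fixed variable order, no heuristics yet), not the lecture's own code; the `consistent` callback is assumed to test a partial assignment:

```python
def backtracking_search(variables, domains, consistent, assignment=None):
    """Depth-first backtracking: one variable assigned per node."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                       # complete and consistent
    var = next(v for v in variables if v not in assignment)  # fixed order
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]                     # undo and try the next value
    return None                                 # backtrack

# Demo: a triangle of mutually different variables with 3 colors.
variables = ["A", "B", "C"]
domains = {v: ["red", "green", "blue"] for v in variables}
edges = {("A", "B"), ("B", "C"), ("A", "C")}

def consistent(assignment):
    return all(assignment[x] != assignment[y]
               for x, y in edges if x in assignment and y in assignment)

solution = backtracking_search(variables, domains, consistent)
print(solution)  # {'A': 'red', 'B': 'green', 'C': 'blue'}
```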
Backtracking example
4 Feb 2004
CS 3243 - Constraint Satisfaction
 
 
Improving backtracking efficiency
General-purpose methods can give huge gains in speed:
Which variable should be assigned next?
In what order should its values be tried?
Can we detect inevitable failure early?
Constraint Learning: Can we keep track of what search has learned?
Can we take advantage of problem structure?
Most constrained variable
Most constrained variable: choose the variable with the fewest legal values,
a.k.a. the minimum remaining values (MRV) heuristic.
Only picks a variable (not a value).
Demo for MRV
Most constraining variable
How to choose between variables with the fewest legal values?
Tie-breaker among most constrained variables.
Degree heuristic: choose the variable with the most constraints on remaining variables.
Least constraining value
Given a variable, choose the least constraining value:
the one that rules out the fewest values in the remaining variables.
Intuition: choose the “most likely” solution.
Combining these heuristics makes 1000 queens feasible.
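The three heuristics combine naturally into a variable-selection function and a value-ordering function. A hypothetical sketch, assuming `domains` holds each variable's current legal values and `neighbors_of` gives the constraint graph (both names are mine):

```python
def select_variable(unassigned, domains, neighbors_of):
    """MRV with the degree heuristic as tie-breaker:
    fewest remaining values first; among ties, most constraints on others."""
    return min(unassigned,
               key=lambda v: (len(domains[v]), -len(neighbors_of[v])))

def order_values(var, domains, neighbors_of):
    """Least-constraining value: try values that rule out the fewest
    options in neighboring domains first."""
    def ruled_out(value):
        return sum(value in domains[n] for n in neighbors_of[var])
    return sorted(domains[var], key=ruled_out)

# Tiny check: B has fewer legal values than A, so MRV picks B;
# for A, green constrains B less than red does.
domains = {"A": {"red", "green"}, "B": {"red"}}
neighbors_of = {"A": {"B"}, "B": {"A"}}
print(select_variable({"A", "B"}, domains, neighbors_of))   # B
print(order_values("A", domains, neighbors_of))             # ['green', 'red']
```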
Forward checking
Idea:
Keep track of remaining legal values for unassigned variables.
Terminate search when any variable has no legal values.
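A minimal sketch of this bookkeeping, assuming not-equal constraints and set-valued domains (the names `forward_check` and `neighbors_of` are illustrative):

```python
def forward_check(var, value, domains, neighbors_of):
    """After assigning var=value, prune value from each neighbor's domain.
    Returns the prunings (for undoing on backtrack), or None if some
    neighbor's domain is wiped out."""
    pruned = []
    for n in neighbors_of[var]:
        if value in domains[n]:
            domains[n].remove(value)
            pruned.append((n, value))
            if not domains[n]:          # dead end detected early
                return None
    return pruned

# Assigning WA=red removes red from the domains of NT and SA.
doms = {"WA": {"red"}, "NT": {"red", "green"}, "SA": {"red", "green", "blue"}}
nbrs = {"WA": {"NT", "SA"}, "NT": {"WA", "SA"}, "SA": {"WA", "NT"}}
forward_check("WA", "red", doms, nbrs)
print(sorted(doms["NT"]), sorted(doms["SA"]))  # ['green'] ['blue', 'green']
```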
Constraint propagation
NT and SA cannot both be blue!
Constraint propagation repeatedly enforces constraints locally.
Use arc-consistency.
Inference complements search.
Has to be faster than searching.
Example: 4-Queens Problem
Arc Consistency vs. Search
Arc consistency speeds up search, but is not in itself a complete search procedure.
Example: Simple Problem 2 in AIspace.
Arc consistency does not detect that the problem is insoluble.
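The incompleteness is easy to see on a small case: a triangle of mutually different variables with only two colors is arc-consistent, yet has no solution. An illustrative check (the helper name is mine, not AIspace's):

```python
from itertools import product

def arc_consistent(domains, arcs):
    """True if every directed arc x -> y is already consistent:
    each value of x has a differing supporting value in y
    (not-equal constraints assumed)."""
    return all(any(vx != vy for vy in domains[y])
               for x, y in arcs for vx in domains[x])

# Triangle A-B-C: all pairs must differ, but only two colors are available.
doms = {v: {"red", "blue"} for v in "ABC"}
arcs = [(x, y) for x in "ABC" for y in "ABC" if x != y]

print(arc_consistent(doms, arcs))  # True: every arc is consistent
# Yet exhaustive enumeration finds no solution:
print(any(a != b and b != c and a != c
          for a, b, c in product(["red", "blue"], repeat=3)))  # False
```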
Constraint propagation
Techniques like CP and FC are in effect eliminating parts of the search space.
Inference complements search (= simulation).
Constraint propagation goes further than FC by repeatedly enforcing constraints locally.
Arc-consistency (AC) is a systematic procedure for constraint propagation (Mackworth 1977, UBC).
Arc consistency checking
Can be run as preprocessing before search starts, or after each assignment during search.
AC must be run repeatedly until no inconsistency remains.
Trade-off: requires some overhead, but generally more effective than direct search;
it can eliminate large (inconsistent) parts of the state space more effectively than search can.
Need a systematic method for arc-checking:
if X loses a value, neighbors of X need to be rechecked.
Arc-consistency as message-passing
This is a propagation algorithm. It’s like sending messages to neighbors on the graph. How do we schedule these messages?
Every time a domain changes, all incoming messages need to be re-sent. Repeat until convergence: no message will change any domains.
Since we only remove values from domains when they can never be part of a solution, an empty domain means no solution is possible at all, so back out of that branch.
Example
5-queens problem in AIspace.
Apply the heuristics:
Use variables with minimum remaining values.
Use the variable with maximum degree.
Use the least constraining value.
Use arc-consistency after choosing each value.
K-consistency: bounded local checks
Arc consistency does not detect all inconsistencies:
the partial assignment {WA=red, NSW=red} is inconsistent.
Stronger forms of propagation can be defined using the notion of k-consistency.
A CSP is k-consistent if, for any set of k-1 variables and for any consistent assignment to those variables, a consistent value can always be assigned to any kth variable.
E.g. 1-consistency = node-consistency.
E.g. 2-consistency = arc-consistency.
E.g. 3-consistency = path-consistency.
Strongly k-consistent: k-consistent for all values {k, k-1, …, 2, 1}.
Trade-offs
Running stronger consistency checks…
takes more time,
but will reduce the branching factor and detect more inconsistent partial assignments.
No “free lunch”: in the worst case, n-consistency takes exponential time.
Back-tracking or back-jumping?
{Q=red, NSW=green, V=blue, T=red}
(Figure: partial search tree over color assignments, reaching a dead end “?”.)
Local search for CSPs
Use a complete-state representation:
Initial state = all variables assigned values.
Successor states = change 1 (or more) values.
For CSPs:
allow states with unsatisfied constraints (unlike backtracking);
operators reassign variable values.
Hill-climbing with n-queens is an example.
Variable selection: randomly select any conflicted variable.
Value selection: min-conflicts heuristic.
Select the new value that results in a minimum number of conflicts with the other variables.
Local Stochastic Search Demo
Min-conflicts example 1
Use of the min-conflicts heuristic in hill-climbing.
(Figure: successive board states with h = 5, h = 3, h = 1 conflicts.)
Min-conflicts example 2
A two-step solution for an 8-queens problem using the min-conflicts heuristic.
At each stage a queen is chosen for reassignment in its column.
The algorithm moves the queen to the min-conflict square, breaking ties randomly.
Local search for CSP
function MIN-CONFLICTS(csp, max_steps) returns solution or failure
  inputs: csp, a constraint satisfaction problem
          max_steps, the number of steps allowed before giving up
  current ← an initial complete assignment for csp
  for i = 1 to max_steps do
    if current is a solution for csp then return current
    var ← a randomly chosen, conflicted variable from VARIABLES[csp]
    value ← the value v for var that minimizes CONFLICTS(var, v, current, csp)
    set var = value in current
  return failure
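The MIN-CONFLICTS pseudocode can be instantiated for n-queens, with one queen per column and the row as the value. This is an illustrative sketch under those assumptions, not the lecture's demo code:

```python
import random

def min_conflicts_queens(n, max_steps=10000, seed=0):
    """Min-conflicts for n-queens: board[c] is the row of the queen in column c."""
    rng = random.Random(seed)

    def conflicts(board, col, row):
        # Number of other queens attacking (col, row): same row or same diagonal.
        return sum(1 for c in range(n) if c != col and
                   (board[c] == row or abs(board[c] - row) == abs(c - col)))

    board = [rng.randrange(n) for _ in range(n)]     # initial complete assignment
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
        if not conflicted:
            return board                             # current is a solution
        col = rng.choice(conflicted)                 # random conflicted variable
        scores = [conflicts(board, col, r) for r in range(n)]
        best = min(scores)
        board[col] = rng.choice([r for r in range(n)
                                 if scores[r] == best])  # min-conflict row, random ties
    return None                                      # failure

solution = min_conflicts_queens(8)
print(solution)
```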
Advantages of local search
Local search can be particularly useful in an online setting.
Airline schedule example: mechanical problems require that a plane is taken out of service.
Can locally search for another “close” solution in state-space.
Much better (and faster) in practice than finding an entirely new schedule.
The runtime of min-conflicts is roughly independent of problem size.
Can solve the million-queens problem in roughly 50 steps.
Why? n-queens is easy for local search because of the relatively high density of solutions in state-space.
Graph structure and problem complexity
Divide-and-conquer: solving disconnected subproblems.
Suppose each subproblem has c variables out of a total of n.
Worst-case solution cost is O((n/c) · d^c), i.e. linear in n,
instead of O(d^n), exponential in n.
E.g. n = 80, c = 20, d = 2:
2^80 = 4 billion years at 1 million nodes/sec;
4 · 2^20 = 0.4 second at 1 million nodes/sec.
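The gap between d^n and (n/c)·d^c is easy to check directly; a quick sketch using the slide's numbers:

```python
# Slide's numbers: n = 80 variables, c = 20 per subproblem, d = 2 values.
n, c, d = 80, 20, 2
whole = d ** n                    # searching the undecomposed problem
decomposed = (n // c) * d ** c    # four independent 20-variable subproblems

print(decomposed)                 # 4194304 nodes: well under a minute at 1M nodes/sec
print(whole // decomposed)        # 288230376151711744 (= 2**58) times larger
```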
Tree-structured CSPs
Theorem: if a constraint graph has no loops, then the CSP can be solved in O(n d^2) time,
linear in the number of variables!
Compare with the general CSP, where the worst case is O(d^n).
Algorithm for Solving Tree-structured CSPs
Choose some variable as root; order variables from root to leaves such that every node’s parent precedes it in the ordering.
Label variables from X1 to Xn. Every variable now has 1 parent.
Backward Pass
For j from n down to 2, apply arc consistency to the arc [Parent(Xj), Xj].
Remove values from Parent(Xj) if needed to make the graph directed-arc-consistent.
Forward Pass
For j from 1 to n, assign Xj consistently with Parent(Xj).
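The two passes can be sketched compactly. The following illustrative version assumes not-equal edge constraints (as in map coloring) and uses hypothetical names for the inputs: `order` lists variables root-first and `parent` maps each non-root variable to its parent.

```python
def solve_tree_csp(order, parent, domains):
    """Two-pass tree-CSP solver sketch for not-equal constraints."""
    # Backward pass: for j = n down to 2, keep only parent values that
    # have a supporting (differing) value in the child's domain.
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = {v for v in domains[p]
                      if any(v != w for w in domains[child])}
        if not domains[p]:
            return None                     # domain wiped out: no solution
    # Forward pass: assign root-first, each child consistent with its parent.
    assignment = {order[0]: next(iter(domains[order[0]]))}
    for child in order[1:]:
        assignment[child] = next(v for v in domains[child]
                                 if v != assignment[parent[child]])
    return assignment

# Chain A - B - C, with B already restricted to {green}.
order = ["A", "B", "C"]
parent = {"B": "A", "C": "B"}
domains = {"A": {"red", "green"}, "B": {"green"}, "C": {"green", "blue"}}
print(solve_tree_csp(order, parent, domains))
# {'A': 'red', 'B': 'green', 'C': 'blue'}
```

The backward pass prunes green from A (it has no support in B), so the forward pass never needs to backtrack: this is exactly why the algorithm is linear in n.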
Tree CSP Example
(Figures: a tree-structured constraint graph with domains over {R, G, B}; the backward pass (constraint propagation) prunes the domains, then the forward pass assigns values root-first.)
Tree CSP complexity
Backward pass: n arc checks, each with complexity at worst d^2.
Forward pass: n variable assignments, O(nd).
Overall complexity is O(n d^2).
The algorithm works because if the backward pass succeeds, then every variable by definition has a legal assignment in the forward pass.
What about non-tree CSPs?
The general idea is to convert the graph to a tree.
2 general approaches:
1. Assign values to specific variables (cycle cutset method).
   Tries to exploit context-specific independence.
2. Construct a tree decomposition of the graph:
   connected subproblems (subgraphs) become nodes in a tree structure.
Cycle-cutset conditioning
Choose a subset S of variables from the graph so that the graph without S is a tree.
S = “cycle cutset”.
For each possible consistent assignment for S:
Remove any values from the remaining variables that are inconsistent with S.
Use the tree-structured CSP algorithm to solve the remaining tree structure.
If it has a solution, return it along with S.
If not, continue to try other assignments for S.
Finding the optimal cutset
If c (the cutset size) is small, this technique works very well.
However, finding the smallest cycle cutset is NP-hard,
but there are good approximation algorithms.
Tree Decompositions
(Figure: a tree decomposition of the map-coloring graph; each node is a subproblem whose super-variable domain lists its consistent colorings, e.g. the permutations of red, green, blue.)
Rules for a Tree Decomposition
Every variable appears in at least one of the
subproblems.
If two variables are connected in the original
problem, they must appear together (with the
constraint) in at least one subproblem.
If a variable appears in two subproblems, it must
appear in each node on the path between them.
Tree Decomposition Algorithm
View each subproblem as a “super-variable”.
Domain = set of solutions for the subproblem, obtained by running a CSP solver on each subproblem.
E.g., 6 solutions for 3 fully connected variables in the map problem.
(Maximum subproblem size - 1 = treewidth of the constraint graph.)
Now use the tree CSP algorithm to solve the constraints connecting the subproblems:
declare a subproblem the root node, create the tree,
then run the backward and forward passes.
An example of the “divide and conquer” strategy.
Tree Decomposition
Every graph has a tree decomposition, not just constraint graphs.
A tree decomposition shows the optimal divide-and-conquer strategy
for CSPs, optimization, search, and inference.
Typical result: if the treewidth of a problem graph is constant, then the search/optimization/inference problem is solvable by divide and conquer.
Treewidth in General
Summary
CSPs are a special kind of problem: states defined by values of a fixed set of variables, goal test defined by constraints on variable values: a factored representation.
Represents many practical problems (Microsoft Research, University of Essex).
Backtracking = depth-first search with one variable assigned per node.
Heuristics: variable ordering and value selection heuristics help significantly.
Constraint propagation does additional work to constrain values and detect inconsistencies;
works effectively when combined with heuristics.
Iterative min-conflicts is often effective in practice.
Graph structure of CSPs determines problem complexity,
e.g., tree-structured CSPs can be solved in linear time.
Slide Notes
Show the pits/breeze CSP as a model for the logic class.
Discuss the Fields summer school.
Try the CSP demo from the book website.