Generation of Heuristics & A* Variations
These notes cover the generation of heuristics and variations of the A* search algorithm in Artificial Intelligence: constructing better heuristics, deriving heuristics by problem relaxation, admissibility and consistency, heuristics from formal problem specifications, combining multiple heuristics, and A* variations such as weighted A*, beam search, IDA*, and RBFS. Examples are given throughout to illustrate the concepts.
Lecture notes for Principles of Artificial Intelligence (COMS 4720/5720)
Yan-Bin Jia, Iowa State University

Generation of Heuristics & A* Variations

Outline
I. Generating heuristics
II. Variations of A* search

* Figures are from the textbook site (or drawn by the instructor) unless the source is specifically cited.
I. Better Heuristics
Given two admissible heuristic functions h₁ and h₂, we say that h₂ dominates h₁ if h₂(n) ≥ h₁(n) at every node n.
If h₂ dominates h₁, then A* using h₂ will not expand more nodes than A* using h₁.
Reason: A* expands every node n with f(n) = g(n) + h(n) < C* (the optimal solution cost). Suppose a node n is expanded by A* with h₂, so that g(n) + h₂(n) < C*. Since h₁(n) ≤ h₂(n), we have g(n) + h₁(n) ≤ g(n) + h₂(n) < C*, so n will also be expanded by A* with h₁. Moreover, h₁ may cause other nodes to be expanded as well.
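The following minimal Python sketch (not from the slides) spot-checks pointwise dominance of two heuristics over a sample of states; the two heuristics and the integer "states" below are purely illustrative.

def dominates(h2, h1, states):
    """Return True if h2(s) >= h1(s) for every sampled state s."""
    return all(h2(s) >= h1(s) for s in states)

h1 = lambda s: s // 2      # weaker (smaller) admissible estimate, by assumption
h2 = lambda s: s           # stronger admissible estimate, by assumption
print(dominates(h2, h1, range(10)))   # True: h2 dominates h1 on this sample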
Generating Heuristics by Relaxation
An admissible heuristic can be derived from the exact solution cost of a relaxed problem, i.e., one with fewer restrictions on the actions.
Relaxation 1: A tile can move anywhere.
  h₁ = number of misplaced tiles. h₁ gives the length of the shortest solution to this relaxed problem.
Relaxation 2: A tile can move one square in any direction, even onto an occupied square.
  h₂ = sum of the Manhattan distances of the tiles from their goal positions. h₂ gives the length of the shortest solution to this relaxed problem.
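A short Python sketch of these two 8-puzzle heuristics, assuming a state is a tuple of nine entries in row-major order with 0 for the blank (this representation and the goal ordering are assumptions of the sketch, not the slides'):

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal ordering, blank last

def h1_misplaced(state, goal=GOAL):
    """Number of tiles not in their goal position (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2_manhattan(state, goal=GOAL):
    """Sum of Manhattan distances of the tiles from their goal positions."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
print(h1_misplaced(start), h2_manhattan(start))   # 6 14 for this goal ordering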
Admissibility & Consistency
A solution to the original problem is also a solution to the relaxed problem, but an optimal solution to the relaxed problem may be shorter than an optimal solution to the original problem. Hence the cost of an optimal solution to the relaxed problem is an admissible heuristic for the original problem.
Moreover, such a heuristic is an exact cost for the relaxed problem, so it must satisfy the triangle inequality; the heuristic is therefore consistent.
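A tiny sketch (illustrative, not from the slides) of the consistency condition h(n) ≤ c(n, n′) + h(n′) as a check over an explicit toy graph; the graph, heuristic values, and successors interface are all assumptions of the sketch.

def is_consistent(h, states, successors):
    """Check h(s) <= cost + h(t) on every edge (s, t) with the given step cost."""
    return all(h(s) <= cost + h(t)
               for s in states
               for t, cost in successors(s))

# Toy graph with edge costs, plus heuristic values to check against.
edges = {"A": [("B", 2), ("C", 5)], "B": [("C", 2)], "C": []}
h_values = {"A": 4, "B": 2, "C": 0}
print(is_consistent(h_values.get, edges, lambda s: edges[s]))   # True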
Heuristics from Formal Specification
Formal specification of a move in the 8-puzzle: a tile can move from square A to square B if A is adjacent to B and B is blank.
Relaxations obtained by removing one or both conditions:
(a) A tile can move from square A to square B if A is adjacent to B.  →  h₂ (Manhattan distance)
(b) A tile can move from square A to square B if B is blank.
(c) A tile can move from square A to square B.  →  h₁ (misplaced tiles)
The relaxed rules allow decomposition of the problem into 8 independent subproblems, one for moving each tile.
The relaxed problems should be solvable without search; otherwise, evaluation of the corresponding heuristic will be expensive.
The program ABSOLVER generates heuristics automatically from problem definitions, including one for the 8-puzzle better than any preexisting heuristic and the first useful heuristic for the Rubik's Cube puzzle.
Multiple Heuristics Available
Admissible heuristics h₁, h₂, …, hₘ are available, but none is clearly better than the others.
Use a composite heuristic: h(n) = max{ h₁(n), h₂(n), …, hₘ(n) }.
  h is admissible because h₁, h₂, …, hₘ are.
  h dominates h₁, h₂, …, hₘ.
  h takes longer to compute: at least O(m) per evaluation.
Alternatively, randomly select one heuristic at each evaluation.
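An illustrative sketch of the composite maximum; the toy component heuristics below stand in for whatever h₁, …, hₘ are available.

def max_heuristic(*heuristics):
    """h(n) = max_i h_i(n); admissible whenever every h_i is admissible."""
    return lambda state: max(h(state) for h in heuristics)

# Toy component heuristics on integer states (admissible by assumption).
h_a = lambda s: s // 3
h_b = lambda s: s // 2
h = max_heuristic(h_a, h_b)
print(h(10))   # 5 -- the larger of the two estimates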
II. Satisficing Search
A* explores a lot of nodes because of the equal weighting of g and h in f = g + h, which often keeps it from heading directly toward the goal.
It can explore fewer nodes if we are willing to accept satisficing (suboptimal but "good enough") solutions, for instance by using an inadmissible heuristic.
Detour index: a multiplier applied to the straight-line distance. E.g., a detour index of 1.3 says that a good estimate of the road distance between two cities 10 miles apart (straight-line) is 13 miles.
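A one-function sketch of a detour-index heuristic; the coordinates and the multiplier below are illustrative assumptions.

import math

def detour_heuristic(pos, goal_pos, detour_index=1.3):
    """Inadmissible estimate: detour index times the straight-line distance."""
    return detour_index * math.dist(pos, goal_pos)

print(detour_heuristic((0.0, 0.0), (6.0, 8.0)))   # straight-line 10 miles -> 13.0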
Weighted A*
Evaluation function: f(n) = g(n) + W·h(n) for some W > 1.
Figure: A* search vs. weighted A* search (W = 2) on the same grid. Gray bars: obstacles; dots: reached states.
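A minimal weighted A* sketch over an explicit graph; the successors/heuristic interface and the toy graph in the usage lines are assumptions of the sketch, not the slides' notation. With W = 1 it reduces to ordinary A*.

import heapq, itertools

def weighted_a_star(start, goal, successors, h, W=2.0):
    tie = itertools.count()                    # tie-breaker so states are never compared
    frontier = [(W * h(start), next(tie), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, _, g, s, path = heapq.heappop(frontier)
        if g > best_g.get(s, float("inf")):
            continue                           # stale frontier entry
        if s == goal:
            return path, g                     # cost is at most W times the optimal cost
        for t, cost in successors(s):
            g2 = g + cost
            if g2 < best_g.get(t, float("inf")):
                best_g[t] = g2
                heapq.heappush(frontier, (g2 + W * h(t), next(tie), g2, t, path + [t]))
    return None, float("inf")

# Usage on a toy graph:
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("G", 5)], "C": [("G", 1)], "G": []}
h = {"A": 3, "B": 2, "C": 1, "G": 0}.get
print(weighted_a_star("A", "G", lambda s: graph[s], h, W=2.0))   # (['A', 'B', 'C', 'G'], 3.0)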
Weighted A* As a Generalization
Weighted A* finds a solution with cost between C* and W·C*. In practice, the cost is usually much closer to C*.
  A* search:                g(n) + h(n)       (W = 1)
  Uniform-cost search:      g(n)              (W = 0)
  Greedy best-first search: h(n)              (W = ∞)
  Weighted A* search:       g(n) + W·h(n)     (1 < W < ∞)
Memory-Bounded Search
Beam search keeps only the k nodes with the best f-scores.
  Less memory and faster execution.
  Incomplete and suboptimal.
Example (figure): beam search with beam width k = 2; node scores shown on the search tree.
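A minimal level-by-level beam search sketch (successors, is_goal, and score are placeholder names assumed by the sketch); pruning to the k best nodes is exactly what makes the method incomplete and suboptimal.

def beam_search(start, is_goal, successors, score, k=2, max_depth=50):
    """Expand a whole level, then keep only the k best-scoring nodes."""
    beam = [start]
    for _ in range(max_depth):
        candidates = [t for s in beam for t in successors(s)]
        for t in candidates:
            if is_goal(t):
                return t
        if not candidates:
            return None                            # beam died out without a goal
        beam = sorted(candidates, key=score)[:k]   # prune to the k best nodes
    return None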
Iterative-Deepening A* Search (IDA*)
The cutoff at each iteration is the f-cost rather than the depth.
The new cutoff value for the current iteration is
  new cutoff = min{ f(n) | node n was generated in the previous iteration and f(n) > old cutoff }.
Example (figure): iteration 1 (cutoff = 5), iteration 2 (cutoff = 6), iteration 3 (cutoff = 9); node f-costs shown on the search tree.
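An IDA* sketch in the same style as the earlier code (successors and h are assumed interfaces); each depth-first pass returns either a solution or the smallest f-value that exceeded the cutoff, which becomes the next cutoff.

def ida_star(start, goal, successors, h):
    def dfs(s, g, cutoff, path):
        f = g + h(s)
        if f > cutoff:
            return None, f                     # smallest overflow f on this branch
        if s == goal:
            return path, f
        next_cutoff = float("inf")
        for t, cost in successors(s):
            if t in path:                      # avoid cycles along the current path
                continue
            found, value = dfs(t, g + cost, cutoff, path + [t])
            if found is not None:
                return found, value
            next_cutoff = min(next_cutoff, value)
        return None, next_cutoff

    cutoff = h(start)
    while cutoff < float("inf"):
        found, cutoff = dfs(start, 0, cutoff, [start])
        if found is not None:
            return found
    return None                                # no solution within any finite cutoff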
IDA* (contd)
If the f-cost of every path is an integer, the cutoff increases by at least 1 per iteration, so IDA* makes steady progress towards the goal (at most about C* iterations).
In the worst case, e.g., when every node has a distinct f-cost, the number of iterations can equal the number of states.
Recursive Best-First Search (RBFS)
flimit(n): the f-value of the best alternative path available from any ancestor of the node n.
RBFS behaves like best-first search as long as, at the currently visited node n, f(n) ≤ flimit(n).
Otherwise (i.e., f(n) > flimit(n)), the search unwinds back to the alternative path; as it unwinds, it sets the f-value of every node m on the unwound path to the best f-value of m's children (the backed-up value).
Idea: remember the f-value of the best leaf in the forgotten subtree in case it is worth re-expanding at some later time.
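A compact RBFS sketch following the standard recursive formulation (successors and h as in the earlier sketches; the interface is an assumption, and like the textbook pseudocode the sketch does not guard against cycles).

def rbfs(start, goal, successors, h):
    def recurse(state, g, f_stored, f_limit, path):
        if state == goal:
            return path, f_stored
        children = []
        for t, cost in successors(state):
            g2 = g + cost
            # Inherit the parent's backed-up value so re-expansion resumes correctly.
            children.append([max(g2 + h(t), f_stored), g2, t])
        if not children:
            return None, float("inf")
        while True:
            children.sort(key=lambda c: c[0])
            best = children[0]
            if best[0] > f_limit:
                return None, best[0]           # back up the best leaf's f-value
            alternative = children[1][0] if len(children) > 1 else float("inf")
            result, best[0] = recurse(best[2], best[1], best[0],
                                      min(f_limit, alternative), path + [best[2]])
            if result is not None:
                return result, best[0]

    solution, _ = recurse(start, 0, h(start), float("inf"), [start])
    return solution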
RBFS Example
Figure: RBFS on the Romania route-finding problem. The path through Rimnicu Vilcea is followed until its best leaf, Pitesti, has f = 417, exceeding flimit = 415 (the f-value of the alternative Fagaras: 140 + 99 + 176 = 415); the value 417 is backed up to Rimnicu Vilcea and the recursion unwinds.
RBFS Example (contd)
Figure: Fagaras is then expanded with flimit = 417; its best leaf, Bucharest, has a backed-up f-value worse than 417, so Fagaras's stored value is updated and the search switches back to the path through Rimnicu Vilcea.
RBFS Summary
Optimal with an admissible heuristic function h(n).
Space complexity O(bd), where b is the branching factor and d the depth.
Time complexity is difficult to analyze; it depends on the accuracy of h(n) and on how often the best path changes.
Slightly more efficient than IDA*.
Both IDA* and RBFS suffer from using too little memory and may explore the same state multiple times.