Understanding Simulated Annealing Algorithm: A Stochastic Local Search Approach


Simulated annealing is a powerful optimization technique that helps prevent iterative improvement from getting stuck in local minima. By accepting uphill moves, changing neighborhood structures, or strategically modifying the objective function, such methods can explore a broader region of the solution space. Simulated annealing itself, which originates in statistical mechanics, accepts moves to worse solutions with a probability that decreases over time, helping the search escape local optima.





Presentation Transcript


  1. Simulated Annealing Algorithm (Advanced Stochastic Local Search)

  2. Problem: Iterative improvement stops at local minima, which can be very poor. Strategies are required to prevent the search from getting trapped in local minima and to escape from them.

  3. Three basic ideas:
     1) Accept uphill moves, i.e., let the search move toward a solution with a worse objective function value. Intuition: climb over the hills and descend in another direction.
     2) Change the neighborhood structure during the search. Intuition: different neighborhoods generate different search space topologies.
     3) Change the objective function so as to fill in local minima. Intuition: modify the search space so that unexplored areas become more desirable.

  4. Simulated Annealing: Simulated annealing exploits the first idea: it also accepts uphill moves. Its origins lie in statistical mechanics (the Metropolis algorithm). It allows moves resulting in solutions of worse quality than the current solution, but the probability of making such a move is decreased during the search.

  5. Simulated Annealing can be interpreted as a modified random descent in the space of solutions: choose a random neighbor; improving moves are always accepted; deteriorating moves are accepted with a probability that depends on the amount of deterioration and on the temperature (a parameter that decreases over time). This allows the search to escape local optima.

  6. Move Acceptance in SA: We assume a minimization problem. Set Δ = Obj(random neighbor) - Obj(current solution). If Δ < 0, accept (we have an improving move). Otherwise, accept if e^(-Δ/t) > Random(0, 1), where t is the current temperature. If the move is not accepted, try another random neighbor.
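A minimal sketch of this acceptance rule in Python (the function name and signature are illustrative, not from the slides):

```python
import math
import random

def accept_move(obj_current, obj_neighbor, t):
    """Metropolis acceptance rule for a minimization problem."""
    delta = obj_neighbor - obj_current
    if delta < 0:  # improving move: always accept
        return True
    # worsening move: accept with probability e^(-delta/t)
    return math.exp(-delta / t) > random.random()
```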

  7. Convergence of Simulated Annealing: [Figure: cost function C versus number of iterations. At INIT_TEMP acceptance is essentially unconditional; as the search proceeds, a worsening move is accepted with probability e^(-ΔC/temp), interleaved with phases of hill climbing; by FINAL_TEMP the search behaves like pure hill climbing.]

  8. Ball-on-terrain example, SA versus greedy algorithms: [Figure: a ball rolling over a hilly terrain from its initial position. The greedy algorithm (hill climbing) gets stuck at a locally optimal solution, while simulated annealing explores more: it chooses an uphill move with a small probability and, over a large number of iterations, converges to the better solution.]

  9. To accept or not to accept? A worsening move is accepted if P = exp(-c/t) > r, where c is the change in the evaluation function, t is the current temperature, and r is a random number between 0 and 1. An example follows.

  10. To accept or not to accept? Example values of exp(-C/T) for two temperatures:

     Change C | T = 0.95    | T = 0.1
     0.2      | 0.810157735 | 0.135335283
     0.4      | 0.656355555 | 0.018315639
     0.6      | 0.53175153  | 0.002478752
     0.8      | 0.430802615 | 0.000335463

  11. Probability of accepting a worsening move at high versus low temperature: the acceptance probability exp(-C/T) as a function of the change in the evaluation function C, for system temperatures T = 100 and T = 10:

     C   | exp(-C/100) | exp(-C/10)
     10  | 0.904837418 | 0.367879441
     20  | 0.818730753 | 0.135335283
     30  | 0.740818221 | 0.049787068
     40  | 0.670320046 | 0.018315639
     50  | 0.60653066  | 0.006737947
     60  | 0.548811636 | 0.002478752
     70  | 0.496585304 | 0.000911882
     80  | 0.449328964 | 0.000335463
     90  | 0.40656966  | 0.00012341
     100 | 0.367879441 | 4.53999E-05
     110 | 0.332871084 | 1.67017E-05
     120 | 0.301194212 | 6.14421E-06
     130 | 0.272531793 | 2.26033E-06
     140 | 0.246596964 | 8.31529E-07
     150 | 0.22313016  | 3.05902E-07
     160 | 0.201896518 | 1.12535E-07
     170 | 0.182683524 | 4.13994E-08
     180 | 0.165298888 | 1.523E-08
     190 | 0.149568619 | 5.6028E-09
     200 | 0.135335283 | 2.06115E-09

     [Figure: "Simulated Annealing Acceptance Probability", plotting probability of acceptance (0 to 1) against C for Temp = 100 and Temp = 10.]
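A short Python sketch (illustrative, not from the slides) that reproduces the probabilities in the table above:

```python
import math

def acceptance_probability(delta, temp):
    """Probability exp(-delta/temp) of accepting a move that worsens
    the evaluation function by delta > 0."""
    return math.exp(-delta / temp)

# The hotter the system, the more tolerant it is of large deteriorations.
for delta in range(10, 201, 10):
    print(delta, acceptance_probability(delta, 100), acceptance_probability(delta, 10))
```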

  12. SA Structure:
     - Start with a high initial temperature t0 (close to a random walk).
     - Reduce t regularly; this requires a cooling schedule:
       - too fast: the search stops in some local optimum too early;
       - too slow: convergence is too slow.
     - Restarts are possible.
     - The choice of neighborhood structure is important.

  13. SA: There is a statistical guarantee that SA finds the global optimum, but in practice this requires exponential (or infinite) running time. The cooling schedule is therefore vitally important, and much research has gone into it. Static schedules are specified in advance; adaptive schedules react to information from the search.

  14. Simulated Annealing, step by step (a code sketch follows):
     Step 1 (Initialize): Start with a random initial placement and a very high temperature.
     Step 2 (Move): Perturb the placement through a defined move.
     Step 3 (Calculate score): Calculate the change in the score due to the move made.
     Step 4 (Choose): Depending on the change in score, accept or reject the move; the probability of acceptance depends on the current temperature.
     Step 5 (Update and repeat): Lower the temperature and go back to Step 2. The process continues until the freezing point is reached.
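A minimal, self-contained Python sketch of this loop. The problem-specific pieces (random_solution, random_neighbor, and obj) are hypothetical placeholders, and the parameter values are illustrative:

```python
import math
import random

def simulated_annealing(random_solution, random_neighbor, obj,
                        t0=100.0, alpha=0.95, t_freeze=1e-3,
                        moves_per_temp=100):
    """Generic SA loop for a minimization problem."""
    current = random_solution()                  # Step 1: initialize
    best = current
    t = t0
    while t > t_freeze:                          # until the freezing point
        for _ in range(moves_per_temp):
            candidate = random_neighbor(current)           # Step 2: move
            delta = obj(candidate) - obj(current)          # Step 3: score change
            # Step 4: always accept improving moves; accept worsening
            # moves with probability exp(-delta/t)
            if delta < 0 or math.exp(-delta / t) > random.random():
                current = candidate
                if obj(current) < obj(best):
                    best = current
        t *= alpha                               # Step 5: cool and repeat
    return best
```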


  16. Choice of Move in SA (modified random descent): Select a random solution in the neighborhood. Accept it unconditionally if it is better than the current solution, and with a certain finite probability if it is worse. That probability is controlled by a parameter called the temperature. This allows the search to escape from local optima.

  17. SA Cooling: The acceptance probability e^(-Δ/t) interpolates between two extremes: as t grows large, nearly every move is accepted and the search behaves like a random walk; at t = 0, only improving moves are accepted and the search reduces to random descent.
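A tiny numerical illustration of these two limits (the function name and values are illustrative):

```python
import math

def p_accept(delta, t):
    """Probability of accepting a move that worsens the objective by delta."""
    return math.exp(-delta / t)

print(p_accept(1.0, 1e9))   # ~1.0: very hot, behaves like a random walk
print(p_accept(1.0, 1.0))   # ~0.37: intermediate temperature
print(p_accept(1.0, 1e-9))  # ~0.0: very cold, behaves like random descent
```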

  18. SA Overall Structure: Set the initial value of the control variable t to a high value t0. Do a certain number of iterations at the same temperature, then reduce it: t_{i+1} = α(t_i). This requires a cooling schedule, plus a stopping criterion, e.g. a minimum temperature. Repetition is possible. Solution quality and speed depend on the choices made, and the choice of neighborhood structure is important.

  19. Statistical Analysis of SA: Model the search as state transitions in the search space with transition probabilities [p_ij] (where i and j are solutions). If these depend only on i and j, the process is a homogeneous Markov chain. If all transition probabilities are finite (nonzero), the SA search converges towards a stationary distribution that is independent of the starting solution. As the temperature approaches zero, this distribution approaches a uniform distribution over the global optima. This gives a statistical guarantee that SA finds a global optimum, but guaranteeing it requires exponential (or infinite) search time.

  20. SA in Practice (1): SA is a heuristic algorithm whose behaviour depends strongly on the cooling schedule. In theory, an exponential number of iterations is needed at each temperature. In practice, one uses either a large number of iterations at each of a few temperatures, or a small number of iterations at each of many temperatures.

  21. SA in Practice (2): Geometric chain: t_{i+1} = α · t_i, i = 0, ..., K, with α < 1 (typically 0.8 to 0.99). The number of repetitions at each temperature can be varied. Adaptivity: use a variable number of moves before each temperature reduction. It is necessary to experiment.
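A sketch of such a geometric cooling schedule (the parameter values are illustrative):

```python
def geometric_schedule(t0=100.0, alpha=0.95, k_max=100):
    """Yield the geometric chain t_{i+1} = alpha * t_i for i = 0..k_max."""
    t = t0
    for _ in range(k_max + 1):
        yield t
        t *= alpha

# Example: run a fixed number of moves at each temperature in the chain.
for t in geometric_schedule(t0=10.0, alpha=0.9, k_max=5):
    print(round(t, 4))  # 10.0, 9.0, 8.1, 7.29, 6.561, 5.9049
```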

  22. SA General Decisions: The cooling schedule, based on the maximum difference in objective function value between solutions in a neighborhood. The number of repetitions at each temperature and the reduction rate α. An adaptive number of repetitions: more repetitions at lower temperatures, or counting accepted moves up to a maximum limit (a sketch follows below). Very low temperatures are not necessary; the cooling rate is the most important choice.
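One possible adaptive rule, sketched under illustrative thresholds: stay at the current temperature until a target number of moves has been accepted, but never try more than a fixed maximum number of moves:

```python
import math
import random

def run_temperature_level(current, t, random_neighbor, obj,
                          target_accepted=10, max_attempts=200):
    """Try moves at temperature t until target_accepted moves have been
    accepted, or max_attempts moves have been tried."""
    accepted = 0
    for _ in range(max_attempts):
        candidate = random_neighbor(current)
        delta = obj(candidate) - obj(current)
        if delta < 0 or math.exp(-delta / t) > random.random():
            current = candidate
            accepted += 1
            if accepted >= target_accepted:
                break
    return current
```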

  23. SA Problem-Specific Decisions: Important goals: response time and quality of the solution. Important choices: the search space (should infeasible solutions be included?); the neighborhood structure; the move evaluation function (use of penalties for violated constraints, approximation if it is expensive to evaluate); and the cooling schedule.

  24. SA Choice of Neighborhood: Size, and variation in size. Topology: symmetry; connectivity (every solution can be reached from all the others); topography (spikes, plateaus, deep local optima). Move evaluation function: how expensive is it to calculate?

  25. SA Speed: Choose neighbors at random and reduce the neighborhood rather than searching through all the neighbors. Cost of a new candidate solution: compute the difference without a full evaluation, or approximate it (using surrogate functions). The move acceptance criterion can also be simplified.

  26. SA Example: TSP. Search space size: (n-1)!/2. Neighborhood: 2-opt, of size n(n-1)/2. It is connected, moves have a simple representation, and there is a natural cost function; the difference in cost between solutions is easy to calculate (see the sketch below). Generalization: k-opt.
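A sketch of a 2-opt move with incremental (delta) cost evaluation for the symmetric TSP. Here dist is an assumed distance matrix and tour is a list of city indices; both names are illustrative:

```python
def two_opt_delta(tour, dist, i, j):
    """Cost change from reversing tour[i+1..j]: only the two removed
    edges (a,b), (c,d) and the two new edges (a,c), (b,d) matter."""
    n = len(tour)
    a, b = tour[i], tour[(i + 1) % n]
    c, d = tour[j], tour[(j + 1) % n]
    return dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]

def apply_two_opt(tour, i, j):
    """Realize the 2-opt move by reversing the segment tour[i+1..j]."""
    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
```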

  27. SA Fine-Tuning: Use test problems and a test bench, and visualize solutions. Tune the values of costs and penalties, the temperature, the number and proportion of accepted moves, and the number of iterations or CPU time. There are dependencies between the SA parameters, and there is a danger of overfitting to the test problems.

  28. SA Summary: Inspired by statistical mechanics (cooling). A metaheuristic built on local search and random descent, it uses randomness to escape local optima. It is a simple and robust method that is easy to get started with. There is a proof of convergence to the global optimum, but that guarantee is worse than complete search. In practice SA is computationally expensive, but fine-tuning can give good results. SA can be good where robust heuristics based on problem structure are difficult to construct.

  29. Variants of Simulated Annealing

  30. Conclusions: Simulated annealing algorithms are usually better than greedy algorithms on problems that have numerous locally optimal solutions. Simulated annealing is not the best solution for circuit partitioning or placement; a network flow approach solves these problems much faster. Simulated annealing converges (in probability) to a global optimum when run for a sufficiently large number of iterations.
