Understanding P, NP, NP-Hard, NP-Complete Problems and Amortized Analysis

This study covers P, NP, NP-Hard, and NP-Complete problems and amortized analysis, including examples and concepts such as reduction, Vertex Cover, Max-Clique, 3-SAT, and Hamiltonian Cycle. It contrasts polynomial with non-polynomial problems, outlining which problems are hard and which are provably unsolvable, and classifies problems solvable in polynomial time as P problems. The Königsberg Bridge Problem is discussed, along with NP problems, for which no polynomial-time algorithm is known but whose solutions can be verified in polynomial time.



Presentation Transcript


  1. P, NP, NP-Hard, NP-Complete Problems and Amortized Analysis (Unit IV & V) Dr. S V Gumaste, Professor & Head, Department of IT, MET's Institute of Engineering, Bhujbal Knowledge City, Adgaon, Nashik-03

  2. Covers: P, NP, NP-Hard, NP-Complete problems with examples; Reduction; Vertex Cover problem; Max-Clique problem; 3-SAT problem; Hamiltonian Cycle; Amortized Analysis; Binary, Binomial, and Fibonacci Heaps; Splay Tree and operations on a splay tree; Randomized and Approximation Algorithms; Embedded algorithms and sorting algorithms for embedded systems; Time-space trade-off.

  3. General Idea: Polynomial: A polynomial is an expression that can be built from constants (like 3 or 20) and symbols called indeterminates or variables (like x and y) by means of addition, multiplication, and exponentiation to a non-negative integer power (like the 2 in y^2). Eg: -3x^2 + 4x + 11; 7x^3 - 4x^5y. Non-Polynomial: The set (or property) of problems for which no polynomial-time algorithm is known (solving them essentially depends on guessing). This includes problems for which the only known algorithms require a number of steps that increases exponentially with the size of the problem, and those for which no algorithm at all is known. Within these there are problems that are "provably difficult" and problems that are "provably unsolvable". Non-polynomial expressions are those involving division by a variable, fractional exponents, radicals, or a variable in the exponent, e.g. 5x^3 + 3x - 1/(3x^2); (x+1)/x; x^(2/3); 5^x.

  4. Comparison: Problems that yield a solution (i.e., for which an algorithm gives the answer) in polynomial time are classified as P problems (tractable/deterministic problems). These are problems which can be solved in polynomial time, taking time like O(n), O(n^2), O(n^3), O(log n), or O(n log n). Eg: finding the maximum element in an array, or checking whether a string is a palindrome (sketched below). There are many problems which can be solved in polynomial time.
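
As a minimal illustration of a P problem (my own C sketch, not taken from the slides), the palindrome check below runs in O(n) time, linear in the length of the string:

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if s reads the same forwards and backwards, 0 otherwise.
   Runs in O(n) time, so the problem is in P. */
int is_palindrome(const char *s) {
    size_t i = 0, j = strlen(s);
    if (j == 0) return 1;
    j--;
    while (i < j) {
        if (s[i] != s[j]) return 0;
        i++;
        j--;
    }
    return 1;
}

int main(void) {
    printf("%d\n", is_palindrome("level"));   /* 1 */
    printf("%d\n", is_palindrome("levels"));  /* 0 */
    return 0;
}
```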

  5. Königsberg Bridge Problem: Pass over all 7 bridges exactly once! Solution (Euler's analysis): Convert the problem into graph theory — land masses are denoted by vertices and bridges by edges. For a graph to be traversable, either all vertices must be of EVEN degree, or exactly two vertices are of ODD degree and the rest EVEN. As all vertices of this graph have ODD degree, no such walk exists (proved), and this proof is itself a solution to the problem! Every problem in the world has a solution too — here, the solution is the proof of impossibility.

  6. NP Problems: Problems for which the known algorithms take exponential time to find a solution, but whose solutions can be verified in polynomial time, are classified as NP problems (non-deterministic/non-tractable problems). NP stands for Non-deterministic Polynomial time. These are problems that are not known to be solvable in polynomial time but are checkable in polynomial time: given a claimed solution, we can check whether it is correct in polynomial time. Eg: Sudoku, TSP, graph colouring, Hamiltonian cycle, etc. NP is the set of decision problems that can be solved by a non-deterministic Turing machine in polynomial time. P is a subset of NP. Informally, NP is the set of decision problems which can be solved in polynomial time by a "lucky algorithm" or "guess algorithm" — a magical algorithm that always makes the right guess among the given set of choices.

  7. Reduction: Let L1 and L2 be two decision problems, and suppose algorithm A2 solves L2; that is, if y is an input for L2 then A2 answers Yes or No depending on whether y belongs to L2 or not. The idea is to find a transformation from L1 to L2 so that algorithm A2 can be used as part of an algorithm A1 that solves L1. If we can convert (reduce/transform) an instance of problem A into an instance of problem B, then the algorithm for problem B can be used as part of an algorithm for problem A; we say A is reducible to B (Problem A → Problem B).

  8. NP-Complete: NP-complete problems are the hardest problems in the NP set. A decision problem L is NP-complete if: 1) L is in NP (any given solution for an NP-complete problem can be verified quickly, but no efficient algorithm for finding a solution is known), and 2) every problem in NP is reducible to L in polynomial time. (P ⊆ NP.)

  9. Vertex Cover Problem: A vertex cover (sometimes node cover) of a graph is a set of vertices such that each edge of the graph is incident to at least one vertex of the set. Formally, a vertex cover of an undirected graph G(V,E) is a subset V' ⊆ V such that if (u,v) ∈ E then u ∈ V' or v ∈ V' (or both). The problem: find a minimum set of vertices that covers all edges of G(V,E). Solution: in the given graph (vertices A, B, C, D, E), the set {B, D} covers all edges {(B,A), (B,D), (B,C), (D,B), (D,C), (D,E)}. A proposed solution can be verified in polynomial time, so the problem is in NP.
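
To show why verification is polynomial, here is a small C sketch of my own (the vertex numbering 0–4 for A–E and the function names are assumptions, not from the slides): scan every edge and check that at least one endpoint is in the candidate cover.

```c
#include <stdio.h>

/* Edge list of the example graph (0=A, 1=B, 2=C, 3=D, 4=E). */
struct edge { int u, v; };

/* Returns 1 if every edge has at least one endpoint marked in cover[],
   0 otherwise.  Runs in O(|E|) time, so verification is polynomial. */
int is_vertex_cover(const struct edge *edges, int m, const int *cover) {
    for (int i = 0; i < m; i++) {
        if (!cover[edges[i].u] && !cover[edges[i].v])
            return 0;   /* this edge is not covered */
    }
    return 1;
}

int main(void) {
    struct edge edges[] = { {1,0}, {1,3}, {1,2}, {3,2}, {3,4} }; /* B-A, B-D, B-C, D-C, D-E */
    int cover[5] = { 0, 1, 0, 1, 0 };   /* candidate cover {B, D} */
    printf("%s\n", is_vertex_cover(edges, 5, cover) ? "valid cover" : "not a cover");
    return 0;
}
```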

  10. Prove that the Vertex Cover problem is NP-Complete. Requirements: (1) Vertex Cover is an NP problem; (2) a known NP-complete problem can be reduced to Vertex Cover in polynomial time. Eg: the Vertex Cover problem and the Clique problem are inter-reducible (a graph has a clique of size k iff its complement has a vertex cover of size n-k). Steps: We have already shown that Vertex Cover is in NP (1st requirement met). Since the NP-complete Clique problem reduces to Vertex Cover (2nd requirement), Vertex Cover is NP-complete.

  11. Complete Graph: If every pair of vertices of a graph G(V,E) is connected by an edge, it is called a complete graph. Property of a complete graph: if there are n vertices, E = n(n-1)/2. For 2 vertices, E = (2*1)/2 = 1; for 3 vertices, E = (3*2)/2 = 3; for 4 vertices, E = (4*3)/2 = 6.

  12. Clique Problem: A clique is a complete subgraph of a graph; finding a maximum clique takes exponential time (on the order of T = (1.188)^n). We need the maximum clique to reason about the Vertex Cover problem. [Figure: example graphs showing cliques of size 2, 3, and 4.] Reduction chain: 3-SAT Problem ↔ Clique / Max-Clique Problem ↔ Vertex Cover Problem — all three are NP-complete and inter-reducible, so solving 3-SAT gives a solution for Clique and hence for Vertex Cover too.

  13. 3-SAT Problem: The 3-SAT problem and the Clique problem are inter-reducible; the standard construction reduces a CNF formula to a Clique instance. A Conjunctive Normal Form (CNF) formula is a conjunction of clauses built from variables using the operations ∨, ∧ and ~ (negation). Eg: f = (x1 ∨ x2) ∧ (~x1 ∨ x2) ∧ (x1 ∨ x3). Rule to draw the graph from the CNF formula: create a vertex <a, Clause_i> for every literal a of every clause, and E = { (<a, Clause_i>, <b, Clause_j>) : i ≠ j and a ≠ ~b }. Here the vertices are <x1,1>, <x2,1>, <~x1,2>, <x2,2>, <x1,3>, <x3,3>. Observing the graph so drawn: there are cliques of size 2 (every edge of the graph) and cliques of size 3 (the red and blue coloured triangles in the slide figure); the maximum clique has size 3. Now set the literals of the RED triangle to 1: ~x1 = 1 so x1 = 0, and x2 = x3 = 1. Then f = (0 ∨ 1) ∧ (1 ∨ 1) ∧ (0 ∨ 1) = 1 [TRUE]. 3-SAT, Clique and Vertex Cover are all NP-complete; hence if the Clique problem can be solved, the Vertex Cover problem can also be solved.
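
As a small C sketch of my own (the encoding of clauses as arrays is an assumption, not the slide's notation), the check below verifies in polynomial time that the assignment obtained from the maximum clique satisfies the example formula f:

```c
#include <stdio.h>

/* A literal: variable index 1..3 plus a negation flag; each clause here has 2 literals. */
struct literal { int var; int neg; };

/* Example formula f = (x1 v x2) ^ (~x1 v x2) ^ (x1 v x3). */
static const struct literal clauses[3][2] = {
    { {1,0}, {2,0} },
    { {1,1}, {2,0} },
    { {1,0}, {3,0} }
};

/* Returns 1 if the assignment x[1..3] (values 0/1) satisfies every clause.
   Runs in time proportional to the number of literals, i.e. polynomial. */
int satisfies(const int *x) {
    for (int c = 0; c < 3; c++) {
        int clause_true = 0;
        for (int l = 0; l < 2; l++) {
            int val = x[clauses[c][l].var];
            if (clauses[c][l].neg) val = !val;
            if (val) { clause_true = 1; break; }
        }
        if (!clause_true) return 0;
    }
    return 1;
}

int main(void) {
    int x[4] = { 0, 0, 1, 1 };         /* x1=0, x2=1, x3=1: the assignment from the max clique */
    printf("f = %d\n", satisfies(x));  /* prints f = 1 */
    return 0;
}
```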

  14. Another Example: A 3-variable circuit problem P1 can be reduced to a 3-SAT problem P2. [Figure: the circuit P1 and the corresponding 3-SAT formula P2.] Correctness of the reduction: the reduction can clearly be done in polynomial time, and C (the output of P1) is satisfiable iff X (the output of P2) is satisfiable, so a satisfying assignment for one yields a satisfying assignment for the other.

  15. Hamiltonian Cycle: A Hamiltonian cycle is a cycle that returns to the starting vertex after visiting every vertex exactly once. [Figure: state-space tree of the search starting from vertex A, with the dead ends marked.] Finding a Hamiltonian cycle takes exponential time (on the order of 2^n). One Hamiltonian cycle in the example graph: A → B → E → F → D → A.
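
Although finding a Hamiltonian cycle is expensive, verifying a proposed cycle is cheap, which is why the problem is in NP. Below is a C sketch of my own; the 6-vertex graph built in main() is hypothetical, not the slide's figure.

```c
#include <stdio.h>
#include <string.h>

#define N 6   /* vertices 0..5 standing for A..F (an assumed labelling) */

/* Checks in O(n) time that tour[] (n vertices, returning to tour[0]) visits
   every vertex exactly once and uses only edges of the graph. */
int is_hamiltonian_cycle(int adj[N][N], const int *tour, int n) {
    int seen[N];
    memset(seen, 0, sizeof seen);
    for (int i = 0; i < n; i++) {
        int u = tour[i], v = tour[(i + 1) % n];
        if (seen[u]) return 0;        /* vertex repeated */
        seen[u] = 1;
        if (!adj[u][v]) return 0;     /* missing edge */
    }
    return 1;                         /* n distinct vertices, all edges present */
}

int main(void) {
    int adj[N][N] = {0};
    int ring[N] = {0, 1, 4, 5, 3, 2};     /* hypothetical cycle A-B-E-F-D-C-A */
    for (int i = 0; i < N; i++) {         /* build a graph containing that ring */
        adj[ring[i]][ring[(i + 1) % N]] = 1;
        adj[ring[(i + 1) % N]][ring[i]] = 1;
    }
    printf("%d\n", is_hamiltonian_cycle(adj, ring, N));  /* 1 */
    return 0;
}
```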

  16. Amortized Analysis: When one operation affects the running time of subsequent operations, we use amortized analysis. Eg: answering a series of "k-th smallest" queries on an array of n elements (k = 5, 23, 57, ...). We need to sort the array — say with quicksort, worst case O(n^2) — only the first time; afterwards each query directly picks the k-th element (1 unit of time). For m queries, total time = O(n^2) + m*1, so the average (amortized) time per query = [O(n^2) + m*1]/m. The most common techniques used in amortized analysis are: Aggregate analysis, the Accounting method, and the Potential method.
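
A small C sketch of that example (my own illustration, using the standard library qsort rather than a hand-written quicksort): the sort is paid for once, and each of the m queries afterwards is a single array access, so the sort's cost is amortized over the queries.

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Answer a series of k-th smallest queries: one sort up front, then each
   query is O(1), so the sorting cost is spread across all m queries. */
int main(void) {
    int a[] = { 42, 7, 99, 3, 58, 23, 16, 71, 5, 64 };
    int n = 10;
    int queries[] = { 1, 3, 5 };          /* k values (1-based) */

    qsort(a, n, sizeof a[0], cmp_int);    /* paid only the first time */

    for (int i = 0; i < 3; i++)
        printf("%d-th smallest = %d\n", queries[i], a[queries[i] - 1]); /* O(1) each */
    return 0;
}
```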

  17. Aggregate Analysis: The average cost per operation is computed from the total cost of a whole sequence of operations. Eg: a binary counter incremented n times. Bit B[0] flips n times, bit B[1] flips n/2 times, bit B[2] flips n/4 times, bit B[3] flips n/8 times, ... Total number of flips = n + n/2 + n/4 + n/8 + ... = n · Σ_{i≥0} 1/2^i ≤ 2n = O(n). Amortized cost of each increment = O(n)/n = O(1). Counter trace (assuming 1 unit of time per bit flip, bits B3 B2 B1 B0 with cumulative cost in brackets): 0 = 0000 (0); 1 = 0001 (1); 2 = 0010 (3); 3 = 0011 (4); 4 = 0100 (7); 5 = 0101 (8); 6 = 0110 (10); 7 = 0111 (11); 8 = 1000 (15).
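
A C sketch of my own that reproduces the trace above: each increment flips the trailing 1s to 0 and then sets one bit, and the cumulative flip count stays below 2n, matching the O(1) amortized cost.

```c
#include <stdio.h>

#define BITS 4

/* Increment a BITS-bit binary counter and return how many bits were flipped:
   all trailing 1s become 0, then one 0 becomes 1 (unless the counter overflows). */
int increment(int b[BITS]) {
    int flips = 0, i = 0;
    while (i < BITS && b[i] == 1) { b[i] = 0; flips++; i++; }
    if (i < BITS) { b[i] = 1; flips++; }
    return flips;
}

int main(void) {
    int b[BITS] = {0, 0, 0, 0};
    int total = 0, n = 8;
    for (int i = 1; i <= n; i++) {
        total += increment(b);
        printf("after %d increments: cumulative flips = %d\n", i, total);
    }
    /* total never exceeds 2n, so the amortized cost per increment is O(1) */
    return 0;
}
```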

  18. Accounting Method: The accounting method assigns a different (artificial) cost to each operation, called its amortized cost. If the assigned amortized cost is more than the actual cost, the difference is accumulated as credit; this credit is used later for operations whose assigned cost is less than their actual cost. The credit is always non-negative provided Σ_{i=1..n} ĉ_i ≥ Σ_{i=1..n} c_i, where ĉ_i is the amortized (credited) cost and c_i is the actual cost of the i-th operation. Eg: in our binary counter, assign an amortized cost of 2 to a 0→1 flip and 0 to a 1→0 flip; the extra credit stored when a bit is set pays for clearing it later. Since the number of 1s in the counter is never negative, the credit is always non-negative. Amortized cost of each operation = O(n)/n = O(1).

  19. Potential Method: The potential method works in the same way as the accounting method, but stores "potential" (energy) in the data structure instead of credit on individual operations. Eg: in our binary counter, define the potential function Φ_i = number of 1s in the counter after the i-th operation. Amortized cost: ĉ_i = c_i + Φ_i − Φ_{i−1}. If the i-th increment resets t_i bits from 1 to 0, its actual cost is at most t_i + 1 (the worst case being a run of trailing 1s, as the counter climbs from 0000 towards 1111). Since the counter starts with all bits 0, Φ_0 = 0 and Φ_i ≥ 0 for all i, so the total cost of n increments is k·n = O(n) for a constant k, i.e. O(1) amortized per operation.
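
For completeness, a short worked bound (the standard textbook argument, not verbatim from the slide), where t_i is the number of 1→0 flips in the i-th increment and the potential can rise by at most 1 per increment:

    ĉ_i = c_i + Φ_i − Φ_{i−1}
        ≤ (t_i + 1) + (Φ_{i−1} − t_i + 1) − Φ_{i−1}
        = 2

so Σ_{i=1..n} c_i ≤ Σ_{i=1..n} ĉ_i ≤ 2n = O(n), i.e. the amortized cost per increment is O(1).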

  20. Binary Heap, Binomial Heap, Fibonacci Heap, and Dijkstra's shortest path algorithm.

  21. Splay Tree: A splay tree is another variant of the binary search tree. It is a self-adjusting binary search tree in which every operation on an element rearranges the tree so that the element is placed at the root position of the tree. In a splay tree, every operation is therefore performed at the root. All operations in a splay tree involve a common operation called "splaying": splaying an element is the process of bringing it to the root position by performing suitable rotation operations. The three rotation cases are: the Zig-zag situation, the Zig-zig situation, and the Zig situation.

  22. Zig-zag situation: The given node X [splay(X)] is the right child of a left child or the left child of a right child.

  23. Zig-zig situation: The given node X [splay(X)] is the left child of a left child or the right child of a right child.

  24. Zig situation: X [splay(X)] is just a child of the root. We repeat the previous steps (Zig-zag or Zig-zig) over and over until X reaches the root; if we end up with X as the left/right child of the root, we finish with a single right/left rotation accordingly.

  25. Insertion/Deletion operations in a splay tree: The insertion operation is performed using the following steps. Step 1 - check whether the tree is empty. Step 2 - if the tree is empty, insert new_Node as the root node and exit from the operation. Step 3 - if the tree is not empty, insert new_Node as a leaf node using the binary search tree insertion logic. Step 4 - after insertion, splay new_Node to the root. The deletion operation in a splay tree is similar to deletion in a binary search tree, but before deleting the element we first splay it to the root, delete it there, and finally join the remaining subtrees using binary search tree logic. A splaying sketch is given below.
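
Below is a compact C sketch of insert-then-splay — one common recursive way to implement splaying (zig, zig-zig and zig-zag cases), not the slide's own code; the function and field names are my own.

```c
#include <stdio.h>
#include <stdlib.h>

struct node { int key; struct node *left, *right; };

static struct node *new_node(int key) {
    struct node *n = malloc(sizeof *n);           /* sketch: allocation not checked */
    n->key = key; n->left = n->right = NULL;
    return n;
}

static struct node *rotate_right(struct node *x) {    /* zig  */
    struct node *y = x->left;
    x->left = y->right; y->right = x;
    return y;
}

static struct node *rotate_left(struct node *x) {     /* zag  */
    struct node *y = x->right;
    x->right = y->left; y->left = x;
    return y;
}

/* Bring the node with 'key' (assumed present) to the root of the subtree. */
static struct node *splay(struct node *root, int key) {
    if (root == NULL || root->key == key) return root;
    if (key < root->key) {
        if (root->left == NULL) return root;
        if (key < root->left->key) {                       /* zig-zig   */
            root->left->left = splay(root->left->left, key);
            root = rotate_right(root);
        } else if (key > root->left->key) {                /* zig-zag   */
            root->left->right = splay(root->left->right, key);
            if (root->left->right) root->left = rotate_left(root->left);
        }
        return root->left ? rotate_right(root) : root;     /* final zig */
    } else {
        if (root->right == NULL) return root;
        if (key > root->right->key) {                      /* zig-zig   */
            root->right->right = splay(root->right->right, key);
            root = rotate_left(root);
        } else if (key < root->right->key) {               /* zig-zag   */
            root->right->left = splay(root->right->left, key);
            if (root->right->left) root->right = rotate_right(root->right);
        }
        return root->right ? rotate_left(root) : root;     /* final zag */
    }
}

/* Plain BST insert, then splay the inserted key to the root. */
static struct node *bst_insert(struct node *root, int key) {
    if (root == NULL) return new_node(key);
    if (key < root->key) root->left = bst_insert(root->left, key);
    else                 root->right = bst_insert(root->right, key);
    return root;
}

struct node *splay_insert(struct node *root, int key) {
    return splay(bst_insert(root, key), key);
}

int main(void) {
    struct node *root = NULL;
    int keys[] = { 50, 30, 70, 20, 40 };
    for (int i = 0; i < 5; i++) {
        root = splay_insert(root, keys[i]);
        printf("root after inserting %d: %d\n", keys[i], root->key);
    }
    return 0;
}
```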

  26. Randomized Algorithms: Deterministic algorithms execute a specified sequence of instructions on a given input and always produce the same correct output. A randomized algorithm, in addition to its input, takes a source of random numbers and makes random choices during execution, so its behaviour can vary across executions even for a fixed input. Formally, we think of a randomized algorithm as a machine M that computes M(x, r), where x is the problem input and r is a sequence of random bits. Our machine model is the usual Random-Access Machine (RAM) model. [Figure: a deterministic algorithm maps Input → Output; a randomized algorithm maps Input + Random Numbers → Output.]

  27. Example: A Monte Carlo algorithm is a randomized algorithm whose output may be incorrect with a certain, typically small, probability. Two examples of such algorithms are the Karger–Stein algorithm and the Monte Carlo algorithm for minimum feedback arc set. A Las Vegas algorithm is a randomized algorithm that always gives correct results: it either produces the correct result or it reports failure, but its running time differs depending on the random choices. It is easy and convenient to compute a random point in a circle with a Las Vegas algorithm: first generate a point (x, y) with -1 < x < 1 and -1 < y < 1; if this point happens to fall within the unit circle we keep it, otherwise we throw it away and try again. [Figure: the Monte Carlo method applied to approximating the value of π — after placing 30,000 random points, the estimate for π is within 0.07% of the actual value.] Both ideas are sketched below.
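
A C sketch of my own illustrating both flavours (using the standard library rand(); a real implementation would use a better random source): the Las Vegas routine is always correct but retries a random number of times, while the Monte Carlo estimate of π is only approximately correct.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Uniform random double in (-1, 1). */
static double rand_unit(void) {
    return 2.0 * rand() / RAND_MAX - 1.0;
}

/* Las Vegas: keep drawing points until one lands inside the unit circle.
   The answer is always correct; only the number of retries is random. */
static void random_point_in_circle(double *x, double *y) {
    do {
        *x = rand_unit();
        *y = rand_unit();
    } while (*x * *x + *y * *y > 1.0);
}

/* Monte Carlo: estimate pi from the fraction of random points that fall
   inside the circle; the result is correct only approximately. */
static double estimate_pi(int samples) {
    int inside = 0;
    for (int i = 0; i < samples; i++) {
        double x = rand_unit(), y = rand_unit();
        if (x * x + y * y <= 1.0) inside++;
    }
    return 4.0 * inside / samples;
}

int main(void) {
    double x, y;
    srand((unsigned)time(NULL));
    random_point_in_circle(&x, &y);
    printf("point in circle: (%f, %f)\n", x, y);
    printf("pi estimate with 30000 points: %f\n", estimate_pi(30000));
    return 0;
}
```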

  28. Approximate Algorithms: For NP-Hard problems, where an exact solution may not be obtainable in a feasible amount of computing time, one can get an approximate solution (close to the exact/optimal solution) using approximate algorithms. A feasible solution very close to the optimal solution is called an approximate solution, and the algorithm that generates such a solution is called an approximation algorithm. Given an optimization problem P, an algorithm A is said to be an approximation algorithm for P if, for any given instance I, it returns an approximate solution that is at least feasible. Absolute approximation: A is an absolute approximation algorithm if there exists a constant k such that, for every instance I of P, |A(I) − OPT(I)| ≤ k. Example: planar graph colouring. Relative approximation: A is a relative approximation algorithm if there exists a constant k such that, for every instance I of P, max(A(I)/OPT(I), OPT(I)/A(I)) ≤ k. Example: vertex cover (sketched below).
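
For the vertex cover example, the classic maximal-matching heuristic gives a relative approximation with k = 2. The C sketch below is my own illustration (graph encoding and names are assumptions): repeatedly pick an uncovered edge and add both endpoints to the cover.

```c
#include <stdio.h>

/* 2-approximation for vertex cover: for each still-uncovered edge (u, v),
   add BOTH endpoints.  The chosen edges form a matching, and any optimal
   cover contains at least one endpoint of each matched edge, so the
   returned cover has size <= 2 * OPT. */
struct edge { int u, v; };

int approx_vertex_cover(const struct edge *edges, int m, int n, int *in_cover) {
    int size = 0;
    for (int i = 0; i < n; i++) in_cover[i] = 0;
    for (int i = 0; i < m; i++) {
        int u = edges[i].u, v = edges[i].v;
        if (!in_cover[u] && !in_cover[v]) {   /* edge still uncovered */
            in_cover[u] = in_cover[v] = 1;
            size += 2;
        }
    }
    return size;
}

int main(void) {
    /* Same example graph as before: 0=A, 1=B, 2=C, 3=D, 4=E */
    struct edge edges[] = { {1,0}, {1,3}, {1,2}, {3,2}, {3,4} };
    int cover[5];
    int size = approx_vertex_cover(edges, 5, 5, cover);
    printf("cover size = %d (optimal is 2)\n", size);
    return 0;
}
```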

  29. Embedded system Algorithms: An embedded system is a controller programmed and controlled by a real-time operating system (RTOS) with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. Embedded systems control many devices in common use today; ninety-eight percent of all microprocessors manufactured are used in embedded systems. In computing, scheduling is the method by which work is assigned to resources that complete the work. The work may be virtual computation elements such as threads, processes or data flows, which are in turn scheduled onto hardware resources such as processors, network links or expansion cards. A scheduler is what carries out the scheduling activity. Schedulers are often implemented so that they keep all computer resources busy (as in load balancing), allow multiple users to share system resources effectively, or achieve a target quality of service.

  30. Sorting algorithms for embedded systems — characteristics of an optimal sorting algorithm for embedded systems: It must sort in place — in general, dynamic memory allocation should be avoided in embedded systems because of problems with heap fragmentation and allocation performance. The algorithm must not be recursive — recursion adds call overhead and can easily lead to stack overflow, so it should never be used in embedded systems. Running-time variability — its best, average and worst case running times should be of similar magnitude; a function whose execution time varies enormously with the input data can often be problematic. Its code size should be reasonable for the target system. Data-size dependence — its running time should increase linearly or logarithmically with the number of elements to be sorted. Its implementation must be clean, i.e. free of breaks and returns in the middle of a loop. A sort meeting most of these criteria is sketched below.
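
One algorithm that fits most of these criteria is heapsort: in place, easily written without recursion, no dynamic memory, and O(n log n) in the best, average and worst case. The C sketch below is my own illustration, not a recommendation from the slides.

```c
#include <stdio.h>

/* Iterative sift-down: restore the max-heap property below index i. */
static void sift_down(int a[], int n, int i) {
    while (1) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) break;
        int tmp = a[i]; a[i] = a[largest]; a[largest] = tmp;
        i = largest;
    }
}

/* In-place, non-recursive heapsort: no dynamic memory, no recursion,
   and similar running time regardless of the input order. */
void heapsort_embedded(int a[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)      /* build the max heap */
        sift_down(a, n, i);
    for (int i = n - 1; i > 0; i--) {         /* repeatedly extract the maximum */
        int tmp = a[0]; a[0] = a[i]; a[i] = tmp;
        sift_down(a, i, 0);
    }
}

int main(void) {
    int a[] = { 9, 4, 7, 1, 8, 3, 6 };
    heapsort_embedded(a, 7);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```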

  31. Time-space trade-off: Algorithms that use less time generally need more space, and vice versa. Eg: the recursive Fibonacci computation uses minimal explicit storage but performs 2 recursive computations in every call, so it repeats work and takes far more time; the iterative Fibonacci computation uses minimal time because it stores and re-assigns the previous values (f3 = f1 + f2; f1 = f2; f2 = f3) in every step, trading a little extra space for a large saving in time. Both versions are sketched below.
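
A C sketch of my own showing the two versions side by side; the recurrence and the re-assignment pattern come from the slide, the function names are assumptions.

```c
#include <stdio.h>

/* Recursive Fibonacci: almost no explicit storage, but two recursive calls
   per invocation give exponential running time (plus call-stack space). */
long fib_recursive(int n) {
    if (n < 2) return n;
    return fib_recursive(n - 1) + fib_recursive(n - 2);
}

/* Iterative Fibonacci: stores the two previous values and re-assigns them
   (f3 = f1 + f2; f1 = f2; f2 = f3), giving O(n) time with O(1) extra space. */
long fib_iterative(int n) {
    long f1 = 0, f2 = 1, f3;
    if (n < 2) return n;
    for (int i = 2; i <= n; i++) {
        f3 = f1 + f2;
        f1 = f2;
        f2 = f3;
    }
    return f2;
}

int main(void) {
    printf("fib_recursive(30) = %ld\n", fib_recursive(30));
    printf("fib_iterative(30) = %ld\n", fib_iterative(30));
    return 0;
}
```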

  32. Thanks for your silent listening.
