Oracle Turing Machines in Computational Complexity Theory

The lecture delves into the concept of Oracle Turing Machines and their role in proving computational complexity results, such as the limitations of diagonalization in demonstrating P vs. NP. Oracle Turing Machines are defined as Turing Machines with access to a special query tape and states for oracle queries. The physical realization of an Oracle TM involves a subroutine that decides a specific language. Deterministic and nondeterministic Oracle TMs have key features used in diagonalization. These insights shed light on the power and limits of diagonalization in the realm of complexity theory.



Presentation Transcript


  1. Computational Complexity Theory, Lecture 7: Relativization; Space Complexity. Indian Institute of Science.

  2. Power & limits of diagonalization. Like in the proof of P ≠ EXP, can we use diagonalization to show P ≠ NP? The answer is No, if one insists on using only the two features of diagonalization (namely, the existence of an efficient universal TM and the fact that every TM has infinitely many representations).

  3. Oracle Turing Machines. Definition: Let L ⊆ {0,1}* be a language. An oracle TM M^L is a TM with a special query tape and three special states q_query, q_yes and q_no such that whenever the machine enters the q_query state, it immediately transitions to q_yes or q_no depending on whether the string on the query tape belongs to L. (M^L has oracle access to L.)

  4. Oracle Turing Machines. Think of a physical realization of M^L as a device with access to a subroutine that decides L. We don't count the time taken by the subroutine. (Figure: M^L writes a query string on the query tape and receives a yes/no answer from a decider for L.)
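
A minimal sketch of this "device with a subroutine" view (illustrative only; names such as machine_step, query_tape and decides_L are my own assumptions, not the lecture's notation): only the machine's own transitions are counted, while the work done by the subroutine deciding L is free and is charged as a single transition into q_yes or q_no.

    def run_with_oracle(machine_step, initial_config, decides_L, max_steps):
        """machine_step performs one ordinary transition of the base machine;
        decides_L is the oracle subroutine for L (its running time is not counted)."""
        config, steps = initial_config, 0
        while steps < max_steps:
            if config.state in ("q_accept", "q_reject"):
                return config.state == "q_accept"
            if config.state == "q_query":
                # Oracle call: the subroutine's work is free; entering q_yes/q_no
                # counts as a single transition of the oracle TM.
                config.state = "q_yes" if decides_L(config.query_tape) else "q_no"
            else:
                config = machine_step(config)  # an ordinary, counted step
            steps += 1
        return None  # did not halt within max_steps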

  5. Oracle Turing Machines. The transition table of M^L doesn't have any rule of the kind (q_query, b) → (q, c, L/R): the step out of q_query is determined by the oracle, not by the transition function.

  6. Oracle Turing Machines. We can define a nondeterministic oracle TM similarly.

  7. Oracle Turing Machines. Important note: Oracle TMs (deterministic/nondeterministic) have the same two features used in diagonalization. For any fixed L ⊆ {0,1}*: 1. There's an efficient universal TM with oracle access to L. 2. Every M^L has infinitely many representations.

  8. Relativization

  9. Complexity classes using oracles. Definition: Let L ⊆ {0,1}* be a language. The complexity classes P^L, NP^L and EXP^L are defined just as P, NP and EXP respectively, but with TMs replaced by oracle TMs having oracle access to L.

  10. Complexity classes using oracles. Example: SAT ∈ P^SAT (on an input formula, query the oracle on that formula and answer accordingly).

  11. Relativizing results. Observation: Let L ⊆ {0,1}* be an arbitrarily fixed language. Owing to the Important note (slide 7), the proof of P ≠ EXP can be easily adapted to prove P^L ≠ EXP^L by working with TMs with oracle access to L. We say that the P ≠ EXP result relativizes.

  12. Relativizing results. Observation: Let L ⊆ {0,1}* be an arbitrarily fixed language. Owing to the Important note (slide 7), any proof/result that uses only the two features of diagonalization relativizes.

  13. Relativizing results. Is it true that either P^L = NP^L for every L ⊆ {0,1}*, or P^L ≠ NP^L for every L ⊆ {0,1}*?

  14. Relativizing results. Theorem (Baker-Gill-Solovay): The answer is No. Consequently, any proof of P = NP or P ≠ NP must not relativize, i.e., it cannot use only the two features of diagonalization.

  16. Baker-Gill-Solovay theorem. Theorem: There exist languages A and B such that P^A = NP^A but P^B ≠ NP^B. Proof: Using diagonalization!

  17. Baker-Gill-Solovay theorem. Proof: Let A = {(M, x, 1^m) : M accepts x within 2^m steps}. A is an EXP-complete language under poly-time Karp reductions (simple exercise). Then P^A = EXP: EXP ⊆ P^A via the Karp reduction to A, and P^A ⊆ EXP since each oracle query to A can be answered by direct simulation in exponential time. The same argument gives NP^A = EXP. Hence P^A = NP^A.
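
To picture the EXP ⊆ P^A direction: the sketch below (illustrative names of my own, e.g. M_description and oracle_A; not the lecture's code) decides a language L' ∈ EXP, accepted by a machine M within 2^(n^c) steps, using a single polynomial-length oracle query.

    def decide_Lprime_in_P_A(x, M_description, c, oracle_A):
        # Build the query (M, x, 1^m) with m = |x|^c; its length is polynomial in |x|,
        # and it belongs to A = {(M, x, 1^m) : M accepts x within 2^m steps}
        # exactly when x is in L'.
        m = len(x) ** c
        query = (M_description, x, "1" * m)
        return oracle_A(query)  # a single oracle query; its cost is not counted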

  18. Baker-Gill-Solovay theorem. Question: Why isn't EXP^A = EXP? (Hint: an EXP^A machine can write down queries of exponential length.)

  19. Baker-Gill-Solovay theorem. Proof (contd.): For any language B, let L_B = {1^n : there's a string of length n in B}.

  20. Baker-Gill-Solovay theorem. Observe: L_B ∈ NP^B for any B. (Guess a string, check that it has length n, and ask the oracle B to verify membership.)
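
As a small illustration of this guess-and-check step (a sketch with names of my own such as oracle_B, not the lecture's formalism): the nondeterministic guess can be viewed as a certificate handed to a deterministic verifier that makes one oracle query.

    def verify_LB_certificate(input_string, certificate_w, oracle_B):
        # input_string is expected to be 1^n; the certificate is the guessed string w.
        n = len(input_string)
        if input_string != "1" * n:
            return False               # input not of the form 1^n
        return len(certificate_w) == n and oracle_B(certificate_w)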

  21. Baker-Gill-Solovay theorem. We'll construct B (using diagonalization) in such a way that L_B ∉ P^B, implying P^B ≠ NP^B.

  22. Constructing B. We'll construct B in stages, starting from Stage 1. Each stage determines the status of finitely many strings. In Stage i, we'll ensure that the oracle TM M_i^B doesn't decide 1^n correctly (for some n) within 2^n/10 steps. Moreover, n will grow monotonically with the stages.

  23. Constructing B. (Here, M_i^B denotes the machine with oracle access to B that is represented by i, and the "status" of a string means whether or not it belongs to B.)

  24. Constructing B. Clearly, a B satisfying the above implies L_B ∉ P^B. Why? (Any poly-time oracle TM equals M_i for infinitely many i, and for a large enough stage its running time on 1^n is below 2^n/10, so it decides 1^n incorrectly.)

  25. Constructing B. Stage i: Choose n larger than the length of any string whose status has already been decided. Simulate M_i^B on input 1^n for 2^n/10 steps.

  26. Constructing B. Stage i (contd.): If M_i^B queries oracle B with a string whose status has already been decided, answer consistently. If M_i^B queries oracle B with a string whose status has not been decided yet, answer "No".

  27. Constructing B. Stage i (contd.): If M_i^B outputs 1 within 2^n/10 steps, then don't put any string of length n in B. If M_i^B outputs 0 or doesn't halt, put an as-yet-unqueried string of length n in B. (This is possible, as the status of at most 2^n/10 < 2^n strings of length n has been decided during the simulation.)
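
The whole stage construction can be summarised in Python-flavoured pseudocode (a sketch under my own assumptions: machines stands for an enumeration of oracle TMs, simulate for a step-bounded simulator that routes queries through the supplied oracle function, and all_binary_strings for an enumerator of length-n strings; none of this is literally executable over all TMs).

    def construct_B(machines, num_stages):
        in_B = set()       # strings put into B
        out_B = set()      # strings explicitly fixed to lie outside B
        frontier = 0       # every decided string has length <= frontier

        for i in range(num_stages):
            n = frontier + 1                     # fresh length, beyond anything decided so far
            budget = 2 ** n // 10
            queried = set()

            def oracle(w):                       # answers M_i's queries during the simulation
                queried.add(w)
                if w in in_B:
                    return True
                out_B.add(w)                     # undecided strings are answered "No"
                return False

            outcome = simulate(machines[i], "1" * n, oracle, max_steps=budget)

            if outcome == "accept":
                pass                             # leave every length-n string out of B, so 1^n not in L_B
            else:                                # rejected, or did not halt within the budget
                w = next(s for s in all_binary_strings(n) if s not in queried)
                in_B.add(w)                      # now 1^n is in L_B, but M_i^B answered otherwise
            frontier = max(n, max((len(w) for w in queried), default=0))
        return in_B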

  28. Constructing B. Homework: In fact, we can assume that B ∈ EXP.

  29. Space Complexity

  30. Space bounded computation. Here, we are interested in how much work space is required to solve a problem. For convenience, think of TMs with a separate (read-only) input tape and one or more work tapes. Work space is the number of cells on the work tapes of a TM M visited by M's heads during a computation.

  31. Space bounded computation. Definition: Let S: N → N be a function. A language L is in DSPACE(S(n)) if there's a TM M that decides L using O(S(n)) work space on inputs of length n.

  32. Space bounded computation. Definition: Let S: N → N be a function. A language L is in NSPACE(S(n)) if there's an NTM M that decides L using O(S(n)) work space on inputs of length n, regardless of M's nondeterministic choices.

  33. Space bounded computation. We'll simply refer to work space as "space". For convenience, assume there's a single work tape.

  34. Space bounded computation. Definition: Let S: N → N be a function. S is space constructible if there's a TM that computes S(|x|) from x using O(S(|x|)) space.

  35. Relation between time and space. Obs: DTIME(S(n)) ⊆ DSPACE(S(n)) ⊆ NSPACE(S(n)). Theorem: NSPACE(S(n)) ⊆ DTIME(2^O(S(n))), if S is space constructible. Proof: uses the notion of the configuration graph of a TM; we'll see this shortly.

  36. Relation between time and space. Definition: L = DSPACE(log n), NL = NSPACE(log n), PSPACE = ⋃_{c > 0} DSPACE(n^c).

  37. Relation between time and space. Giving a TM at least log n space gives it at least the power to remember the index of a cell of the input tape.

  38. Relation between time and space. Theorem: L ⊆ NL ⊆ P ⊆ NP ⊆ PSPACE ⊆ EXP. (For NP ⊆ PSPACE: run through all certificate choices of the verifier and reuse space; see the sketch below.)
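
A minimal sketch of the "reuse space" idea (illustrative code of my own, assuming a poly-time verifier function is available): the certificates are tried one after another on the same work space, so the total space is one certificate plus the verifier's space, even though the running time is exponential.

    from itertools import product

    def decide_by_certificate_search(x, verifier, cert_length):
        # Enumerate all 2^cert_length certificates, overwriting the previous one each
        # time; only the space for the current certificate and the verifier's own
        # work space is ever in use.
        for bits in product("01", repeat=cert_length):
            u = "".join(bits)
            if verifier(x, u):
                return True
        return False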

  39. Relation between time and space. (The inclusions NL ⊆ P and PSPACE ⊆ EXP follow from the above theorem, NSPACE(S(n)) ⊆ DTIME(2^O(S(n))).)

  40. Relation between time and space. (Figure: nested inclusion diagram showing L ⊆ NL ⊆ P, with P contained in both NP and co-NP, and NP, co-NP ⊆ PSPACE ⊆ EXP.)

  41. Configuration graph Definition. A configuration of a TM M on input x, at any particular step of its execution, consists of (a) the nonblank symbols of its work tapes, (b) the current state, (c) the current head positions. It captures a snapshot of M at any particular moment of execution.

  42. Configuration graph. (Figure: a configuration C drawn as a record containing the input head index, the work tape head index, the state information, and the work tape content b_1 ... b_S(n).)

  43. Configuration graph. Note: A configuration C can be represented using O(S(n)) bits if M uses S(n) ≥ log n space on n-bit inputs.
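
As a concrete, purely illustrative rendering of such an encoding (field names are my own): the input head index needs O(log n) bits, the work head index O(log S(n)) bits, the state O(1) bits, and the work tape content O(S(n)) bits, which is O(S(n)) in total once S(n) ≥ log n.

    from dataclasses import dataclass

    @dataclass(frozen=True)          # frozen, so configurations are hashable graph nodes
    class Configuration:
        state: str                   # current state: constantly many choices, O(1) bits
        input_head: int              # position on the read-only input tape: O(log n) bits
        work_head: int               # position on the work tape: O(log S(n)) bits
        work_tape: str               # the S(n) nonblank work-tape cells: O(S(n)) bits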

  44. Configuration graph. Definition: The configuration graph of a TM M on input x, denoted G_{M,x}, is a directed graph whose nodes are all the possible configurations of M on input x. There's an edge from one configuration C_1 to another C_2 if C_2 can be reached from C_1 by one application of M's transition function(s).

  45. Configuration graph. The number of nodes in G_{M,x} is 2^O(S(n)) if M uses S(n) ≥ log n space on n-bit inputs.

  46. Configuration graph. If M is a DTM, then every node C in G_{M,x} has at most one outgoing edge. If M is an NTM, then every node C in G_{M,x} has at most two outgoing edges.

  47. Configuration graph. (Figure: a configuration graph of a DTM next to a configuration graph of an NTM, where a node C_1 has two outgoing edges labelled by the nondeterministic choices 0 and 1.)

  48. Configuration graph. By erasing the contents of the work tape at the end, bringing the head to the beginning, and having a single q_accept state, we can assume that there's a unique accepting configuration C_accept. The starting configuration C_start is well defined.

  49. Configuration graph. M accepts x if and only if there's a path from C_start to C_accept in G_{M,x}.

  50. Relation between time and space. Theorem: NSPACE(S(n)) ⊆ DTIME(2^O(S(n))), if S is space constructible. Proof: Let L ∈ NSPACE(S(n)) and let M be an NTM deciding L using O(S(n)) space on length-n inputs. On input x, compute the configuration graph G_{M,x} of M and check whether there's a path from C_start to C_accept. The graph has 2^O(S(n)) nodes, so the running time is 2^O(S(n)).
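
The path check can be pictured as a plain breadth-first search over the configuration graph (an illustrative sketch; the successors function, which would come from M's transition relation, and the configuration nodes are assumptions of mine). Each of the 2^O(S(n)) configurations is visited at most once, giving the claimed deterministic time bound.

    from collections import deque

    def reaches_accepting_config(start_config, accept_config, successors):
        # BFS over G_{M,x}: each configuration is enqueued at most once, so the
        # running time is proportional to the number of configurations, 2^O(S(n)).
        seen = {start_config}
        queue = deque([start_config])
        while queue:
            c = queue.popleft()
            if c == accept_config:
                return True              # path from C_start to C_accept found: M accepts x
            for nxt in successors(c):    # at most two successors when M is an NTM
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False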
