Exploring Randomness in Computation: Insights and Prospects
Delve into the realm of randomness in computation as discussed by prominent researchers including Moni Naor, Ran Raz, Luca Trevisan, Salil Vadhan, Avi Wigderson, and Omer Reingold. Uncover the significance of randomness in various facets of computing such as distributed computing, cryptography, communication complexity, and the debate on the necessity of randomness in algorithm design. Discover the frontiers and barriers in the realm of randomness versus memory, along with the applications of oblivious derandomization and randomness extractors in computational theory.
Presentation Transcript
RANDOMNESS VS. MEMORY: Prospects and Barriers
Omer Reingold, Microsoft Research and Weizmann
With insights courtesy of Moni Naor, Ran Raz, Luca Trevisan, Salil Vadhan, Avi Wigderson, and many more
Randomness In Computation (1): Can't live without you
- Distributed computing (breaking symmetry)
- Cryptography: secrets, semantic security, sampling, simulations, ...
Randomness In Computation (2): You change my world
- Communication complexity (e.g., equality)
- Routing (on the hypercube [Valiant]): drastically reduces congestion
Randomness In Computation (3): Do I really need you?
- In algorithms, a useful design tool, but many times we can derandomize (e.g., PRIMES in P). Is this always the case?
- RL = L means that every randomized (log-space) algorithm can be derandomized with only a constant-factor increase in memory.
Talk's Premise
- The many frontiers of RL = L (e.g., RL ⊆ L^{3/2}) and the barriers of previous proofs yield a wealth of excellent research problems.
- RL = L and beyond.
RL (and NL) ⊆ L^2 [Savitch '70]
- Configuration graph (one per RL algorithm and input x): s = start config, t = accept config, poly(|x|) configs in total; transitions are on the current random bit, and configs are duplicated per time step (running time T = poly(|x|)) so the graph is layered.
- x ∈ P ⇒ a random walk from s ends at t w.p. ≥ 1/2; x ∉ P ⇒ t is unreachable from s.
- Enumerating all possible paths is too expensive. Main idea: the 1st half of the computation only transmits log n bits to the 2nd half.
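To make the "transmit only log n bits" idea concrete, here is a minimal Python sketch of the Savitch midpoint recursion on an abstract configuration graph; the `configs`/`edge` interface and the toy graph are illustrative, not part of the talk.

```python
def reachable(configs, edge, u, v, k):
    """True iff v is reachable from u in at most 2**k steps.

    Instead of enumerating paths, guess the midpoint w and recurse on both
    halves. Each frame stores O(1) configurations, and with poly(n) configs
    k = O(log n) suffices, giving O(log^2 n) space overall.
    """
    if k == 0:
        return u == v or edge(u, v)
    return any(
        reachable(configs, edge, u, w, k - 1)      # first half reaches w ...
        and reachable(configs, edge, w, v, k - 1)  # ... second half continues from w
        for w in configs
    )

# Toy configuration graph: a path 0 -> 1 -> 2 -> 3.
configs = range(4)
edge = lambda u, v: v == u + 1
assert reachable(configs, edge, 0, 3, k=2)  # 3 steps <= 2**2
```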
Oblivious Derandomization of RL
- Pseudorandom generators that fool space-bounded algorithms [AKS87, BNS89, Nisan90, NZ93, INW94]
- Nisan's generator has seed length log^2 n: a proof that RL ⊆ L^2 via oblivious derandomization
- Major tool in the study of RL vs. L; applications beyond [Ind00, Siv02, KNO05, ...]
- Open problem: PRGs with reduced seed length
Randomness Extractors @ Your Service
- Basic idea [NZ93] (related to Nisan's generator): let a log-space algorithm A read a random 100 log n-bit string x. Since A remembers at most log n bits, x still contains (roughly) 99 log n bits of entropy (independent of A's state).
- So we can recycle x: G(x, y) = (x, Ext(x, y))
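As a toy illustration of the recycling step, the sketch below uses a linear hash mod a prime as a stand-in extractor (a 2-universal family, in the spirit of the leftover hash lemma); the parameters and names are illustrative, not the actual [NZ93] construction.

```python
import random

P = (1 << 61) - 1  # Mersenne prime; x -> (a*x + b) mod P is a 2-universal hash family

def ext(x, y, m_bits):
    """Toy extractor: hash source x with seed y = (a, b), keep m_bits bits."""
    a, b = y
    return ((a * x + b) % P) & ((1 << m_bits) - 1)

def G(x, y, m_bits=20):
    # x is reused in the output: an observer that has read x but remembers
    # only O(log n) bits of it still sees Ext(x, y) as close to uniform.
    return x, ext(x, y, m_bits)

x = random.getrandbits(60)  # the slide's "100 log n"-bit source, kept < P here
y = (random.randrange(1, P), random.randrange(P))
print(G(x, y))
```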
Randomness Extractors @ Your Service
- NZ generator: G(x, y_1, y_2, ...) = (x, Ext(x, y_1), Ext(x, y_2), ...)
- Possible setting of parameters: x is O(log n) long; each y_i is O(√log n) long, with √log n y_i's. Expands O(log n) bits to O(log^{3/2} n) (can get any polynomial expansion).
- Error ≫ 1/n ([AKS87] gets almost log^2 n bits with error 1/n)
- Open: get any polynomial expansion with error 1/n
- Open: super-polynomial expansion with logarithmic seed and constant error (partial result [RR99])
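A hypothetical rendering of the NZ expansion with the same toy extractor; the block count and sizes are illustrative, not the actual NZ parameters.

```python
import random

P = (1 << 61) - 1

def ext(x, y, m_bits):
    a, b = y
    return ((a * x + b) % P) & ((1 << m_bits) - 1)

def nz_generator(x, seeds, m_bits=20):
    """G(x, y_1, ..., y_k) = (x, Ext(x, y_1), ..., Ext(x, y_k)).

    Each block recycles the same x: a log-space machine that has read the
    previous blocks retains little information about x, so enough entropy
    remains in x for the next extraction."""
    return [x] + [ext(x, y, m_bits) for y in seeds]

x = random.getrandbits(60)
seeds = [(random.randrange(1, P), random.randrange(P)) for _ in range(8)]
blocks = nz_generator(x, seeds)  # ~8 * 20 output bits from one short x
```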
Nisan, INW: Generators via Extractors
- Recall the basic generator: G(x, y) = (x, Ext(x, y)). Let's flip it:
Nisan,INW Generators via Extractors x,y Altogether: seed length = log 2n Given state of machine in the middle, Ext(x,y) still -random Loss at each level: log n (possible entropy in state). + log 1/ for extractor seed, where = /n Ext(x,y) x log n
Nisan/INW + NZ ⇒ RL = L?
- Let M be an RL machine. Using [Nisan], get M' that uses only log^2 n random bits; fully derandomize M' using [NZ]. Or does that work?
- M' is not an RL machine (its access to the seed of [Nisan, INW] is not read-once).
- Can we build PRGs from read-once ingredients? Not too promising.
- Still, a natural approach: derandomize the seed of [Nisan].
RL ⊆ L^{3/2} [SZ95]: derandomizing [Nis]
- Nisan's generator has the following properties: the seed is divided into h (length log^2 n) and x (length log n); given h on the input tape, the generator runs in L.
- For every M, w.h.p. over h, fixing h and ranging over x yields a good generator for M.
- h is shorter if we generate fewer than n bits.
[SZ95]: basic idea
- Fix h, and divide the run of M into segments. Enumerate over x and estimate all transition probabilities; replace each segment with a single transition. The resulting M' is close to some t-th power of M.
- Recurse using the same h. Problem: now M' depends on h.
- [SZ]: perturb M' to eliminate the dependency on h.
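A toy rendering of the segment-collapse step, assuming (illustratively) that a segment is given as a function from (state, short seed x) to next state:

```python
def estimate_segment(run_segment, n_states, x_bits):
    """Empirical transition matrix of one segment, enumerating all 2^x_bits
    seeds x (the fixed h is baked into run_segment here)."""
    N = 1 << x_bits
    T = [[0.0] * n_states for _ in range(n_states)]
    for u in range(n_states):
        for x in range(N):
            T[u][run_segment(u, x)] += 1.0 / N
    return T

def perturb(T, precision=2 ** -8):
    # SZ-style rounding: snapping the estimates to a coarse grid is what
    # breaks the collapsed machine's dependence on h, so the same h can be
    # reused at the next level of the recursion.
    return [[precision * int(p / precision) for p in row] for row in T]

# Toy 2-state segment: the state flips iff the seed has odd parity.
seg = lambda u, x: u ^ (bin(x).count("1") & 1)
T = perturb(estimate_segment(seg, n_states=2, x_bits=4))
```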
[SZ95]: further progress?
- Open: translate [SZ] into a better generator against space-bounded algorithms! Potentially, one could then recursively apply [SZ] and get a better derandomization of RL (after a constant number of iterations, may get RL ⊆ L^{1+ε}).
- Armoni showed an interesting interpolation between [NZ] and [INW], and as a result got a slight improvement (RL ⊆ L^{3/2}/(log L)^{1/2}).
Thoughts on Improving INW
- Loss at each level: log n (possible entropy in the state) + log(1/ε) for the extractor seed, where ε = δ/n. Even for combinatorial rectangles we do not know optimal PRGs.
- Avoiding the loss due to the entropy in the state: [RR99] recycle the entropy of the states. Challenge: how to do this when the state probabilities are not known?
- Open: better PRGs against constant-width branching programs.
Thoughts on Improving INW
- Avoiding the loss due to the extractor seeds: can we recycle y from the previous computation, i.e., use Ext(x, y(x))? Challenge: containing the dependencies.
- Do we need a seed at all? Could seedless extractors be used instead?
Thoughts on Improving INW
- The extractor seed is long because we need to work with small error ε = δ/n. Error reduction for PRGs?
- If we use error ε = δ/log n, the sequence still has some unpredictability property; is that usable? (Yes for SL [R04, RozVad05]!)
Final Comment on Improving INW
- Perhaps instead of reducing the loss per level, we should reduce the number of levels? This means that at each level the number of pseudorandom strings should increase more rapidly (e.g., quadratically).
- A specific approach is based on ideas from cryptography (constructions of PRFs from pseudorandom synthesizers [NR]); it is more complicated to apply here.
It's all About Graph Connectivity
- Directed connectivity captures NL; undirected connectivity is in L [R04]. Where is RL on this scale? Connectivity for digraphs with polynomial mixing time captures RL [RTV05].
- Oblivious derandomization: pseudo-converging walks for consistently labelled regular digraphs [R04, RTV05]. A walk on a consistently labelled graph cannot lose entropy.
- (Outgoing edges have labels; a consistent labelling means that each label forms a permutation on the vertices.)
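The permutation condition is easy to state in code. A small illustrative check (assuming one out-edge per label per vertex, so only injectivity needs testing):

```python
def is_consistently_labelled(edges):
    """edges: (u, v, label) triples of a d-out-regular labelled digraph.
    Consistent labelling = each label's edge map is a permutation, i.e.
    no two edges with the same label enter the same vertex."""
    seen = set()
    for _, v, lab in edges:
        if (lab, v) in seen:
            return False  # two edges with label `lab` collide at v
        seen.add((lab, v))
    return True

# A consistently labelled 2-regular digraph on 4 vertices:
# label 0 rotates the cycle, label 1 is the identity (self-loops).
cycle = [(i, (i + 1) % 4, 0) for i in range(4)] + [(i, i, 1) for i in range(4)]
assert is_consistently_labelled(cycle)
```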
Towards RL vs. L?
In L:
- Connectivity for undirected graphs [R04]
- Connectivity for regular digraphs [RTV05] (it is not about reversibility but about regularity)
- Pseudo-converging walks for consistently-labelled regular digraphs [R04, RTV05] (in fact, it is about having estimates on the stationary probabilities [CRV07])
Would suffice to prove RL = L:
- Pseudo-converging walks for regular digraphs [RTV05]
Captures RL:
- Connectivity for digraphs with polynomial mixing time [RTV05]
Some More Open Problems
- Pseudo-converging walks on an (inconsistently labelled) clique. (Similarly, universal traversal sequences.)
- Undirected Dirichlet Problem. Input: an undirected graph G, a vertex s, a set B of vertices, and a function f: B → [0, 1]. Output: an estimate of E[f(b)], where b is the entry point of a random walk from s into B.
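The randomized baseline for the Dirichlet problem is a plain Monte Carlo walk (the open question is a deterministic small-space analogue); the graph encoding and trial count below are illustrative.

```python
import random

def dirichlet_estimate(adj, s, B, f, trials=10_000, seed=0):
    """Estimate E[f(b)], b = first vertex of B hit by a random walk from s."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v = s
        while v not in B:
            v = rng.choice(adj[v])  # one step of the simple random walk
        total += f[v]
    return total / trials

# Path 0-1-2-3 with B = {0, 3}: starting at 1, the walk hits 0 first
# with probability 2/3 (gambler's ruin), so the estimate should be ~2/3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
est = dirichlet_estimate(adj, s=1, B={0, 3}, f={0: 1.0, 3: 0.0})
```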
Conclusions
- A richness of research directions and open problems towards RL = L and beyond: PRGs against space-bounded computations; directed connectivity.
- Even if you think that NL = L is plain crazy, there are many interesting questions and some beautiful research.