Randomness in Computation: Insights and Prospects

RANDOMNESS VS. MEMORY: Prospects and Barriers
Omer Reingold, Microsoft Research and Weizmann
With insights courtesy of Moni Naor, Ran Raz, Luca Trevisan, Salil Vadhan, Avi Wigderson, and many more…
Randomness In Computation (1)
Can't live without you:
• Distributed computing (breaking symmetry)
• Cryptography: secrets, semantic security, …
• Sampling, simulations, …
Randomness In Computation (2)
You change my world:
• Communication complexity (e.g., equality)
• Routing (on the cube [Valiant]): drastically reduces congestion
Randomness In Computation (3)
Do I really need you?
• In algorithms: a useful design tool, but often we can derandomize (e.g., PRIMES in P). Is that always the case?
• RL=L means that every randomized algorithm can be derandomized with only a constant factor increase in memory.
Talk's Premise: Many Frontiers of RL=L and Beyond
• Barriers of the previous proofs (e.g., RL ⊆ L^{3/2}) ⇒ a wealth of excellent research problems.
RL ⊆ (NL ⊆) L^2 [Savitch 70]
Configuration graph (per RL algorithm for P and input x):
• poly(|x|) configs, with transitions on the current random bit; duplicate the configs (running time T) ≤ poly(|x|) times
• s = start config, t = accept config
• x ∈ P ⇒ a random walk from s ends at t w.p. ≥ ½
• x ∉ P ⇒ t is unreachable from s
(Figure: layered configuration graph, edges labelled by the random bits 0/1.)
Enumerating all possible paths is too expensive. Main idea: the 1st half of the computation only transmits log n bits to the 2nd half.
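To make the recursion behind [Savitch 70] concrete, here is a minimal Python sketch (mine, not from the slides) of midpoint-guessing reachability on a configuration graph; recursion depth log T with O(log n) bits remembered per level is what gives the L^2 bound.

    # Minimal sketch of Savitch-style reachability (illustration only).
    # reach(adj, u, v, k) asks: is there a path from u to v of length <= 2**k?
    # Recursion depth k, with O(log n) bits per level, gives O(log^2 n)
    # space when k = O(log n) -- the L^2 simulation on the slide.

    def reach(adj, u, v, k):
        if u == v:
            return True
        if k == 0:
            return v in adj[u]
        # Guess a midpoint w; both recursive calls reuse the same space.
        return any(reach(adj, u, w, k - 1) and reach(adj, w, v, k - 1)
                   for w in adj)

    # Tiny example: a directed path 0 -> 1 -> 2 -> 3.
    adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
    print(reach(adj, 0, 3, 2))  # True: a path of length 3 <= 2**2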
Oblivious Derandomization of RL
• Pseudorandom generators that fool space-bounded algorithms [AKS87, BNS89, Nisan90, NZ93, INW94]
• Nisan's generator has seed length log^2 n
• Proof that RL ⊆ L^2 via oblivious derandomization
• Major tool in the study of RL vs. L
• Applications beyond [Ind00, Siv02, KNO05, …]
• Open problem: PRGs with reduced seed length
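For intuition on why seed length translates directly into space, here is a hedged sketch of oblivious derandomization by seed enumeration; `algorithm` and `prg` are hypothetical interfaces, not anything from the talk.

    # Sketch (assumed interfaces): derandomizing via a PRG by majority
    # vote over all 2**s seeds. If the PRG fools the space-bounded
    # algorithm, this decides the language deterministically in space
    # about O(s) beyond the algorithm's own; s = log^2 n gives L^2.

    from itertools import product

    def derandomize(algorithm, prg, seed_len, x):
        accept = sum(algorithm(x, prg(seed))
                     for seed in product([0, 1], repeat=seed_len))
        return 2 * accept > 2 ** seed_len  # majority over all seeds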
Randomness Extractors @ Your Service
Basic idea [NZ93] (related to Nisan's generator):
• Let a log-space algorithm A read a random 100 log n bit string x.
• Since A remembers at most log n bits, x still contains (roughly) 99 log n bits of entropy (independent of A's state).
• Can recycle x: G(x, y) = (x, Ext(x, y)).
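As one concrete (hedged) instantiation of the recycling step: by the leftover hash lemma, a pairwise-independent hash family such as Toeplitz hashing is a valid extractor. The parameters below are illustrative only, not those of [NZ93].

    # Sketch: recycling x via an extractor, G(x, y) = (x, Ext(x, y)).
    # Ext is Toeplitz hashing over GF(2): y describes an m x n Toeplitz
    # matrix, a pairwise-independent family, hence a (strong) extractor
    # by the leftover hash lemma. Illustrative parameters only.

    import random

    def toeplitz_ext(x_bits, y_bits, m):
        """Extract m bits from x; y (n + m - 1 bits) fixes the matrix."""
        n = len(x_bits)
        assert len(y_bits) == n + m - 1
        return [sum(y_bits[i - j + n - 1] & x_bits[j] for j in range(n)) % 2
                for i in range(m)]

    def G(x_bits, y_bits, m):
        return x_bits + toeplitz_ext(x_bits, y_bits, m)  # x reused as-is

    x = [random.randint(0, 1) for _ in range(8)]
    y = [random.randint(0, 1) for _ in range(8 + 4 - 1)]
    print(G(x, y, 4))  # the 8 recycled bits followed by 4 extracted bits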
Randomness Extractors @ Your Service
NZ generator: G(x, y_1, y_2, …) = (x, Ext(x, y_1), Ext(x, y_2), …)
• Possible setting of parameters: x is O(log n) bits long; each y_i is O(log^{1/2} n) bits long, and there are log^{1/2} n of the y_i's.
• Expand O(log n) bits to O(log^{3/2} n) (get any poly).
• Error >> 1/n ([AKS87] gets almost log^2 n bits w. error 1/n).
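Continuing the sketch above (same illustrative toeplitz_ext), the shape of the NZ generator is simply the same source fed to many fresh short seeds; note the caveat that in the real [NZ93] construction each seed is much shorter than x, which toy Toeplitz hashing cannot achieve.

    # Shape of the NZ generator: G(x, y1, y2, ...) = (x, Ext(x, y1), ...).
    # Reuses toeplitz_ext from the previous sketch; parameters are
    # illustrative, not those of [NZ93].

    def nz_generator(x_bits, seeds, m):
        out = list(x_bits)
        for y in seeds:                 # a fresh short seed per block
            out += toeplitz_ext(x_bits, y, m)
        return out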
Randomness Extractors @ Your Service
NZ generator (continued):
• Error >> 1/n ([AKS87] gets almost log^2 n bits w. error 1/n).
• Open: get any polynomial expansion w. error 1/n.
• Open: super-polynomial expansion with logarithmic seed and constant error (partial result [RR99]).
Nisan, INW Generators via Extractors
Recall the basic generator: G(x, y) = (x, Ext(x, y)).
Let's flip it…
Nisan, INW Generators via Extractors
• Given the state of the machine in the middle, Ext(x, y) is still ε-random.
• Loss at each level: log n (the possible entropy in the state), plus log(1/ε') for the extractor seed, where ε' = ε/n.
• Altogether: seed length = log^2 n.
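A hedged sketch of the flipped, INW-style recursion: the first half reuses x directly, the second half reuses x through the extractor, and the lower-level seed pieces are shared by both halves, so 2^k output blocks cost only k seed pieces. The `ext` below is a toy stand-in, not a real extractor.

    # Sketch of the INW-style recursion (illustration only). Each level
    # doubles the number of output blocks for one extra seed piece, so
    # log T levels suffice for T blocks; with blocks and seed pieces of
    # length O(log n), that is the log^2 n total seed.

    def ext(x, y):
        # Toy stand-in for an extractor: a y-keyed XOR pad. A genuine
        # construction would use, e.g., the Toeplitz hashing above.
        return tuple(xb ^ yb for xb, yb in zip(x, y))

    def inw(x, ys):
        """Expand x into 2**len(ys) blocks using seed pieces ys."""
        if not ys:
            return [x]                    # base case: one block
        *rest, y = ys
        left = inw(x, rest)               # first half: reuse x directly
        right = inw(ext(x, y), rest)      # second half: recycle via Ext
        return left + right               # rest is shared by both halves

    print(len(inw((0, 1, 1, 0), [(1, 0, 0, 1)] * 3)))  # 8 blocks, 3 seeds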
Nisan, INW + NZ ⇒ RL=L?
• Let M be an RL machine.
• Using [Nisan], get M' that uses only log^2 n random bits.
• Fully derandomize M' using [NZ]. Or does it?
• M' is not an RL machine (the access to the seed of [Nisan, INW] is not read-once).
• Still, a natural approach: derandomize the seed of [Nisan].
• Can we build PRGs from read-once ingredients? Not too promising…
RL ⊆ L^{3/2} [SZ95] - "derandomized" [Nis]
Nisan's generator has the following properties:
• The seed is divided into h (length log^2 n) and x (length log n).
• Given h on the input tape, the generator runs in L.
• For every M, w.h.p. over h, fixing h and ranging over x yields a good generator for M.
• h is shorter if we generate fewer than n bits.
[SZ95] - basic idea
• Fix h and divide the run of M into segments:
• Enumerate over x, estimating all transition probabilities.
• Replace each segment with a single transition.
• Recurse using the same h. Now M' depends on h.
• M' is close to some t-th power of M; [SZ] perturb M' to eliminate the dependency on h.
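A hedged sketch of the estimation step (the perturbation that removes the dependence on h is omitted); `run_segment` is an assumed interface for running one segment of M on the bits generated from (h, x).

    # Sketch of the [SZ95] collapsing step (illustration only).
    # For a fixed h, enumerate the short x-part to estimate each
    # segment's transition probabilities, then treat the whole segment
    # as a single transition and recurse with the same h.

    from itertools import product
    from collections import Counter

    def estimate_segment(run_segment, h, states, x_len):
        """Empirical transition probabilities of one segment, over all x."""
        counts = {s: Counter() for s in states}
        for x in product([0, 1], repeat=x_len):
            for s in states:
                counts[s][run_segment(h, x, s)] += 1
        total = 2 ** x_len
        return {s: {t: c / total for t, c in row.items()}
                for s, row in counts.items()}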
[SZ95] - further progress
• Open: translate [SZ] into a better generator against space-bounded algorithms!
• Potentially, one could then recursively apply [SZ] and get a better derandomization of RL (after a constant number of iterations, this may give RL ⊆ L^{1+ε}).
• Armoni showed an interesting interpolation between [NZ] and [INW], and as a result got a slight improvement (RL ⊆ L^{3/2}/(log L)^{1/2}).
Thoughts on Improving INW
• Loss at each level: log n (the possible entropy in the state), plus log(1/ε') for the extractor seed, where ε' = ε/n.
• Avoiding the loss due to entropy in the state: [RR99] recycle the entropy of the states.
• Challenge: how to do it when we do not know the state probabilities?
• Open: better PRGs against constant-width branching programs. Even for combinatorial rectangles we do not know "optimal" PRGs.
Thoughts on Improving INW
• Loss at each level: log n (the possible entropy in the state), plus log(1/ε') for the extractor seed, where ε' = ε/n.
• Avoiding the loss due to extractor seeds: can we recycle y from the previous computation? Challenge: contain the dependencies…
• Do we need a seed at all? Use seedless extractors instead, i.e., Ext(x, y(x))?
Thoughts on Improving INW
• Loss at each level: log n (the possible entropy in the state), plus log(1/ε') for the extractor seed, where ε' = ε/n.
• The extractor seed is long because we need to work with small error ε' = ε/n. Error reduction for PRGs?
• If we use error ε' = ε/(log n), the sequence still has some unpredictability property; is it usable? (Yes for SL [R04, RozVad05]!)
Final Comment on Improving INW
• Perhaps, instead of reducing the loss per level, we should reduce the number of levels?
• This means that at each level the number of pseudorandom strings should increase more rapidly, e.g., quadratically (sketched below).
• A specific approach is based on ideas from cryptography (constructions of PRFs from pseudorandom synthesizers [NR]), but it is more complicated to apply here.
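A hedged structural sketch of the [NR]-style growth: if a synthesizer S turns every pair of current strings into a fresh pseudorandom string, the number of strings squares at each level, so reaching N strings takes about log log N levels instead of log N. The `synth` below is a toy placeholder, not a cryptographic synthesizer.

    # Structural sketch only: squaring the number of strings per level.

    def synth(a, b):
        # Placeholder for a pseudorandom synthesizer as in [NR].
        return tuple(ab ^ bb for ab, bb in zip(a, b))

    def expand(strings, levels):
        for _ in range(levels):
            # Every ordered pair yields a string: quadratic growth,
            # so ~log log N levels reach N strings.
            strings = [synth(a, b) for a in strings for b in strings]
        return strings

    print(len(expand([(0, 1), (1, 0)], 3)))  # 2 -> 4 -> 16 -> 256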
It's all About Graph Connectivity
• Directed connectivity captures NL.
• Undirected connectivity is in L [R04].
• Oblivious derandomization: pseudo-converging walks for consistently labelled regular digraphs [R04, RTV05].
• Where is RL on this scale? Connectivity for digraphs w/ polynomial mixing time [RTV05].
• Outgoing edges have labels. A consistent labelling means that each label forms a permutation on the vertices. A walk on a consistently labelled graph cannot lose entropy.
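To make the definition concrete, a short sketch (mine): a labelling is consistent exactly when, for every label, the map sending each vertex to the head of its out-edge with that label is a permutation of the vertex set.

    # Sketch: checking consistent labelling. `graph` maps each vertex
    # to a dict {label: head of the out-edge carrying that label}.

    def is_consistently_labelled(graph, labels):
        vertices = set(graph)
        for lab in labels:
            heads = [graph[v][lab] for v in vertices]
            if set(heads) != vertices:   # label is not a permutation
                return False
        return True

    # A 3-cycle: label 'a' steps forward, label 'b' steps backward.
    cycle = {0: {'a': 1, 'b': 2}, 1: {'a': 2, 'b': 0}, 2: {'a': 0, 'b': 1}}
    print(is_consistently_labelled(cycle, ['a', 'b']))  # True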
Towards RL vs. L?
The scale, from what is known to what would suffice:
• Connectivity for undirected graphs [R04] (in L)
• Connectivity for regular digraphs [RTV05]
• Pseudo-converging walks for consistently-labelled regular digraphs [R04, RTV05]
• Pseudo-converging walks for regular digraphs [RTV05] (would suffice to prove RL=L)
• Connectivity for digraphs w/ polynomial mixing time [RTV05]
• RL
It is not about reversibility but about regularity. In fact, it is about having estimates on the stationary probabilities [CRV07].
Some More Open Problems
• Pseudo-converging walks on an (inconsistently labelled) clique. (Similarly: universal traversal sequences.)
• Undirected Dirichlet Problem:
  Input: an undirected graph G, a vertex s, a set B of vertices, and a function f: B → [0, 1].
  Output: an estimate of the expected value of f(b), where b is the entry point of the random walk (started at s) into B.
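For reference, the randomized algorithm this problem asks to derandomize is just Monte Carlo averaging over walks; a minimal sketch (mine), assuming the walk from s always reaches B:

    # Sketch: randomized baseline for the undirected Dirichlet problem.
    # Walk from s until entering B; average f over the entry points.

    import random

    def dirichlet_estimate(adj, s, B, f, trials=10_000):
        total = 0.0
        for _ in range(trials):
            v = s
            while v not in B:
                v = random.choice(adj[v])  # one random-walk step
            total += f[v]                  # f at the entry point into B
        return total / trials

    # Path 0-1-2, boundary B = {0, 2}: from s = 1 each side w.p. 1/2.
    adj = {0: [1], 1: [0, 2], 2: [1]}
    print(dirichlet_estimate(adj, 1, {0, 2}, {0: 0.0, 2: 1.0}))  # ~0.5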
Conclusions
A richness of research directions and open problems towards RL=L and beyond:
• PRGs against space-bounded computations
• Directed connectivity. Even if you think that NL=L is plain crazy, there are many interesting questions and some beautiful research…