Peer-to-Peer Systems and Distributed Hash Tables

COS 418: Advanced Computer Systems
Lecture 8
Mike Freedman
[Credit: Slides Adapted from Kyle Jamieson and Daniel Suo]

Today
1. Peer-to-Peer Systems: Napster, Gnutella, BitTorrent, challenges
2. Distributed Hash Tables
3. The Chord Lookup Service
4. Concluding thoughts on DHTs, P2P

What is a Peer-to-Peer (P2P) system?
[Figure: five nodes connected to one another across the Internet]
A distributed system architecture:
No centralized control
Nodes are roughly symmetric in function
Large number of unreliable nodes

Why might P2P be a win?
High capacity for services through parallelism:
Many disks
Many network connections
Many CPUs
Absence of a centralized server may mean:
Less chance of service overload as load increases
Easier deployment
A single failure won't wreck the whole system
System as a whole is harder to attack

P2P adoption
Successful adoption in some niche areas:
1. Client-to-client (legal, illegal) file sharing
2. Digital currency: no natural single owner (Bitcoin)
3. Voice/video telephony: user to user anyway
Issues: Privacy and control

Example: Classic BitTorrent
1. User clicks on download link
Gets torrent file with content hash, IP addr of tracker
2. User's BitTorrent (BT) client talks to tracker
Tracker tells it the list of peers who have the file
3. User's BT client downloads file from peers
4. User's BT client tells tracker it has a copy now, too
5. User's BT client serves the file to others for a while
Provides huge download bandwidth, without expensive server or network links

The lookup problem
[Figure: nodes N1-N6 connected across the Internet; Publisher (N4) does put("Pacific Rim.mp4", [content]) and a Client asks get("Pacific Rim.mp4") without knowing where the data lives]

Centralized lookup (Napster)
[Figure: Publisher (N4) registers SetLoc("Pacific Rim.mp4", IP address of N4) with a central DB; the Client asks the DB Lookup("Pacific Rim.mp4"); key="Pacific Rim.mp4", value=[content]]
Simple, but O(N) state and a single point of failure

Flooded queries (original Gnutella)
[Figure: the Client's Lookup("Pacific Rim.mp4") is flooded to every peer; Publisher (N4) holds key="Star Wars.mov", value=[content]]
Robust, but O(N = number of peers) messages per lookup

Routed DHT queries (Chord)
[Figure: the Client's Lookup(H(data)) is routed hop by hop toward Publisher (N4), which holds key="H(audio data)", value=[content]]
Can we make it robust, with reasonable state and a reasonable number of hops?

Today
1. Peer-to-Peer Systems
2. Distributed Hash Tables
3. The Chord Lookup Service
4. Concluding thoughts on DHTs, P2P

What is a DHT (and why)?
Local hash table:
key = Hash(name)
put(key, value)
get(key) → value
Service: Constant-time insertion and lookup
How can I do (roughly) this across millions of hosts on the Internet?
Distributed Hash Table (DHT)

What is a DHT (and why)?
Distributed Hash Table:
key = hash(data)
lookup(key) → IP addr (Chord lookup service)
send-RPC(IP address, put, key, data)
send-RPC(IP address, get, key) → data
Partitioning data in large-scale distributed systems:
Tuples in a global database engine
Data blocks in a global file system
Files in a P2P file-sharing system
App may be distributed over many nodes
DHT distributes data storage over many nodes

Cooperative storage with a DHT
[Figure: a distributed application calls put(key, data) and get(key) → data on a distributed hash table (DHash), which uses the lookup service (Chord): lookup(key) → node IP address, across many nodes]

BitTorrent over DHT
BitTorrent can use a DHT instead of (or along with) a tracker
BitTorrent clients use the DHT:
Key = file content hash ("infohash")
Value = IP address of a peer willing to serve the file
Can store multiple values (i.e., IP addresses) for a key
Client does:
get(infohash) to find other clients willing to serve
put(infohash, my-ipaddr) to identify itself as willing
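The get/put pattern above can be sketched against a toy in-memory DHT. The `InMemoryDHT` class and all names below are illustrative assumptions, not a real BitTorrent client API:

```python
# Toy stand-in for a DHT that, like BitTorrent's, keeps multiple
# values (peer addresses) per key. Names here are assumptions for
# illustration, not a real library.

class InMemoryDHT:
    def __init__(self):
        self.table = {}

    def put(self, key, value):
        # A key maps to a *set* of values, one per announcing peer.
        self.table.setdefault(key, set()).add(value)

    def get(self, key):
        return self.table.get(key, set())

dht = InMemoryDHT()
infohash = "a94a8fe5"                 # placeholder file content hash
dht.put(infohash, "203.0.113.7")      # a peer announces itself
dht.put(infohash, "198.51.100.9")     # another willing peer
peers = dht.get(infohash)             # a client looks for peers
assert peers == {"203.0.113.7", "198.51.100.9"}
```

The multi-value `put` mirrors the slide's point that a single infohash can map to many willing peers at once.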

Why the put/get DHT interface?
API supports a wide range of applications
DHT imposes no structure/meaning on keys
Key/value pairs are persistent and global
Can store keys in other DHT values
And thus build complex data structures

Why might DHT design be hard?
Decentralized: no central authority
Scalable: low network traffic overhead
Efficient: find items quickly (latency)
Dynamic: nodes fail, new nodes join

Today
1. Peer-to-Peer Systems
2. Distributed Hash Tables
3. The Chord Lookup Service
Basic design
Integration with the DHash DHT, performance

Chord lookup algorithm properties
Interface: lookup(key) → IP address
Efficient: O(log N) messages per lookup
N is the total number of servers
Scalable: O(log N) state per node
Robust: survives massive failures
Simple to analyze

Chord identifiers
Key identifier = SHA-1(key)
Node identifier = SHA-1(IP address)
SHA-1 distributes both uniformly
How does Chord partition data?
i.e., map key IDs to node IDs

Consistent hashing [Karger 97]
[Figure: circular 7-bit ID space with keys K5, K20, K80 and nodes N32, N90, N105]
Key is stored at its successor: the node with the next-higher ID

Chord: Successor pointers
[Figure: ring of nodes N10, N32, N60, N90, N105, N120, each pointing to its successor; key K80 falls between N60 and N90]
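The key-to-successor rule can be sketched in a few lines. This is a minimal illustration of consistent hashing (the helper names and the small ring are assumptions, not Chord's actual code):

```python
import bisect
import hashlib

# Map keys to nodes by consistent hashing: IDs live on a circular
# space, and a key is stored at its successor -- the first node ID
# at or after the key ID, wrapping around the circle.

ID_BITS = 7  # the slides' example uses a circular 7-bit ID space

def ident(name: str) -> int:
    """Hash a name (key or node address) onto the ID circle."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** ID_BITS)

def successor(node_ids: list[int], key_id: int) -> int:
    """First node clockwise from key_id (wrapping past the top)."""
    i = bisect.bisect_left(node_ids, key_id)
    return node_ids[i % len(node_ids)]  # wrap: past the top -> lowest

nodes = sorted([10, 32, 60, 90, 105, 120])  # the ring from the figure
assert successor(nodes, 80) == 90    # K80 is stored at N90
assert successor(nodes, 121) == 10   # wraps around past N120
```

Note how adding or removing one node only moves the keys in one arc of the circle, which is the property that makes consistent hashing attractive.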

Basic lookup
[Figure: the query "Where is K80?" is forwarded from node to node around the ring until it reaches N90, K80's successor]

Simple lookup algorithm

Lookup(key-id)
  succ ← my successor
  if my-id < succ < key-id    // next hop
    call Lookup(key-id) on succ
  else                        // done
    return succ

Correctness depends only on successors

Improving performance
Problem: Forwarding through successor is slow
Data structure is a linked list: O(n)
Idea: Can we make it more like a binary search?
Need to be able to halve the distance at each step
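The successor-only lookup can be sketched as a ring walk, and counting hops makes the linked-list problem concrete. A minimal sketch (illustrative, not the paper's code), reusing the example ring from the earlier slides:

```python
# Walk the ring one successor at a time until the key's successor is
# found. Hop count is O(n) -- the problem that finger tables fix.

def in_interval(x: int, a: int, b: int) -> bool:
    """True if x lies in the circular half-open interval (a, b]."""
    if a < b:
        return a < x <= b
    return x > a or x <= b  # the interval wraps past zero

def lookup(nodes: list[int], start: int, key_id: int) -> tuple[int, int]:
    """Return (node storing key_id, forwarding hops taken)."""
    ring = sorted(nodes)
    node, hops = start, 0
    while True:
        succ = ring[(ring.index(node) + 1) % len(ring)]
        if in_interval(key_id, node, succ):
            return succ, hops  # succ is the key's successor
        node, hops = succ, hops + 1

nodes = [10, 32, 60, 90, 105, 120]
assert lookup(nodes, 10, 80) == (90, 2)   # N10 -> N32 -> N60, answer N90
assert lookup(nodes, 60, 80) == (90, 0)   # already at K80's predecessor
```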

Finger table allows log N-time lookups
[Figure: node N80 keeps fingers pointing ½, ¼, 1/8, 1/16, 1/32, and 1/64 of the way around the ring]

Finger i points to successor of n + 2^i
[Figure: from N80, the finger covering K112 points to N120, the successor of that point on the ring]

Implication of finger tables
A binary lookup tree rooted at every node
Threaded through other nodes' finger tables
Better than arranging nodes in a single tree:
Every node acts as a root
So there's no root hotspot
No single point of failure
But a lot more state in total

Lookup with finger table

Lookup(key-id)
  look in local finger table for
    highest n: my-id < n < key-id
  if n exists
    call Lookup(key-id) on node n    // next hop
  else
    return my successor              // done
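The finger-table rule above can be sketched end to end. This is an illustrative toy (the ring, helper names, and the simple scan are assumptions, not the paper's implementation), showing how forwarding to the farthest finger that still precedes the key halves the remaining distance:

```python
# Sketch of finger tables on a 2^7 ID ring (the slides' example size):
# finger i of node n points to the successor of n + 2^i. Lookup
# forwards to the farthest finger that still precedes the key.

ID_BITS = 7
RING = 2 ** ID_BITS

def succ_of(ring: list[int], x: int) -> int:
    """First node at or after point x on the circle."""
    x %= RING
    for n in ring:
        if n >= x:
            return n
    return ring[0]  # wrap around past the highest node

def fingers(ring: list[int], n: int) -> list[int]:
    return [succ_of(ring, n + 2 ** i) for i in range(ID_BITS)]

def between(x: int, a: int, b: int) -> bool:
    """True if x lies in the open circular interval (a, b)."""
    if a < b:
        return a < x < b
    return x > a or x < b

def lookup(ring: list[int], start: int, key_id: int) -> tuple[int, int]:
    node, hops = start, 0
    while True:
        succ = succ_of(ring, node + 1)
        if between(key_id, node, succ) or key_id == succ:
            return succ, hops          # found the key's successor
        nxt = succ
        for f in fingers(ring, node):  # fingers get farther as i grows,
            if between(f, node, key_id):
                nxt = f                # so the last match is farthest
        node, hops = nxt, hops + 1

ring = sorted([5, 10, 20, 32, 60, 80, 99, 110])
assert lookup(ring, 32, 19) == (20, 3)  # a few hops, not a full walk
```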

Lookups take O(log N) hops
[Figure: Lookup(K19) starting at N32 is forwarded via finger pointers, roughly halving the remaining distance at each hop, until it reaches K19's successor N20]

An aside: Is log(n) fast or slow?
For a million nodes, it's 20 hops
If each hop takes 50 ms, lookups take a second
If each hop has a 10% chance of failure, it's a couple of timeouts
So in practice log(n) is better than O(n), but not great
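The aside's numbers check out with simple arithmetic (the one-million-node count, 50 ms per hop, and 10% per-hop failure chance are the slide's assumed values):

```python
import math

# Back-of-the-envelope check of the slide's numbers.

n = 1_000_000
hops = math.ceil(math.log2(n))   # log2(10^6) is about 19.9, so 20 hops
latency_ms = hops * 50           # 1000 ms: a full second per lookup
expected_timeouts = hops // 10   # ~2 failed hops per lookup on average

assert hops == 20
assert latency_ms == 1000
assert expected_timeouts == 2
```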

Joining: Linked list insert
[Figure: N36 joins the ring between N25 and N40; N40 holds keys K30 and K38]
1. Lookup(36)

Join (2)
2. N36 sets its own successor pointer

Join (3)
3. Copy keys 26..36 from N40 to N36 (K30 moves to N36)

Notify maintains predecessors
[Figure: N25 notifies its successor N36, and N36 notifies its successor N40, so each node learns its predecessor]

Stabilize message fixes successor
[Figure: N25 sends a stabilize message to N40, which replies "My predecessor is N36"; N25 then adopts N36 as its successor]

Joining: Summary
Predecessor pointer allows link to new node
Update finger pointers in the background
Correct successors produce correct lookups
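The join steps from the last few slides can be modeled on a toy ring. This is an assumed, simplified structure (a `Node` class with only successor/predecessor pointers and a key set; the key-move predicate ignores the lower bound), not Chord's real protocol code:

```python
# Toy model of the join sequence: lookup the joiner's successor, set
# the successor pointer, move keys, then let notify/stabilize repair
# the predecessor and the old node's successor pointer.

class Node:
    def __init__(self, nid):
        self.id, self.succ, self.pred = nid, None, None
        self.keys = set()

def join(new: Node, succ: Node):
    new.succ = succ                      # 2. N36 sets its successor
    moved = {k for k in succ.keys if k <= new.id}
    succ.keys -= moved                   # 3. keys up to 36 move to N36
    new.keys |= moved                    #    (simplified: no lower bound)

def stabilize(node: Node):
    # Ask our successor for its predecessor; adopt it if it sits between.
    p = node.succ.pred
    if p is not None and node.id < p.id < node.succ.id:
        node.succ = p                    # N25's successor becomes N36
    node.succ.pred = node                # notify: successor learns us

n25, n36, n40 = Node(25), Node(36), Node(40)
n25.succ, n40.pred = n40, n25
n40.keys = {30, 38}

join(n36, n40)        # 1. Lookup(36) found N40 as the successor
stabilize(n36)        # N40 learns its new predecessor N36
stabilize(n25)        # N25 learns its new successor N36

assert n36.keys == {30} and n40.keys == {38}
assert n25.succ is n36 and n36.succ is n40
```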

Failures may cause incorrect lookup
[Figure: N10 issues Lookup(K90) on a ring with N80, N85, N102, N113, N120; the nodes between N80 and N120 have failed]
N80 does not know its correct successor, so the lookup is incorrect

Successor lists
Each node stores a list of its r immediate successors
After a failure, it will know the first live successor
Correct successors guarantee correct lookups
Guarantee is with some probability

Choosing successor list length
Assume one half of the nodes fail
P(successor list all dead) = (1/2)^r
i.e., P(this node breaks the Chord ring)
Depends on independent failure
Successor list of size r = O(log N) makes this probability 1/N: low for large N
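The claim is quick to verify numerically, assuming (as the slide does) that each node fails independently with probability 1/2, so a ring break at a node needs all r of its successors dead:

```python
import math

# With r = log2(N) successors, P(all r dead) = (1/2)^r = 1/N.

def p_ring_break(r: int) -> float:
    """Probability this node's whole successor list is dead."""
    return 0.5 ** r

n = 1024                          # example network size
r = int(math.log2(n))             # successor list of size log2(N) = 10
assert p_ring_break(r) == 1 / n   # (1/2)^10 = 1/1024
```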

Lookup with fault tolerance

Lookup(key-id)
  look in local finger table and successor list
    for highest n: my-id < n < key-id
  if n exists
    call Lookup(key-id) on node n    // next hop
    if call failed,
      remove n from finger table and/or successor list
      return Lookup(key-id)
  else
    return my successor              // done

Today
1. Peer-to-Peer Systems
2. Distributed Hash Tables
3. The Chord Lookup Service
Basic design
Integration with the DHash DHT, performance

The DHash DHT
Builds key/value storage on Chord
Replicates blocks for availability
Stores k replicas at the k successors after the block on the Chord ring
Caches blocks for load balancing
Client sends a copy of the block to each server it contacted along the lookup path
Authenticates block contents

DHash data authentication
Two types of DHash blocks:
Content-hash: key = SHA-1(data)
Public-key: data signed by the corresponding private key
Chord File System example

DHash replicates blocks at r successors
[Figure: Block 17 is replicated at the r nodes that succeed it on the ring]
Replicas are easy to find if the successor fails
Hashed node IDs ensure independent failure
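The content-hash scheme is easy to sketch: because the key is SHA-1 of the data, any fetcher can check a block without trusting the server that returned it. A minimal illustration (helper names are assumptions, not DHash's API):

```python
import hashlib

# DHash-style content-hash blocks: key = SHA-1(data), so a modified
# block no longer matches its own key and is rejected.

def make_block(data: bytes) -> tuple[bytes, bytes]:
    """Return the (key, value) pair for a content-hash block."""
    return hashlib.sha1(data).digest(), data

def verify(key: bytes, data: bytes) -> bool:
    """Re-hash fetched data and compare against the requested key."""
    return hashlib.sha1(data).digest() == key

key, data = make_block(b"some file contents")
assert verify(key, data)                 # authentic block
assert not verify(key, b"tampered!")     # modified data is detected
```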

Experimental overview
Quick lookup in large systems
Low variation in lookup costs
Robust despite massive failure
Goal: Experimentally confirm theoretical results

Chord lookup cost is O(log N)
[Figure: average messages per lookup grows logarithmically with the number of nodes; the constant is 1/2]

Failure experiment setup
Start 1,000 Chord servers
Each server's successor list has 20 entries
Wait until they stabilize
Insert 1,000 key/value pairs
Five replicas of each
Stop X% of the servers, immediately make 1,000 lookups

Massive failures have little impact
[Figure: failed lookups (percent) vs. failed nodes (percent); even with many nodes gone, only a small fraction of lookups fail]
(1/2)^6 is 1.6%
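The slide's closing figure is simple arithmetic: six independent coin-flip events all going wrong at once has probability (1/2)^6, which rounds to the quoted 1.6%:

```python
# (1/2)^6 = 0.015625, i.e., about 1.6% as quoted on the slide.

p = 0.5 ** 6
assert p == 0.015625
assert round(100 * p, 1) == 1.6
```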

Today
1. Peer-to-Peer Systems
2. Distributed Hash Tables
3. The Chord Lookup Service
Basic design
Integration with the DHash DHT, performance
4. Concluding thoughts on DHTs, P2P

DHTs: Impact
Original DHTs (CAN, Chord, Kademlia, Pastry, Tapestry) proposed in 2001-02
The next 5-6 years saw a proliferation of DHT-based apps:
Filesystems (e.g., CFS, Ivy, OceanStore, Pond, PAST)
Naming systems (e.g., SFR, Beehive)
DB query processing [PIER, Wisc]
Content distribution systems (e.g., CoralCDN)
Distributed databases (e.g., PIER)

Why don't all services use P2P?
1. High latency and limited bandwidth between peers (vs. intra/inter-datacenter)
2. User computers are less reliable than managed servers
3. Lack of trust in peers' correct behavior
Securing DHT routing is hard, unsolved in practice

DHTs in retrospective
Seemed promising for finding data in large P2P systems
Decentralization seems good for load and fault tolerance
But: the security problems are difficult
But: churn is a problem, particularly if log(n) is big
And: cloud computing removed many of the economic reasons, as did the rise of ad-based business models
DHTs have not had the hoped-for impact

What DHTs got right
Consistent hashing
Elegant way to divide a workload across machines
Very useful in clusters: actively used today in Amazon Dynamo and other systems
Replication for high availability, efficient recovery
Incremental scalability
Self-management: minimal configuration
Unique trait: no single server to shut down/monitor

