State Machine Replication in Distributed Systems Using RAFT

 
Consensus 2
Replicated State Machines, RAFT
 
COS 418: 
Distributed Systems
Lecture 8
 
Michael Freedman
 
RAFT slides heavily based on those from Diego Ongaro and John Ousterhout
 
Recall: Primary-Backup

Mechanism: Replicate and separate servers
Goal #1: Provide a highly reliable service
Goal #2: Servers should behave just like a single, more reliable server
State machine replication

Any server is essentially a state machine
Operations transition between states
Need an op to be executed on all replicas, or none at all
i.e., we need distributed all-or-nothing atomicity
If op is deterministic, replicas will end in same state
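The determinism point above can be sketched in a few lines of Python (a hypothetical toy, not from the lecture): two replicas that apply the same deterministic ops in the same order must end in the same state.

```python
# Two replicas apply the same log of deterministic operations
# in the same order; both must end in the same state.

def apply(state, op):
    # Each op is a pure, deterministic function of the current state.
    name, arg = op
    if name == "set":
        return arg
    if name == "add":
        return state + arg
    raise ValueError(f"unknown op: {name}")

def run_replica(log, initial=0):
    state = initial
    for op in log:
        state = apply(state, op)
    return state

log = [("set", 5), ("add", 3), ("add", -2)]
replica_a = run_replica(log)
replica_b = run_replica(log)
assert replica_a == replica_b == 6
```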
Extend PB for high availability

[Diagram: Client C, Primary P, Backup A]

Primary gets ops, orders into log
Replicates log of ops to backup
Backup executes ops in same order
Backup takes over if primary fails

But what if network partition rather than primary failure?
“View” server to determine primary
But what if view server fails?
“View” determined via consensus!
Extend PB for high availability

[Diagram: Client C, Primary P, Backups A, B]

1. C → P: request <op>
2. P → A, B: prepare <op>
3. A, B → P: prepared or error
4. P → C: result exec<op> or failed
5. P → A, B: commit <op>

“Okay” (i.e., op is stable) if written to > ½ backups
 
2PC from primary to backups

[Diagram: Client C, Primary P, Backups A, B]

1. C → P: request <op>
2. P → A, B: prepare <op>
3. A, B → P: prepared or error
4. P → C: result exec<op> or failed
5. P → A, B: commit <op>

“Okay” (i.e., op is stable) if written to > ½ backups
Expect success as replicas are all identical (unlike distributed txn)
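The stability rule in step 5 reduces to a majority test. A minimal sketch (the function name is hypothetical, not from the slides):

```python
# Sketch of the primary's decision: declare an op stable ("Okay")
# once it is written to more than half of the backups.

def op_is_stable(prepared_acks, num_backups):
    # "Okay" (op is stable) if written to > 1/2 of the backups.
    return prepared_acks > num_backups / 2

assert op_is_stable(2, 2)       # both backups A and B prepared
assert op_is_stable(2, 3)       # 2 of 3 is a majority
assert not op_is_stable(1, 2)   # 1 of 2 is not more than half
```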
View changes on failure

[Diagram: Primary P, Backups A, B]

1. Backups monitor primary
2. If a backup thinks primary failed, initiate View Change (leader election)
View changes on failure

[Diagram: Primary P, Backup A]

1. Backups monitor primary
2. If a backup thinks primary failed, initiate View Change (leader election)
3. Intuitive safety argument:
View change requires f+1 agreement
Op committed once written to f+1 nodes
At least one node both saw the write and is in the new view
4. More advanced: Adding or removing nodes (“reconfiguration”)

Requires 2f + 1 nodes to handle f failures
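The 2f + 1 bound can be checked mechanically: any two quorums of f + 1 nodes drawn from 2f + 1 must overlap, so the view-change quorum always contains a node that saw the committed op. A small brute-force check (illustrative, not from the slides):

```python
# Why 2f+1 nodes tolerate f failures: any two groups of f+1 nodes
# out of 2f+1 share at least one node (f+1 + f+1 > 2f+1), so a new
# view always includes someone who saw the committed write.

from itertools import combinations

def quorums_always_intersect(f):
    nodes = range(2 * f + 1)
    quorum_size = f + 1
    return all(set(a) & set(b)
               for a in combinations(nodes, quorum_size)
               for b in combinations(nodes, quorum_size))

assert quorums_always_intersect(1)  # 3 nodes, quorums of 2
assert quorums_always_intersect(2)  # 5 nodes, quorums of 3
```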
 
Basic fault-tolerant Replicated State Machine (RSM) approach

1. Consensus protocol to elect leader
2. 2PC to replicate operations from leader
3. All replicas execute ops once committed
 
Why bother with a leader?

Not necessary, but …
Decomposition: normal operation vs. leader changes
Simplifies normal operation (no conflicts)
More efficient than leader-less approaches
Obvious place to handle non-determinism
 
Raft: A Consensus Algorithm for Replicated Logs

Diego Ongaro and John Ousterhout
Stanford University
Goal: Replicated Log

[Diagram: clients send commands (e.g., shl) to servers; each server runs a Consensus Module feeding a Log and a State Machine]

Replicated log => replicated state machine
All servers execute same commands in same order
Consensus module ensures proper log replication
 
Raft Overview

1. Leader election
2. Normal operation (basic log replication)
3. Safety and consistency after leader changes
4. Neutralizing old leaders
5. Client interactions
6. Reconfiguration
 
Server States

At any given time, each server is either:
Leader: handles all client interactions, log replication
Follower: completely passive
Candidate: used to elect a new leader
Normal operation: 1 leader, N-1 followers

[State diagram: Follower → Candidate → Leader]
 
Liveness Validation

Servers start as followers
Leaders send heartbeats (empty AppendEntries RPCs) to maintain authority
If electionTimeout elapses with no RPCs (100-500ms), follower assumes leader has crashed and starts new election

[State diagram: start → Follower; timeout, start election → Candidate; receive votes from majority of servers → Leader; timeout, new election loops on Candidate]
 
Terms (aka epochs)

Time divided into terms
Election (either failed or resulted in 1 leader)
Normal operation under a single leader
Each server maintains current term value
Key role of terms: identify obsolete information

[Timeline: Terms 1-5; each term begins with an election (possibly a split vote), followed by normal operation]
 
Elections

Start election:
Increment current term, change to candidate state, vote for self
Send RequestVote to all other servers, retry until either:
1. Receive votes from majority of servers:
Become leader
Send AppendEntries heartbeats to all other servers
2. Receive RPC from valid leader:
Return to follower state
3. No-one wins election (election timeout elapses):
Increment term, start new election
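The candidate's loop above can be sketched as follows (all names are hypothetical; a real Raft implementation also tracks votedFor and log state):

```python
# Sketch of the election steps: increment term, vote for self,
# then tally RequestVote replies until a majority is reached.

def start_election(current_term, my_id):
    # Increment current term, become candidate, vote for self.
    return current_term + 1, "candidate", {my_id}

def tally(votes, reply_from, granted, cluster_size):
    # Count a RequestVote reply; become leader on a majority.
    if granted:
        votes.add(reply_from)
    if len(votes) > cluster_size // 2:
        return "leader"
    return "candidate"

term, state, votes = start_election(current_term=4, my_id="s1")
assert (term, state) == (5, "candidate")
state = tally(votes, "s2", True, cluster_size=5)   # 2 of 5: not yet
assert state == "candidate"
state = tally(votes, "s3", True, cluster_size=5)   # 3 of 5: majority
assert state == "leader"
```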
 
Elections

Safety: allow at most one winner per term
Each server votes only once per term (persists on disk)
Two different candidates can’t get majorities in same term

[Diagram: servers voted for candidate A; B can’t also get majority]

Liveness: some candidate must eventually win
Each chooses its election timeout randomly in [T, 2T]
One usually initiates and wins election before others start
Works well if T >> network RTT
 
Log Structure

Log entry = <index, term, command>
Log stored on stable storage (disk); survives crashes
Entry committed if known to be stored on majority of servers
Durable / stable, will eventually be executed by state machines

[Diagram: leader and follower logs over indexes 1-8; each entry shows its term and command (add, cmp, ret, mov, jmp, div, shl, sub); entries on a majority are marked committed]
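A minimal sketch of the entry structure and the commitment test (illustrative layout; the slides only define committed as "stored on a majority of servers"):

```python
# A log entry is <index, term, command>; it is committed once a
# majority of servers store that exact entry at that index.

from collections import namedtuple

Entry = namedtuple("Entry", ["index", "term", "command"])

def is_committed(entry, server_logs):
    stored_on = sum(1 for log in server_logs
                    if len(log) >= entry.index
                    and log[entry.index - 1] == entry)
    return stored_on > len(server_logs) // 2

e1 = Entry(1, 1, "add")
e2 = Entry(2, 1, "cmp")
logs = [[e1, e2], [e1, e2], [e1]]   # three servers
assert is_committed(e1, logs)       # stored on all 3
assert is_committed(e2, logs)       # stored on 2 of 3
assert not is_committed(Entry(3, 2, "ret"), logs)
```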
 
Normal operation

Client sends command to leader
Leader appends command to its log
Leader sends AppendEntries RPCs to followers
Once new entry committed:
Leader passes command to its state machine, sends result to client
Leader piggybacks commitment to followers in later AppendEntries
Followers pass committed commands to their state machines
 
Normal operation

Crashed / slow followers?
Leader retries RPCs until they succeed

Performance is optimal in common case:
One successful RPC to any majority of servers
 
Log Operation: Highly Coherent

If log entries on different servers have same index and term:
They store the same command
Logs are identical in all preceding entries
If given entry is committed, all preceding also committed

[Diagram: server1 and server2 logs agreeing on their common prefix]
 
Log Operation: Consistency Check

AppendEntries has <index, term> of entry preceding new ones
Follower must contain matching entry; otherwise it rejects
Implements an induction step, ensures coherency

[Diagram: AppendEntries succeeds when the follower has the matching preceding entry, fails on a mismatch]
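A sketch of the check, with entries as (term, command) pairs (hypothetical layout; a real AppendEntries RPC also carries leader id, commit index, etc.):

```python
# Follower-side consistency check: the RPC carries the <index, term>
# of the entry preceding the new ones; the follower rejects unless
# its own log matches at that position.

def append_entries(follower_log, prev_index, prev_term, new_entries):
    # prev_index == 0 means the new entries start at the beginning.
    if prev_index > 0:
        if len(follower_log) < prev_index:
            return False                    # missing the preceding entry
        if follower_log[prev_index - 1][0] != prev_term:
            return False                    # term mismatch: reject
    # Accept: overwrite anything after prev_index with the new entries.
    follower_log[prev_index:] = new_entries
    return True

good = [(1, "add"), (1, "cmp"), (1, "ret"), (2, "mov")]
assert append_entries(good, 4, 2, [(3, "jmp")])       # matching entry
assert good[-1] == (3, "jmp")

bad = [(1, "add"), (1, "cmp"), (1, "ret"), (1, "shl")]
assert not append_entries(bad, 4, 2, [(3, "jmp")])    # term mismatch
assert bad[-1] == (1, "shl")                          # log unchanged
```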
 
Leader Changes

New leader’s log is truth, no special steps, start normal operation
Will eventually make followers’ logs identical to leader’s
Old leader may have left entries partially replicated
Multiple crashes can leave many extraneous log entries
 
Safety Requirement

Once a log entry is applied to a state machine, no other state machine may apply a different value for that log entry

Raft safety property: If leader has decided log entry is committed, entry will be present in logs of all future leaders

Why does this guarantee the higher-level goal?
1. Leaders never overwrite entries in their logs
2. Only entries in leader’s log can be committed
3. Entries must be committed before applying to state machine

[Diagram: Committed → Present in future leaders’ logs, via restrictions on commitment and restrictions on leader election]
 
Picking the Best Leader

Elect candidate most likely to contain all committed entries
In RequestVote, candidates incl. index + term of last log entry
Voter V denies vote if its log is “more complete”:
(newer term) or (entry in higher index of same term)
Leader will have “most complete” log among electing majority

[Diagram: logs of s1 and s2 during a leader transition; can’t tell which entries are committed]
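The "more complete" rule reduces to a lexicographic compare on (last term, last index). A sketch with hypothetical names:

```python
# Voter-side check: grant the vote only if the candidate's log is at
# least as complete as the voter's, comparing the <term, index> of
# each log's last entry.

def grant_vote(voter_last, candidate_last):
    # Each argument is (last_term, last_index). Python compares tuples
    # lexicographically: term first, then index -- exactly "newer term,
    # or higher index within the same term".
    return candidate_last >= voter_last

assert grant_vote(voter_last=(2, 5), candidate_last=(3, 1))      # newer term wins
assert grant_vote(voter_last=(2, 5), candidate_last=(2, 7))      # longer log, same term
assert not grant_vote(voter_last=(2, 5), candidate_last=(1, 9))  # stale term: deny
```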
 
Committing Entry from Current Term

Case #1: Leader decides entry in current term is committed
Safe: leader for term 3 must contain entry 4

[Diagram: leader for term 2 with logs of s1-s5 over indexes 1-5; AppendEntries just succeeded on a majority]
 
Committing Entry from Earlier Term

Case #2: Leader trying to finish committing entry from an earlier term
Entry 3 not safely committed:
s5 can be elected as leader for term 5
If elected, it will overwrite entry 3 on s1, s2, and s3

[Diagram: leader for term 4 with logs of s1-s5 over indexes 1-5; s5 holds term-3 entries]
 
New Commitment Rules

For leader to decide entry is committed:
1. Entry stored on a majority
2. ≥ 1 new entry from leader’s term also on majority
Example: Once e4 committed, s5 cannot be elected leader for term 5, and e3 and e4 both safe

Combination of election rules and commitment rules makes Raft safe

[Diagram: leader for term 4 with logs of s1-s5]
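The two rules can be sketched as a predicate over per-server term sequences (illustrative; indexes are 1-based and each log lists the term of the entry at that position):

```python
# Commitment rules sketch: an entry is committed only when it is on a
# majority AND some entry from the leader's current term is also on a
# majority.

def replicated_on_majority(index, term, server_logs):
    count = sum(1 for log in server_logs
                if len(log) >= index and log[index - 1] == term)
    return count > len(server_logs) // 2

def can_commit(index, entry_term, leader_term, server_logs):
    # Rule 1: the entry itself is stored on a majority.
    if not replicated_on_majority(index, entry_term, server_logs):
        return False
    # Rule 2: >= 1 entry from the leader's term is also on a majority.
    longest = max(len(log) for log in server_logs)
    return any(replicated_on_majority(i, leader_term, server_logs)
               for i in range(1, longest + 1))

# Scenario from the slides: entry 3 (term 2) is on a majority, but the
# term-4 leader has not yet replicated a term-4 entry to a majority.
logs = [[1, 1, 2, 4], [1, 1, 2], [1, 1, 2], [1, 1], [1, 1, 3, 3, 3]]
assert not can_commit(3, entry_term=2, leader_term=4, server_logs=logs)
# Once e4 (term 4) reaches a majority, e3 and e4 are both safe:
logs = [[1, 1, 2, 4], [1, 1, 2, 4], [1, 1, 2, 4], [1, 1], [1, 1, 3, 3, 3]]
assert can_commit(3, entry_term=2, leader_term=4, server_logs=logs)
assert can_commit(4, entry_term=4, leader_term=4, server_logs=logs)
```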
Challenge: Log Inconsistencies

Leader changes can result in log inconsistencies

[Diagram: leader for term 8, log indexes 1-12; possible followers (a)-(f) with missing entries and extraneous entries]
Repairing Follower Logs

New leader must make follower logs consistent with its own:
Delete extraneous entries
Fill in missing entries

Leader keeps nextIndex for each follower:
Index of next log entry to send to that follower
Initialized to (1 + leader’s last index)
If AppendEntries consistency check fails, decrement nextIndex, try again

[Diagram: leader for term 7 with followers (a) and (b); nextIndex marks where repair begins]
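The repair loop can be sketched over term sequences (hypothetical helper; real Raft does this through repeated AppendEntries RPCs rather than direct access to the follower's list):

```python
# Log repair sketch: walk nextIndex backward until the consistency
# check passes, then overwrite the follower's tail with the leader's
# entries. Entries here are just terms; index = position + 1.

def repair_follower(leader_log, follower_log):
    next_index = len(leader_log) + 1          # 1 + leader's last index
    while next_index > 1:
        prev = next_index - 1
        if (len(follower_log) >= prev
                and follower_log[prev - 1] == leader_log[prev - 1]):
            break                             # consistency check passes
        next_index -= 1                       # decrement and retry
    # Delete extraneous entries, fill in missing ones.
    follower_log[next_index - 1:] = leader_log[next_index - 1:]
    return follower_log

leader = [1, 1, 1, 4, 4, 5, 5, 6, 6, 6]
# Follower (f): extraneous term-2 and term-3 entries get overwritten.
assert repair_follower(leader, [1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3]) == leader
# Follower (a): missing entries get filled in.
assert repair_follower(leader, [1, 1, 1, 4]) == leader
```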
Repairing Follower Logs

[Diagram: follower (f) before repair holds extraneous term-2 and term-3 entries; after repair its log matches the leader’s up to nextIndex]
 
Neutralizing Old Leaders

Leader temporarily disconnected
→ other servers elect new leader
→ old leader reconnected
→ old leader attempts to commit log entries

Terms used to detect stale leaders (and candidates)
Every RPC contains term of sender
Sender’s term < receiver’s: Receiver rejects RPC (via ACK which sender processes…)
Receiver’s term < sender’s: Receiver reverts to follower, updates term, processes RPC
Election updates terms of majority of servers
Deposed server cannot commit new log entries
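The term comparison performed on every RPC can be sketched as follows (hypothetical names):

```python
# Stale-leader detection: every RPC carries the sender's term, and
# each side updates or rejects based on the comparison.

def handle_rpc(receiver_term, receiver_state, sender_term):
    if sender_term < receiver_term:
        # Stale sender: reject, and the ACK carries the newer term.
        return receiver_term, receiver_state, "reject"
    if sender_term > receiver_term:
        # Receiver is behind: revert to follower, adopt the new term.
        return sender_term, "follower", "process"
    return receiver_term, receiver_state, "process"

# An old leader (term 2) contacts a server that moved to term 3:
assert handle_rpc(3, "follower", 2) == (3, "follower", "reject")
# A deposed leader receiving a term-3 RPC steps down:
assert handle_rpc(2, "leader", 3) == (3, "follower", "process")
```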
 
Client Protocol

Send commands to leader
If leader unknown, contact any server, which redirects client to leader
Leader only responds after command logged, committed, and executed by leader
If request times out (e.g., leader crashes):
Client reissues command to new leader (after possible redirect)
Ensure exactly-once semantics even with leader failures
E.g., leader can execute command then crash before responding
Client should embed unique ID in each command
This client ID included in log entry
Before accepting request, leader checks log for entry with same ID
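The deduplication step can be sketched as follows (hypothetical structure; real implementations usually pair a client ID with a per-client sequence number rather than scanning the log):

```python
# Exactly-once sketch: the client tags each command with a unique ID,
# and the leader checks its log for that ID before accepting the
# request a second time.

import uuid

def submit(log, command, client_request_id):
    # Before accepting, check the log for an entry with the same ID.
    for entry in log:
        if entry["id"] == client_request_id:
            return entry          # duplicate: reuse the logged entry
    entry = {"id": client_request_id, "command": command}
    log.append(entry)
    return entry

log = []
req_id = str(uuid.uuid4())
first = submit(log, "shl", req_id)
retry = submit(log, "shl", req_id)   # client reissued after a timeout
assert first is retry
assert len(log) == 1                 # command logged only once
```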
 
Reconfiguration
 
Configuration Changes

View configuration: { leader, { members }, settings }
Consensus must support changes to configuration:
Replace failed machine
Change degree of replication
Cannot switch directly from one config to another: conflicting majorities could arise

[Diagram: during a direct switch, a majority of C_old and a disjoint majority of C_new could each make decisions]
 
2-Phase Approach via Joint Consensus

Joint consensus in intermediate phase: need majority of both old and new configurations for elections, commitment
Configuration change just a log entry; applied immediately on receipt (committed or not)
Once joint consensus is committed, begin replicating log entry for final configuration

[Timeline: C_old → C_old+new entry committed → C_new entry committed; C_old can make unilateral decisions before the joint entry commits, C_new after the final entry commits]
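The joint-consensus rule can be sketched as a pair of majority tests (hypothetical names, not from the slides):

```python
# Joint consensus sketch: during C_old+new, a decision (election or
# commitment) needs a majority of the old configuration AND a
# majority of the new one.

def majority(acks, members):
    return len(acks & members) > len(members) // 2

def joint_commit(acks, c_old, c_new):
    return majority(acks, c_old) and majority(acks, c_new)

c_old = {"s1", "s2", "s3"}
c_new = {"s3", "s4", "s5"}
assert joint_commit({"s1", "s2", "s3", "s4"}, c_old, c_new)
assert not joint_commit({"s1", "s2"}, c_old, c_new)        # no C_new majority
assert not joint_commit({"s3", "s4", "s5"}, c_old, c_new)  # no C_old majority
```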
2-Phase Approach via Joint Consensus

Any server from either configuration can serve as leader
If leader not in C_new, must step down once C_new committed

[Timeline: C_old → C_old+new → C_new; a leader not in C_new steps down once the C_new entry is committed]
 
Viewstamped Replication:
 A new primary copy method to support
highly-available distributed systems
Oki and Liskov, PODC 1988
 
Raft vs. VR

Strong leader
Log entries flow only from leader to other servers
Select leader from limited set so doesn’t need to “catch up”
Leader election
Randomized timers to initiate elections
Membership changes
New joint consensus approach with overlapping majorities
Cluster can operate normally during configuration changes

Wednesday lecture

Byzantine Fault Tolerance
Replicated State Machines with arbitrary failures