An Introduction to Consensus with Raft: Overview and Importance

This document introduces consensus with the Raft algorithm. It covers the trade-off between availability and consistency in distributed systems, the need to eliminate single points of failure, why consensus is key to building consistent storage systems, and the goal of a replicated log. The slides explain how servers agree on a shared state, recover autonomously from server failures, and replicate logs correctly in a replicated state machine, and they contrast Raft with the Paxos protocol, which is widely considered hard to understand and a difficult foundation for real systems.


Presentation Transcript


  1. An Introduction to Consensus with Raft
     Diego Ongaro and John Ousterhout, Stanford University
     http://raftconsensus.github.io

  3. Distributed Systems: availability or consistency?

  4. Inside a Consistent System
     TODO: eliminate single point of failure
     - An ad hoc algorithm: "This case is rare and typically occurs as a result of a network partition with replication lag." Watch out for @aphyr.
     - OR a consensus algorithm (built-in or library): Paxos, Raft, ...
     - OR a consensus service: ZooKeeper, etcd, consul, ...

  5. What is Consensus?
     - Agreement on shared state (single system image)
     - Recovers from server failures autonomously
       - Minority of servers fail: no problem
       - Majority fail: lose availability, retain consistency
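
The majority rule on this slide is what lets the system stay available through minority failures while staying consistent: any two majorities of the same cluster overlap in at least one server. As a minimal illustration (not part of the slides), the Go sketch below computes the quorum size for a few cluster sizes.

```go
// Minimal quorum arithmetic: a cluster of 2f+1 servers tolerates f failures,
// because the remaining f+1 servers still form a majority, and any two
// majorities always intersect.
package main

import "fmt"

// quorum is the minimum number of servers that must agree.
func quorum(clusterSize int) int {
	return clusterSize/2 + 1
}

func main() {
	for _, n := range []int{3, 5, 7} {
		fmt.Printf("cluster of %d: quorum = %d, tolerates %d failures\n",
			n, quorum(n), n-quorum(n))
	}
}
```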

  6. Why Is Consensus Needed?
     - Key to building consistent storage systems
     - Top-level system configuration:
       - Which server is my SQL master?
       - What shards exist in my storage system?
       - Which servers store shard X?
     - Sometimes used to replicate entire database state (e.g., Megastore, Spanner)

  7. Goal: Replicated Log
     [figure: clients send commands to three servers; each server contains a consensus module, a log of commands (x 3, y 2, x 1, z 6), and a state machine]
     - Replicated log ⇒ all servers execute the same commands in the same order ⇒ replicated state machine
     - Consensus module ensures proper log replication
     - System makes progress as long as any majority of servers are up
     - Failure model: fail-stop (not Byzantine), delayed/lost messages
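
To make the replicated-state-machine idea concrete, here is a hypothetical Go sketch (the Entry and StateMachine types are illustrative, not from any real Raft library): every server applies the same committed log, in the same order, to a trivial key/value state machine, so all servers end in the same state.

```go
package main

import "fmt"

// Entry is one command in the replicated log, e.g. "set x to 3".
type Entry struct {
	Key   string
	Value int
}

// StateMachine is a trivial key/value store driven by the log.
type StateMachine struct {
	data map[string]int
}

func NewStateMachine() *StateMachine {
	return &StateMachine{data: make(map[string]int)}
}

// Apply executes one committed command deterministically.
func (sm *StateMachine) Apply(e Entry) {
	sm.data[e.Key] = e.Value
}

func main() {
	// The same committed log on every server yields the same final state.
	log := []Entry{{"x", 3}, {"y", 2}, {"x", 1}, {"z", 6}}

	servers := []*StateMachine{NewStateMachine(), NewStateMachine(), NewStateMachine()}
	for _, sm := range servers {
		for _, e := range log {
			sm.Apply(e)
		}
	}
	fmt.Println(servers[0].data, servers[1].data, servers[2].data)
}
```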

  8. Paxos Protocol
     - Leslie Lamport, 1989
     - Nearly synonymous with consensus
     - Hard to understand: "The dirty little secret of the NSDI community is that at most five people really, truly understand every part of Paxos ;-)." (anonymous NSDI reviewer)
     - Bad foundation for building systems: "There are significant gaps between the description of the Paxos algorithm and the needs of a real-world system ... the final system will be based on an unproven protocol." (Chubby authors)

  9. Raft's Design for Understandability
     - We wanted the best algorithm for building real systems:
       - Must be correct, complete, and perform well
       - Must also be understandable
     - "What would be easier to understand or explain?"
     - Fundamentally different decomposition than Paxos
     - Less complexity in state space
     - Less mechanism

  10. User Study
      [figures: quiz grades and survey results]

  11. Raft Overview
      1. Leader election
         - Select one of the servers to act as leader
         - Detect crashes, choose new leader
      2. Log replication (normal operation)
         - Leader takes commands from clients, appends them to its log
         - Leader replicates its log to other servers (overwriting inconsistencies); see the sketch below
      3. Safety
         - Only elect leaders with all committed entries in their logs
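
The "overwriting inconsistencies" part of step 2 can be sketched as follows (a simplified illustration, not the full protocol: a real AppendEntries RPC also carries prevLogIndex/prevLogTerm for a consistency check and the leader's commit index, which are omitted here). A follower whose log diverges from the leader's simply discards the conflicting suffix and adopts the leader's entries.

```go
package main

import "fmt"

// Entry is one log entry: the term in which it was created plus the command.
type Entry struct {
	Term    int
	Command string
}

type Follower struct{ log []Entry }

// AppendEntries overwrites the follower's log from index `from` onward with
// the leader's entries, discarding any conflicting suffix.
func (f *Follower) AppendEntries(from int, entries []Entry) {
	f.log = append(f.log[:from], entries...)
}

func main() {
	leaderLog := []Entry{{1, "x=3"}, {1, "y=2"}, {2, "x=1"}}
	f := &Follower{log: []Entry{{1, "x=3"}, {1, "z=9"}}} // diverged at index 1
	f.AppendEntries(1, leaderLog[1:])
	fmt.Println(f.log) // now matches the leader's log
}
```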

  12. Server States
      At any given time, each server is either:
      - Follower: completely passive replica (issues no RPCs, responds to incoming RPCs)
      - Candidate: used to elect a new leader
      - Leader: handles all client interactions, log replication
      At most one viable leader at a time
      [figure: state diagram; servers start as Followers; a timeout starts an election and moves a server to Candidate; votes from a majority of servers make it Leader]
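
A hypothetical Go sketch of the three states and the transitions in the diagram; in a real server these transitions would be driven by timers and incoming RPCs rather than direct method calls.

```go
package main

import "fmt"

type State int

const (
	Follower State = iota
	Candidate
	Leader
)

func (s State) String() string {
	return [...]string{"Follower", "Candidate", "Leader"}[s]
}

type Server struct{ state State }

// Transitions taken from the slide's state diagram.
func (s *Server) electionTimeout() { s.state = Candidate } // time out, start election
func (s *Server) wonElection()     { s.state = Leader }    // votes from a majority
func (s *Server) sawValidLeader()  { s.state = Follower }  // discover a current leader

func main() {
	s := &Server{state: Follower} // servers start as followers
	s.electionTimeout()
	fmt.Println(s.state) // Candidate
	s.wonElection()
	fmt.Println(s.state) // Leader
}
```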

  13. Terms
      [figure: timeline divided into Terms 1-5, showing elections, normal operation, and a split vote]
      - Time divided into terms:
        - Election
        - Normal operation under a single leader
      - At most one leader per term
      - Each server maintains current term value
      - Key role of terms: identify obsolete information
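
The "identify obsolete information" role of terms can be sketched like this (the handleTerm helper is illustrative, not from a real implementation): every RPC carries the sender's term, requests with a lower term are rejected, and seeing a higher term makes a server update its own term and step down.

```go
package main

import "fmt"

type Server struct {
	currentTerm int
	isLeader    bool
}

// handleTerm compares an incoming RPC's term with our own and reports
// whether the request should be processed at all.
func (s *Server) handleTerm(rpcTerm int) bool {
	if rpcTerm < s.currentTerm {
		return false // sender's information is obsolete: reject
	}
	if rpcTerm > s.currentTerm {
		s.currentTerm = rpcTerm // we were behind: adopt the newer term
		s.isLeader = false      // and revert to follower
	}
	return true
}

func main() {
	s := &Server{currentTerm: 3, isLeader: true}
	fmt.Println(s.handleTerm(2), s.currentTerm, s.isLeader) // false 3 true
	fmt.Println(s.handleTerm(5), s.currentTerm, s.isLeader) // true 5 false
}
```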

  14. Leader Election
      - Leaders send heartbeats to maintain authority
      - Upon election timeout, start new election:
        - Increment current term
        - Change to Candidate state
        - Vote for self
        - Send RequestVote RPCs to all other servers, wait until either:
          1. Receive votes from majority of servers: become leader, send heartbeats to all other servers
          2. Receive RPC from valid leader: return to follower state
          3. No one wins election (election timeout elapses): increment term, start new election
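
A single-threaded, hypothetical sketch of the candidate side of this procedure: increment the term, vote for yourself, request votes from every other server, and become leader only with a majority. The Peer interface and alwaysYes stub are stand-ins; a real implementation sends RequestVote RPCs concurrently and also handles outcomes 2 and 3 (a valid leader appearing, or the election timeout elapsing).

```go
package main

import "fmt"

// Peer is a stand-in for the RPC client used to reach another server.
type Peer interface {
	// RequestVote asks the peer to vote for candidateID in the given term.
	RequestVote(term, candidateID int) bool
}

type Node struct {
	id          int
	currentTerm int
	peers       []Peer
}

// startElection runs one election attempt and reports whether we won.
func (n *Node) startElection() bool {
	n.currentTerm++ // increment current term (and become a candidate)
	votes := 1      // vote for self
	for _, p := range n.peers {
		if p.RequestVote(n.currentTerm, n.id) {
			votes++
		}
	}
	// Leader only with votes from a majority of the full cluster (peers + self).
	return votes > (len(n.peers)+1)/2
}

// alwaysYes is a toy peer that grants every vote, just for the example.
type alwaysYes struct{}

func (alwaysYes) RequestVote(term, candidateID int) bool { return true }

func main() {
	n := &Node{id: 1, peers: []Peer{alwaysYes{}, alwaysYes{}, alwaysYes{}, alwaysYes{}}}
	fmt.Println("won election:", n.startElection(), "in term", n.currentTerm)
}
```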

  15. Leader Election Visualization
      - The Secret Lives of Data: http://thesecretlivesofdata.com
      - Visualizes distributed algorithms, starting with Raft
      - Project by Ben Johnson (author of go-raft)

  16. Randomized Timeouts
      If election timeouts are chosen randomly, one server usually times out and wins the election before the others wake up.
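
A minimal sketch of that idea: each server draws its own election timeout from a range (the Raft paper suggests values on the order of 150-300 ms), so one server almost always times out first, wins the election, and starts sending heartbeats before the others become candidates.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// electionTimeout returns a fresh, randomly chosen timeout for one server.
func electionTimeout() time.Duration {
	const min, max = 150, 300 // milliseconds
	return time.Duration(min+rand.Intn(max-min)) * time.Millisecond
}

func main() {
	// Five servers each draw their own timeout; the server with the
	// smallest one will almost always start (and win) the next election.
	for i := 1; i <= 5; i++ {
		fmt.Printf("server %d: %v\n", i, electionTimeout())
	}
}
```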

  17. Raft Paper
      - Log replication
      - Client interaction
      - Cluster membership changes
      - Log compaction
      - To appear at the 2014 USENIX Annual Technical Conference, June 19-20 in Philadelphia
      - Draft on the Raft website

  18. Raft Implementations
      - kanaka/raft.js (JS): Joel Martin
      - go-raft (Go): Ben Johnson (Sky) and Xiang Li (CoreOS)
      - hashicorp/raft (Go): Armon Dadgar (HashiCorp)
      - LogCabin (C++): Diego Ongaro (Stanford)
      - ckite (Scala): Pablo Medina
      - peterbourgon/raft (Go): Peter Bourgon
      - rafter (Erlang): Andrew Stone (Basho)
      - barge (Java): Dave Rusek
      - py-raft (Python): Toby Burress
      - ocaml-raft (OCaml): Heidi Howard (Cambridge)

  19. Best Logo: go-raft, by Brandon Philips (CoreOS)

  20. Summary
      - Consensus is key to building consistent systems
      - Design for understandability
      - Raft separates leader election from log replication
      - Leader election uses voting and randomized timeouts
      - More at http://raftconsensus.github.io: paper draft, other talks, 10 to 50+ implementations, raft-dev mailing list
      Diego Ongaro, @ongardie
