Distributed System Synchronization and Logical Clocks

 
Distributed Systems
CS 15-440
 
Synchronization – Part II
Lecture 12, October 15, 2018
 
Mohammad Hammoud
 
Today
 
Last Session:
Synchronization: UTC, tracking time on a computer, physical clock synchronization
 
Today's Session:
Logical Clock Synchronization
Lamport's and Vector Clocks
Introduction to Distributed Mutual Exclusion
 
Announcements:
Midterm exam is on Thursday, Oct 18th (it is open book, open notes)
PS3 is due on Monday, Oct 22 by midnight
Project II is out. It is due on Nov 1st by midnight
Continuing Synchronization
Time Synchronization
Physical Clock Synchronization (or, simply, Clock Synchronization)
Here, the actual times on the computers are synchronized
Logical Clock Synchronization
Computers are synchronized based on the relative ordering of events
Mutual Exclusion
How to coordinate between processes that access the same resource?
Election Algorithms
Here, a group of entities elect one entity as the coordinator for solving a problem
(Roadmap: clock synchronization was covered in the previous lecture; logical clock synchronization and the introduction to mutual exclusion are today's lecture; election algorithms come in the next lecture)
 
Overview
 
Time Synchronization
Clock Synchronization
Logical Clock Synchronization
 
Mutual Exclusion
 
Election Algorithms
Why Logical Clocks?
 
Lamport (in 1978) showed that:
Clock synchronization is not necessary in all scenarios
If two processes do not interact, it is not necessary that their clocks are synchronized

Many times, it is sufficient if processes agree on the order in which the events have occurred in a DS
For example, for a distributed make utility, it is sufficient to know if an input file was modified before or after its object file
 
Logical Clocks
 
Logical clocks are used to define an order of events without
measuring the physical time at which the events occurred
 
We will study two types of logical clocks
1. Lamport's Logical Clock (or simply, Lamport's Clock)
2. Vector Clock
 
Logical Clocks
 
We will study two types of logical clocks
1. Lamport's Clock
2. Vector Clock
Lamport’s Logical Clock
 
Lamport advocated maintaining logical clocks at the processes to keep track of the order of events

To synchronize logical clocks, Lamport defined a relation called happened-before

The expression a → b (read as "a happened before b") means that all entities in a DS agree that event a occurred before event b
The Happened-before Relation
 
The happened-before relation can be observed directly in two situations:

1. If a and b are events in the same process, and a occurs before b, then a → b is true

2. If a is an event of message m being sent by a process, and b is the event of m (i.e., the same message) being received by another process, then a → b is true

The happened-before relation is transitive:
If a → b and b → c, then a → c
 
Time values in Logical Clocks
 
For every event a, assign a logical time value C(a) on which all processes agree (C still corresponds to the process and not to the event, but gets updated when the event happens)

Time values for events have the property that:
If a → b, then C(a) < C(b)
Properties of Logical Clock
 
From the happened-before relation, we can infer that:

If two events a and b occur within the same process and a → b, then C(a) and C(b) are assigned time values such that C(a) < C(b)

If a is the event of sending message m from one process (say, P1), and b is the event of receiving m (i.e., the same message) at another process (say, P2), then:
The time values C1(a) and C2(b) are assigned in a way such that the two processes agree that C1(a) < C2(b)

The clock time C must always go forward (increasing), and never backward (decreasing)
Synchronizing Logical Clocks
 
Three processes P1, P2, and P3 running at different rates

If the processes communicate with each other, there might be discrepancies in agreeing on the event ordering
The ordering of sending and receiving messages m1 and m2 is correct

However, m3 and m4 violate the happened-before relationship

[Figure: timelines for P1, P2, and P3 whose clocks tick 6, 8, and 10 units per interval (P1: 0–60, P2: 0–80, P3: 0–100), with messages m1, m2, m3, and m4 exchanged among them]
Lamport’s Clock Algorithm
 
When a message is being sent:
Each message carries a timestamp according to the sender's logical clock

When a message is received:
If the receiver's logical clock is less than the message sending time in the packet, then adjust the receiver's clock such that:
currentTime = timestamp + 1
[Figure: the same three timelines; m3 carries timestamp 60 and m4 carries timestamp 69; upon receipt, P2's clock jumps forward to 61 and P1's clock jumps forward to 70, preserving the happened-before relation]
Logical Clock Without a Physical Clock
 
Previous examples assumed that there is a physical clock at
each computer (probably running at different rates)
 
How to attach a time value to an event when there is no
global clock?
Implementation of Lamport’s Clock
 
Each process Pi maintains a local counter Ci and adjusts this counter according to the following rules:

1. For any two successive events that take place within Pi, Ci is incremented by 1
2. Each time a message m is sent by process Pi, m is assigned a timestamp ts(m) = Ci
3. Whenever a message m is received by a process Pj, Pj adjusts its local counter Cj to max(Cj, ts(m)) + 1
 
[Example: P0's counter advances C0 = 0 → 1 → 2 and P0 sends m with timestamp 2; upon receipt, P1 sets C1 = max(0, 2) + 1 = 3; P2 remains at C2 = 0]
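Below is a minimal Python sketch of these three rules (not code from the lecture; the class and method names are illustrative). It assumes that sending a message is itself a local event that increments the counter, which matches the example trace above.

class LamportClock:
    def __init__(self):
        self.counter = 0                      # Ci: local logical counter

    def local_event(self):
        self.counter += 1                     # Rule 1: increment on every local event
        return self.counter

    def send(self):
        self.counter += 1                     # sending is treated as a local event (assumption)
        return self.counter                   # Rule 2: timestamp ts(m) = Ci

    def receive(self, ts_m):
        self.counter = max(self.counter, ts_m) + 1   # Rule 3: Cj = max(Cj, ts(m)) + 1
        return self.counter

# Replaying the example: P0 performs an event, sends m with timestamp 2, and P1 receives it
p0, p1 = LamportClock(), LamportClock()
p0.local_event()                              # C0 = 1
ts = p0.send()                                # C0 = 2, ts(m) = 2
print(p1.receive(ts))                         # C1 = max(0, 2) + 1 = 3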
Placement of Logical Clock
In a computer, several processes can use different logical clocks
However, instead of each process maintaining its own logical clock, a single
logical clock can be implemented in the middleware as a time service
[Figure: the application layer sends a message; the middleware layer adjusts the local clock and timestamps the message before handing it to the network layer; on the receiving side, the middleware adjusts the local clock when the message is received and then delivers it to the application]
Limitation of Lamport’s Clock
 
Lamport's clock ensures that if a → b, then C(a) < C(b)

However, it does not say anything about any two arbitrary (concurrent or independent) events a and b by only comparing their time values
For any two arbitrary events a and b, C(a) < C(b) does not mean that a → b

Example:
[Figure: messages m1 (timestamp 6), m2 (timestamp 20), and m3 (timestamp 32) exchanged among P1, P2, and P3]

Compare m1 and m3: P2 can infer that m1 → m3
Compare m1 and m2: P2 cannot infer that m1 → m2 or m2 → m1
Summary of Lamport’s Clock
 
Lamport suggested using logical clocks
Processes synchronize based on the time values of their logical clocks rather than the
absolute time values of their physical clocks
 
Which applications in DS need logical clocks?
Applications with provable ordering of events
Perfect physical clock synchronization is hard to achieve in practice
Applications with rare events
Events are rarely generated, and physical clock synchronization overhead is not justified
 
However, Lamport's Clock cannot guarantee perfect ordering of events by just observing the time values of two arbitrary events
 
Logical Clocks
 
We will study two types of logical clocks
1. Lamport's Clock
2. Vector Clock
Vector Clocks
 
The vector clock was proposed to overcome the limitation of Lamport's clock
The property of inferring that a occurred before b is known as the causality property

A vector clock for a system of N processes is an array of N integers

Every process Pi stores its own vector clock VCi
Lamport's time values for events are stored in VCi
VCi(a) is assigned to an event a

If VCi(a) < VCi(b), then we can infer that a → b (or more precisely, that event a causally preceded event b)
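The slide compares two vectors with "<" without spelling out the comparison rule; a common reading (an assumption here, not stated above) is componentwise comparison: VC(a) < VC(b) when every entry of VC(a) is at most the corresponding entry of VC(b) and at least one entry is strictly smaller. A small Python sketch:

def causally_precedes(vc_a, vc_b):
    # componentwise comparison: every entry <=, and at least one strictly <
    return (all(x <= y for x, y in zip(vc_a, vc_b))
            and any(x < y for x, y in zip(vc_a, vc_b)))

print(causally_precedes((2, 1, 0), (2, 2, 0)))   # True: the first event causally precedes the second
print(causally_precedes((1, 0, 0), (0, 1, 0)))   # False: incomparable, i.e., concurrent events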
Updating Vector Clocks
 
Vector clocks are constructed as follows:

1. VCi[i] is the number of events that have occurred at process Pi so far
VCi[i] is the local logical clock at process Pi
Increment VCi whenever a new event occurs

2. If VCi[j] = k, then Pi knows that k events have occurred at Pj
VCi[j] is Pi's knowledge of the local time at Pj
Pass VCj along with the message
Vector Clock Update Algorithm
 
Whenever there is a new event at Pi, increment VCi[i]

When a process Pi sends a message m to Pj:
Increment VCi[i]
Set m's timestamp ts(m) to the vector VCi

When a message m is received by process Pj:
VCj[k] = max(VCj[k], ts(m)[k]) (for all k)
Increment VCj[j]

[Example: P0's vector advances (0,0,0) → (1,0,0) → (2,0,0) and P0 sends m with timestamp (2,0,0); upon receipt, P1 merges the timestamp into its vector and increments its own entry, giving VC1 = (2,1,0); P2 remains at (0,0,0)]
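A minimal Python sketch of these update rules (illustrative names; not code from the lecture):

class VectorClock:
    def __init__(self, pid, n):
        self.pid = pid                # index i of this process
        self.vc = [0] * n             # VCi, one entry per process

    def local_event(self):
        self.vc[self.pid] += 1        # new event at Pi: increment VCi[i]

    def send(self):
        self.vc[self.pid] += 1        # increment VCi[i] before sending
        return list(self.vc)          # ts(m) = VCi (a copy travels with m)

    def receive(self, ts_m):
        # merge: VCj[k] = max(VCj[k], ts(m)[k]) for all k, then increment VCj[j]
        self.vc = [max(a, b) for a, b in zip(self.vc, ts_m)]
        self.vc[self.pid] += 1

# Replaying the example: P0 performs an event, sends m, and P1 receives it
p0, p1 = VectorClock(0, 3), VectorClock(1, 3)
p0.local_event()                      # VC0 = (1,0,0)
ts = p0.send()                        # VC0 = (2,0,0), ts(m) = (2,0,0)
p1.receive(ts)
print(p1.vc)                          # [2, 1, 0]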
Inferring Events with Vector Clocks
 
Let a process Pi send a message m to Pj with timestamp ts(m); then:

Pj knows the number of events at the sender Pi that causally precede m
(ts(m)[i] – 1) denotes the number of events at Pi

Pj also knows the minimum number of events at other processes Pk that causally precede m
(ts(m)[k] – 1) denotes the minimum number of events at Pk

[Example: P0 advances to (2,0,0) and sends m:(2,0,0) to P1; P1, which had advanced to (0,1,0), updates through (2,2,0) and (2,3,0) and then sends m:(2,3,0) to P2; P2 updates to (2,3,1) upon receipt]
Enforcing Causal Communication
 
Assume that messages are multicast within a group of processes, P0, P1, and P2

To enforce causally-ordered multicasting, the delivery of a message m sent from Pi to Pj can be delayed until the following two conditions are met:
ts(m)[i] = VCj[i] + 1 (Condition I)
ts(m)[k] <= VCj[k] for all k != i (Condition II)

This assumes that Pi only increments VCi[i] upon sending m and adjusts VCi[k] to max{VCi[k], ts(m)[k]} for each k upon receiving a message m

[Example: P0 multicasts m:(1,0,0); P1 delivers it (VC1 becomes (1,0,0)) and then multicasts m:(1,1,0); if m:(1,1,0) reaches P2 before m:(1,0,0), Condition II does not hold (ts(m)[0] = 1 > VC2[0] = 0), so P2 delays its delivery; once m:(1,0,0) arrives and is delivered, the delayed message can also be delivered and VC2 = (1,1,0)]
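A small Python sketch of the two delivery conditions, under the sending/receiving assumptions stated above (function and variable names are illustrative):

def can_deliver(ts_m, sender, vc_receiver):
    # Condition I: m is the next message the receiver expects from the sender
    cond1 = ts_m[sender] == vc_receiver[sender] + 1
    # Condition II: the receiver has already seen everything the sender had seen
    cond2 = all(ts_m[k] <= vc_receiver[k]
                for k in range(len(ts_m)) if k != sender)
    return cond1 and cond2

# Replaying the scenario at P2 (VC2 = (0,0,0)):
vc2 = [0, 0, 0]
print(can_deliver([1, 1, 0], sender=1, vc_receiver=vc2))   # False: Condition II fails, delay m:(1,1,0)
print(can_deliver([1, 0, 0], sender=0, vc_receiver=vc2))   # True: deliver m:(1,0,0) first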
Summary – Logical Clocks
 
Logical clocks are employed when processes have to agree on
relative ordering of events, but not necessarily actual time of events
 
Two types of logical clocks:
Lamport’s Logical Clocks
Supports relative ordering of events across different processes by using the happened-before relationship

Vector Clocks
Supports causal ordering of events
 
Overview
 
Time Synchronization
Clock Synchronization
Logical Clock Synchronization
 
Mutual Exclusion
 
Election Algorithms
Need for Mutual Exclusion
Distributed processes need to coordinate to access shared resources
Example: Writing a file in a Distributed File System
[Figure: three client processes (P1, P2, P3) concurrently issue read and write requests to the file abc.txt stored in a Distributed File System]

In uniprocessor systems, mutual exclusion to a shared resource is provided through shared variables or operating system support
However, such support is insufficient to enable mutual exclusion of distributed entities
In distributed systems, processes coordinate accesses to a shared resource by passing messages to enforce distributed mutual exclusion
Types of Distributed Mutual Exclusion
Mutual exclusion algorithms are classified into two categories
1. Permission-based Approaches
A process that wants to access a shared resource requests permission from one or more coordinators
[Figure: P1 sends a request-to-access message to coordinator C1, which replies with a grant; P1 then accesses the resource]

2. Token-based Approaches
Each shared resource has a token
The token is circulated among all the processes
A process can access the resource if it holds the token
[Figure: processes P1, P2, and P3 pass the token around; whichever process holds the token may access the resource]
 
Overview
 
Time Synchronization
Clock Synchronization
Logical Clock Synchronization
 
Mutual Exclusion
Permission-based Approaches
Token-based Approaches
 
Election Algorithms
Permission-based Approaches
 
There are two types of permission-based mutual exclusion algorithms:
1. Centralized Algorithms
2. Decentralized Algorithms

Let us study an example of each type of algorithm
A Centralized Algorithm
 
One process is elected as a coordinator (C) for a shared resource

The coordinator maintains a queue of access requests

Whenever a process wants to access the resource, it sends a request message to the coordinator

When the coordinator receives the request:
If no other process is currently accessing the resource, it grants permission to the requesting process by sending a "grant" message
If another process is accessing the resource, the coordinator queues the request, and does not reply to it

The process releases the exclusive access after it is done accessing the resource

Afterwards, the coordinator sends the "grant" message to the next process in the queue

[Figure: P0, P1, and P2 send Req messages to coordinator C; C grants access to one process at a time, queues the remaining requests (queue: P2, P1), and sends the next Grant after receiving a Rel (release) message]
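A minimal Python sketch of the coordinator's logic described above, written in a single-threaded event-handler style (names are illustrative, not from the lecture):

from collections import deque

class Coordinator:
    def __init__(self):
        self.holder = None            # process currently holding the resource
        self.queue = deque()          # pending requests, in arrival order

    def on_request(self, pid):
        if self.holder is None:
            self.holder = pid
            return "grant"            # resource free: grant immediately
        self.queue.append(pid)        # resource busy: queue the request, send no reply
        return None

    def on_release(self, pid):
        assert pid == self.holder, "only the holder may release"
        self.holder = None
        if self.queue:                # hand the resource to the next waiter
            self.holder = self.queue.popleft()
            return ("grant", self.holder)
        return None

# Example run mirroring the figure: P0 is granted, P2 is queued, then P0 releases
c = Coordinator()
print(c.on_request("P0"))             # "grant"
print(c.on_request("P2"))             # None (queued)
print(c.on_release("P0"))             # ("grant", "P2")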
Discussion
 
(+) Flexibility: Blocking versus non-blocking requests
The coordinator can block the requesting process until the resource is free
Or, the coordinator can send a "permission-denied" message back to the process
The process can poll the coordinator at a later time
Or, the coordinator queues the request (without blocking the requestor). Once the resource is released, the coordinator will send an explicit "grant" message to the process

(+) Simplicity: The algorithm guarantees mutual exclusion, and is simple to implement

(-) Fault-Tolerance Deficiency
The centralized algorithm is vulnerable to a single point of failure (at the coordinator)
Processes cannot distinguish between a dead coordinator and a blocked request

(-) Performance Bottleneck
In a large-scale system, a single coordinator can be overwhelmed with requests
 
Next Class
 
Mutual Exclusion
How to coordinate between processes that access the same resource?
 
Election Algorithms
Here, a group of entities elect one entity as the coordinator for solving a problem