Process Coordination and Mutual Exclusion in Concurrency

Unit No 03: Process Coordination
Synchronization
Principles of Concurrency
1. Concurrency in multithreading: interaction between multiple threads running within one process.
2. Concurrency in multiprogramming: interaction between multiple processes running on one CPU.
3. Concurrency in multiprocessing: interaction between multiple CPUs running multiple processes.
4. Concurrency in multi-computer systems: interaction between multiple computers running distributed processes or threads.
Requirements for Mutual Exclusion
- Only one process at a time is allowed in the critical section for a resource.
- A process that halts in its non-critical section must do so without interfering with other processes.
- No deadlock or starvation.
- A process must not be delayed access to a critical section when there is no other process using it.
- No assumptions are made about relative process speeds or the number of processes.
- A process remains inside its critical section for a finite time only.
Mutual Exclusion: Hardware Support
Interrupt Disabling – In general, a process runs until it invokes an operating system service or until it is interrupted.
- Uniprocessor: disabling interrupts guarantees mutual exclusion.
- The processor is then limited in its ability to interleave programs.
- Multiprocessor: disabling interrupts on one processor will not guarantee mutual exclusion.
Special Machine Instructions – Performed in a single instruction cycle; access to the memory location is blocked for any other instruction.
Cont…
Test and Set Instruction (executed atomically by the hardware):
boolean testset(int i)
{
    if (i == 0) {
        i = 1;              /* the word was 0 (free): set it and report success */
        return true;
    } else {
        return false;       /* the word was already 1 (taken) */
    }
}
Exchange Instruction (atomically swaps the contents of a register and a memory word):
void exchange(int register, int memory)
{
    int temp;
    temp = memory;
    memory = register;
    register = temp;
}
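Using a test-and-set primitive, mutual exclusion is typically built as a busy-waiting (spin) lock. The sketch below is illustrative only; it uses the C11 atomic_flag operations, which provide the same atomic test-and-set behaviour as the machine instruction described above:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;       /* clear = free, set = taken */

void acquire(void)
{
    /* atomically set the flag and return its previous value */
    while (atomic_flag_test_and_set(&lock))
        ;                                  /* busy-wait (spin) while the lock is taken */
}

void release(void)
{
    atomic_flag_clear(&lock);              /* clear the flag so another process may enter */
}

A process brackets its critical section with acquire() and release(); only the process that saw the flag clear enters, and all others spin until release() is called.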
Critical Section Problem and Race Condition
A race condition occurs when the result of a computation depends on the order in which processes access shared variables. The critical section is the code segment in which shared variables are accessed. An atomic action is required in a critical section, i.e. only one process can execute in its critical section at a time; all the other processes have to wait to execute their critical sections.
A process with a critical section is typically structured into an entry section, the critical section, an exit section and a remainder section, described below.
Cont…
Entry Section: the block of code executed in preparation for entering the critical section.
Exit Section: the code executed upon leaving the critical section.
Remainder Section: the rest of the code.
Solution to the Critical Section Problem
The critical section problem needs a solution to synchronize the different processes. A solution to the critical section problem must satisfy the following conditions:
Mutual Exclusion – only one process can be inside the critical section at any time. If any other process requires the critical section, it must wait until the section is free.
Progress – if a process is not using the critical section, it should not stop any other process from accessing it. In other words, any process can enter a critical section if it is free.
Bounded Waiting – each process must have a limited waiting time; it should not wait endlessly to access the critical section.
Mutual Exclusion: Mutex
A mutex is a mutual exclusion object that synchronizes access to a resource. It is created with a unique name at the start of a program. The mutex locking mechanism ensures that only one thread can acquire the mutex and enter the critical section; that thread releases the mutex only when it exits the critical section.
wait(mutex);
.....
Critical Section
.....
signal(mutex);
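As an illustration, the same pattern with POSIX threads might look roughly like this (a minimal sketch; the shared balance is just an example resource, not part of the original slide):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long balance = 0;                       /* illustrative shared resource */

void *deposit(void *arg)
{
    pthread_mutex_lock(&lock);          /* wait(mutex): only one thread proceeds */
    balance += 100;                     /* critical section */
    pthread_mutex_unlock(&lock);        /* signal(mutex): release on exit */
    return NULL;
}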
Mutual Exclusion: Semaphores
A semaphore is simply a variable that is non-negative and shared between threads. It is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread.
Cont…
A semaphore uses two atomic operations:
1. Wait: the wait operation decrements the value of its argument S if it is positive. While S is zero, no decrement is performed and the caller waits (the definition below busy-waits).
wait(S)
{
    while (S <= 0)
        ;          /* busy-wait until S becomes positive */
    S--;
}
2. Signal (for process synchronization): the signal operation increments the value of its argument S.
signal(S)
{
    S++;
}
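For example, with POSIX semaphores the wait/signal pair maps onto sem_wait and sem_post. A minimal sketch (the shared counter is illustrative) in which a semaphore initialised to 1 protects a critical section shared by two threads:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                          /* semaphore shared by both threads */
int shared_counter = 0;           /* the protected resource */

void *worker(void *arg)
{
    sem_wait(&s);                 /* wait(): decrement, block while the value is 0 */
    shared_counter++;             /* critical section */
    sem_post(&s);                 /* signal(): increment, wake a waiting thread */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);           /* initial value 1: behaves as a binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    sem_destroy(&s);
    return 0;
}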
Cont…
Types of Semaphore
Semaphores are distinguished by the operating system into two categories: counting semaphores and binary semaphores.
1. Counting Semaphore: the semaphore value S is initialized to the number of resources present in the system. Whenever a process wants to access a resource, it performs the wait() operation on the semaphore and decrements the semaphore value by one. When it releases the resource, it performs the signal() operation on the semaphore and increments the semaphore value by one.
When the semaphore count goes to 0, all resources are occupied. A process that needs a resource while the count is 0 executes the wait() operation and gets blocked until the semaphore value becomes greater than 0.
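A counting semaphore initialised to the number of identical resources can be sketched as follows (the pool size and names are illustrative, not from the original slide):

#include <semaphore.h>

#define NUM_SLOTS 4               /* e.g. 4 identical resource instances */

sem_t slots;                      /* counting semaphore: value = free resources */

void init_pool(void)
{
    sem_init(&slots, 0, NUM_SLOTS);
}

void use_resource(void)
{
    sem_wait(&slots);             /* blocks when all NUM_SLOTS resources are in use */
    /* ... use one resource instance ... */
    sem_post(&slots);             /* release: the count goes back up by one */
}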
Cont…
2. Binary Semaphore: the value of the semaphore ranges between 0 and 1. It is similar to a mutex lock, but a mutex is a locking mechanism whereas a semaphore is a signaling mechanism. With a binary semaphore, if a process wants to access the resource, it performs the wait() operation on the semaphore and decrements its value from 1 to 0; when it releases the resource, it performs a signal() operation and increments the value back to 1. If the value of the semaphore is 0 and a process wants to access the resource, it performs the wait() operation and blocks itself until the process currently using the resource releases it.
Programming Language Support (Monitor)
A monitor is a synchronization construct that provides threads with mutual exclusion and the ability to wait() for a given condition to become true. It is an abstract data type: it has shared variables and a collection of procedures that operate on them. A process may not access the shared data variables directly; it must use the monitor's procedures, which control how processes access the shared data.
At any particular time, only one process may be active in a monitor. Other processes that require access to the shared variables must queue and are only granted access after the previous process releases the shared variables.
Cont…
Components of Monitor
There are four main components of a monitor:
- Initialization
- Private data
- Monitor procedures
- Monitor entry queue
Initialization: comprises the code that is executed exactly once, when the monitor is created.
Private Data: comprises all the private data, including private procedures, which can only be used within the monitor; private data is not visible outside the monitor.
Monitor Procedures: the procedures that can be called from outside the monitor.
Monitor Entry Queue: contains all the threads that have called a monitor procedure and are waiting to enter the monitor.
Cont.…
Syntax of monitor
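The original slide showed the monitor syntax as a figure. In classical notation a monitor is written roughly as: monitor name { shared variable declarations; condition c; procedure P1(...) { ... } ... initialization code { ... } }. C has no built-in monitor construct, but the same behaviour is commonly approximated with one mutex plus condition variables; a minimal, illustrative sketch of a bounded-counter monitor:

#include <pthread.h>

/* "monitor" state: the shared variable plus its lock and condition variable */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_zero = PTHREAD_COND_INITIALIZER;
static int count = 0;

void increment(void)                       /* monitor procedure */
{
    pthread_mutex_lock(&m);                /* only one thread active inside at a time */
    count++;
    pthread_cond_signal(&not_zero);        /* wake a thread waiting for count > 0 */
    pthread_mutex_unlock(&m);
}

void decrement(void)                       /* monitor procedure that waits on a condition */
{
    pthread_mutex_lock(&m);
    while (count == 0)
        pthread_cond_wait(&not_zero, &m);  /* releases m while waiting, reacquires on wake-up */
    count--;
    pthread_mutex_unlock(&m);
}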
Cont.…
Characteristics of Monitors
1. Inside a monitor, only one process can execute at a time.
2. Monitors are a group of procedures and condition variables merged together in a special type of module.
3. A process running outside the monitor cannot access the monitor's internal variables, but it can call the monitor's procedures.
4. Monitors offer a high level of synchronization.
5. Monitors were devised to simplify the complexity of synchronization problems.
6. Only one process can be active at a time inside the monitor.
Classical Synchronization Problems
- Reader/Writer Problem
- Producer and Consumer Problem
- Inter-process Communication (Pipes, Shared Memory: System V)
Reader/Writer Problem
The readers-writers problem relates to an object such as a file that is
shared between multiple processes. Some of these processes are readers
i.e. they only want to read the data from the object and some of the
processes are writers i.e. they want to write into the object.
The readers-writers problem is used to manage synchronization so that
there are no problems with the object data. For example - If two readers
access the object at the same time there is no problem. However if two
writers or a reader and writer access the object at the same time, there
may be problems.
To solve this situation, a writer should get exclusive access to an object i.e.
when a writer is accessing the object, no reader or writer may access it.
However, multiple readers can access the object at the same time.
This can be implemented using semaphores. The codes for the reader and
writer process in the reader-writer problem are given as follows −
Reader Process
The code that defines the reader process is given below −
wait(mutex);            /* protect the reader count rc */
rc++;
if (rc == 1)
    wait(wrt);          /* the first reader locks out writers */
signal(mutex);
/* ... READ THE OBJECT ... */
wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);        /* the last reader lets writers in again */
signal(mutex);
Writer Process
The code that defines the writer process is given below:
wait(wrt);              /* exclusive access: no readers or other writers */
/* ... WRITE INTO THE OBJECT ... */
signal(wrt);
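These fragments assume a shared reader count rc and two semaphores, mutex and wrt, both initialised to 1. A minimal set of supporting declarations using POSIX semaphores (illustrative, not from the original slide) could be:

#include <semaphore.h>

int rc = 0;          /* number of readers currently reading */
sem_t mutex;         /* protects rc */
sem_t wrt;           /* exclusive access, held by a writer or by the group of readers */

void init_rw(void)   /* call once before starting the reader and writer threads */
{
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
}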
Producer and Consumer Problem
The Producer-Consumer problem is a classical multi-process synchronization problem: we are trying to achieve synchronization between more than one process. There is one Producer in the producer-consumer problem, producing items, and one Consumer consuming the items produced by the Producer. Both the producer and the consumer share the same fixed-size memory buffer.
The task of the Producer is to produce an item, put it into the memory buffer, and start producing again; the task of the Consumer is to consume items from the memory buffer.
Cont…
The producer should produce data only when the buffer is not full; if the buffer is full, the producer is not allowed to store any data into the memory buffer.
Data can be consumed by the consumer only if the memory buffer is not empty; if the buffer is empty, the consumer is not allowed to take any data from the memory buffer.
The producer and the consumer must not access the memory buffer at the same time. A sketch of both routines is given below.
Cont…
Producer and Consumer Code
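The original slides showed the producer and consumer code as images. A common sketch of both loops, using two counting semaphores and a mutex over a fixed-size buffer (buffer size and all names are illustrative):

#include <pthread.h>
#include <semaphore.h>

#define N 8                          /* illustrative buffer size */

int buffer[N];
int in = 0, out = 0;                 /* next free slot / next filled slot */

sem_t empty_slots;                   /* counts free slots, initialised to N */
sem_t full_slots;                    /* counts filled slots, initialised to 0 */
pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg)
{
    for (int item = 0; ; item++) {
        sem_wait(&empty_slots);          /* block while the buffer is full */
        pthread_mutex_lock(&buf_lock);
        buffer[in] = item;               /* put the item into the buffer */
        in = (in + 1) % N;
        pthread_mutex_unlock(&buf_lock);
        sem_post(&full_slots);           /* one more item is available */
    }
    return NULL;
}

void *consumer(void *arg)
{
    for (;;) {
        sem_wait(&full_slots);           /* block while the buffer is empty */
        pthread_mutex_lock(&buf_lock);
        int item = buffer[out];          /* take the item out of the buffer */
        out = (out + 1) % N;
        pthread_mutex_unlock(&buf_lock);
        sem_post(&empty_slots);          /* one more slot is free */
        (void)item;                      /* consume the item here */
    }
    return NULL;
}

/* setup, once, before creating the threads:
   sem_init(&empty_slots, 0, N);  sem_init(&full_slots, 0, 0); */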
  
 
Inter-process Communication (Pipes, Shared Memory: System V)
A pipe is a technique used for inter-process communication. A pipe is a mechanism by which the output of one process is directed into the input of another process; it thus provides a one-way flow of data between two related processes.
Although a pipe can be accessed like an ordinary file, the system actually manages it as a FIFO queue. A pipe is created using the pipe system call. A pipe has an input end and an output end: one can write into the pipe at the input end and read from the output end. A pipe descriptor is an array that stores two pointers, one for its input end and the other for its output end.
Suppose two processes, Process A and Process B, need to communicate. In such a case it is important that the process which writes closes its read end of the pipe, and the process which reads closes its write end. Essentially, for communication from Process A to Process B the following should happen: Process A keeps its write end open and closes the read end of the pipe; Process B keeps its read end open and closes its write end. When a pipe is created, it is given a fixed size in bytes.
Cont…
When a process attempts to write into the pipe, the write request is executed immediately if the pipe is not full; if the pipe is full, the process is blocked until the state of the pipe changes. Similarly, a reading process is blocked if it attempts to read more bytes than are currently in the pipe; otherwise the read is executed. Only one process can access a pipe at a time.
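A minimal sketch of a pipe between a parent and a child process on a POSIX system (the message text is illustrative):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1)               /* create the pipe */
        return 1;

    if (fork() == 0) {                /* child: the reader */
        close(fd[1]);                 /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        _exit(0);
    }

    close(fd[0]);                     /* parent: the writer; close the unused read end */
    write(fd[1], "hello via pipe", 14);
    close(fd[1]);                     /* closing the write end signals EOF to the reader */
    wait(NULL);
    return 0;
}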
Shared Memory
In a shared memory system, cooperating processes communicate by exchanging data with each other. To do this, the cooperating processes establish a shared region in memory and share data by reading and writing in that shared segment.
Cont…
Consider two cooperating processes P1 and P2. Both processes have their own address spaces. Now assume P1 wants to share some data with P2. P1 and P2 will have to perform the following steps −
Step 1 − Process P1 has some data to share with process P2. P1 takes the initiative, establishes a shared memory region in its own address space, and stores the data to be shared in that region.
Step 2 − P2 requires the information stored in the shared segment of P1, so P2 attaches itself to the shared address space of P1 and can then read the data from there.
Step 3 − The two processes can now exchange information by reading and writing data in the shared segment.
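A minimal sketch of the System V shared-memory calls referred to in the section title, with one process creating and writing a segment (the key and size are illustrative; a cooperating process would call shmget with the same key and then shmat to read):

#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    key_t key = 0x1234;                               /* illustrative key agreed by both processes */
    int shmid = shmget(key, 4096, IPC_CREAT | 0666);  /* create (or locate) a 4 KB segment */
    if (shmid == -1)
        return 1;

    char *mem = shmat(shmid, NULL, 0);                /* attach the segment to this address space */
    if (mem == (void *)-1)
        return 1;

    strcpy(mem, "data shared via System V shm");      /* write into the shared region */

    shmdt(mem);                                       /* detach; the segment itself persists */
    /* the reader would later call shmctl(shmid, IPC_RMID, NULL) to remove the segment */
    return 0;
}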
Deadlock Characterization
1. Mutual Exclusion: a resource may be acquired exclusively by only one process at a time.
2. Hold and Wait: a process can hold multiple resources and still request more resources that other processes are holding.
Cont..
3. No Preemption: a resource cannot be taken from a process by force; a process can only release a resource voluntarily. For example, Process 2 cannot preempt Resource 1 from Process 1; Resource 1 is only released when Process 1 relinquishes it voluntarily after its execution is complete.
4. Circular Wait: a process is waiting for the resource held by a second process, which is waiting for the resource held by a third process, and so on, until the last process is waiting for a resource held by the first process. This forms a circular chain. For example: Process 1 is allocated Resource 2 and is requesting Resource 1, while Process 2 is allocated Resource 1 and is requesting Resource 2; this forms a circular wait loop.
Methods for Handling Deadlock
1. Deadlock prevention: the strategy of deadlock prevention is to design the system in such a way that the possibility of deadlock is excluded. Indirect methods prevent the occurrence of one of the necessary conditions of deadlock, i.e. mutual exclusion, no preemption, and hold and wait.
2. Deadlock avoidance: in deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every step it performs. The processes continue as long as the system remains in a safe state; once the system would move to an unsafe state, the OS has to backtrack one step.
3. Deadlock detection: deadlock detection employs an algorithm that tracks circular waiting and kills one or more processes so that the deadlock is removed. The system state is examined periodically to determine whether a set of processes is deadlocked; a deadlock is resolved by aborting and restarting a process, relinquishing all the resources that the process held.
4. Deadlock recovery: this approach lets processes fall into deadlock and then periodically checks whether deadlock has occurred in the system. If it has, it applies recovery methods to the system to get rid of the deadlock.
Deadlock Avoidance: Banker's Algorithm
Requires that the system have some additional a priori information available. The simplest and most useful model requires that each process declare the maximum number of resources of each type that it may need.
The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that there can never be a circular-wait condition. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.
Data Structures for Banker's Algorithm
Let n = number of processes and m = number of resource types.
Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need[i,j] = Max[i,j] - Allocation[i,j].
Safety Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   Work = Available
   Finish[i] = false for i = 0, 1, ..., n-1.
2. Find an index i such that both:
   (a) Finish[i] = false
   (b) Needi <= Work
   If no such i exists, go to step 4.
3. Work = Work + Allocationi
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
   Available = Available - Requesti;
   Allocationi = Allocationi + Requesti;
   Needi = Needi - Requesti;
If the resulting state is safe, the resources are allocated to Pi.
If it is unsafe, Pi must wait, and the old resource-allocation state is restored.
Example
Consider a system that contains five processes P1, P2, P3, P4, P5 and three resource types A, B and C. Resource type A has 10 instances, B has 5 instances and C has 7 instances. The snapshot of the system is:

Process   Allocation (A, B, C)   Max (A, B, C)
P1        0, 1, 0                7, 5, 3
P2        2, 0, 0                3, 2, 2
P3        3, 0, 2                9, 0, 2
P4        2, 1, 1                2, 2, 2
P5        0, 0, 2                4, 3, 3

Available = (3, 3, 2)
Cont…
Answer the following questions using the banker's algorithm:
1. What is the content of the Need matrix?
2. Determine whether the system is in a safe state or not.
3. If a resource request of (1, 0, 0) arrives for process P1, can the system accept this request immediately?
Cont…
Ans. 1: The content of the Need matrix is as follows:
Need[i] = Max[i] - Allocation[i]
Need for P1: (7, 5, 3) - (0, 1, 0) = (7, 4, 3)
Need for P2: (3, 2, 2) - (2, 0, 0) = (1, 2, 2)
Need for P3: (9, 0, 2) - (3, 0, 2) = (6, 0, 0)
Need for P4: (2, 2, 2) - (2, 1, 1) = (0, 1, 1)
Need for P5: (4, 3, 3) - (0, 0, 2) = (4, 3, 1)
Cont…
Ans. 2: Apply the banker's (safety) algorithm.
The available resources of A, B and C are 3, 3 and 2. We now check, process by process, whether each Need can be satisfied from the currently available resources.
Step 1: For process P1:
Need <= Available
(7, 4, 3) <= (3, 3, 2), so the condition is false.
So we examine another process, P2.
Step 2: For process P2:
Need <= Available
(1, 2, 2) <= (3, 3, 2), the condition is true.
New Available = Available + Allocation = (3, 3, 2) + (2, 0, 0) = (5, 3, 2)
Similarly, we examine another process, P3.
Step 3: For process P3:
Need <= Available
(6, 0, 0) <= (5, 3, 2), so the condition is false.
Cont…
Similarly, we examine another process, P4.
Step 4: For process P4:
Need <= Available
(0, 1, 1) <= (5, 3, 2), the condition is true.
New Available = Available + Allocation = (5, 3, 2) + (2, 1, 1) = (7, 4, 3)
Similarly, we examine another process, P5.
Step 5: For process P5:
Need <= Available
(4, 3, 1) <= (7, 4, 3), the condition is true.
New Available = Available + Allocation = (7, 4, 3) + (0, 0, 2) = (7, 4, 5)
Now we re-examine the remaining processes, P1 and P3.
Step 6: For process P1:
Need <= Available
(7, 4, 3) <= (7, 4, 5), the condition is true.
New Available = Available + Allocation = (7, 4, 5) + (0, 1, 0) = (7, 5, 5)
So we examine the remaining process, P3.
Cont…
Step 7: For process P3:
Need <= Available
(6, 0, 0) <= (7, 5, 5), the condition is true.
New Available = Available + Allocation = (7, 5, 5) + (3, 0, 2) = (10, 5, 7)
All processes can finish, so the system is in a safe state and the safe sequence is P2, P4, P5, P1, P3.
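The same safety check can be expressed compactly in code. A minimal C sketch using this example's matrices (array rows 0 to 4 correspond to P1 to P5; all names are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define NP 5   /* processes */
#define NR 3   /* resource types A, B, C */

int available[NR]      = {3, 3, 2};
int allocation[NP][NR] = {{0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2}};
int need[NP][NR]       = {{7,4,3}, {1,2,2}, {6,0,0}, {0,1,1}, {4,3,1}};

int main(void)
{
    int work[NR];
    bool finish[NP] = {false};
    int order[NP], count = 0;

    for (int j = 0; j < NR; j++)
        work[j] = available[j];

    /* repeatedly look for an unfinished process whose need fits into work */
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < NP; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < NR; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < NR; j++)
                    work[j] += allocation[i][j];   /* the process finishes and releases its resources */
                finish[i] = true;
                order[count++] = i + 1;            /* record P(i+1) in the safe sequence */
                progress = true;
            }
        }
    }

    if (count == NP) {
        printf("safe sequence:");
        for (int i = 0; i < NP; i++) printf(" P%d", order[i]);
        printf("\n");                              /* prints: P2 P4 P5 P1 P3 */
    } else {
        printf("unsafe state\n");
    }
    return 0;
}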
Ans. 3: To grant the request (1, 0, 0) for process P1, we first check that Request <= Need for P1, i.e. (1, 0, 0) <= (7, 4, 3), and that Request <= Available, i.e. (1, 0, 0) <= (3, 3, 2); both conditions are true. Pretending to allocate, Available becomes (2, 3, 2), the Allocation of P1 becomes (1, 1, 0) and the Need of P1 becomes (6, 4, 3). Re-running the safety algorithm on this new state still yields the safe sequence P2, P4, P5, P1, P3, so the system can grant the request to P1 immediately.
Example
Safe sequence: <P1, P4, P2, P3, P0>