CS252: Systems Programming

The process scheduler in an operating system manages the running of multiple processes, providing the illusion of simultaneous execution. Context switches are used to switch between processes. Key assumptions, metrics, and basic approaches in scheduling are explored, including workload assumptions and scheduling metrics like average turnaround time. Different scheduling algorithms like First Come First Serve and Shortest Job First are discussed in depth.

  • Operating System
  • Process Scheduling
  • Context Switch
  • Scheduling Metrics
  • Algorithms

Uploaded on Feb 16, 2025



Presentation Transcript


  1. CS252: Systems Programming. Ninghui Li. Based on slides by Prof. Gustavo Rodriguez-Rivera and Chapters 7 and 8 of OSTEP. Topic 20: Process Scheduling

  2. Process Scheduling. From the user's point of view, an OS allows running multiple processes simultaneously. In reality, on single-core computers, the OS runs one process after another to give the illusion that multiple processes run simultaneously. The Process Scheduler is the OS subsystem that runs one process after the other and decides which process to run next. A Context Switch is the procedure used by the OS to switch from one process to another.

  3. Process Scheduling. Steps of a Context Switch: save the state of the currently running process in the process table; load the next ready process from the process table and put its registers and PC in the CPU. Context switches may happen as a result of: a process needing to go to the wait state, so another process can be scheduled; a process yielding (giving up) the CPU so another process can run; a timer interrupt (only for Preemptive Scheduling).

  4. Thinking About Scheduling What are the key assumptions? What metrics are important? What basic approaches have been used?

  5. Workload Assumptions. Workload: the processes running in the system. Initial (unrealistic) assumptions: all jobs arrive at (about) the same time; once started, each job runs to completion; the run-time of each job is known; jobs do not perform I/O.

  6. Scheduling Metrics. Average turnaround time (completion time): the turnaround time for a job is the time the job completes minus the time at which the job arrives. This is a performance metric. There are also fairness metrics, which often conflict with performance metrics.
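The turnaround metric can be computed directly from its definition. A minimal Python sketch (not part of the slides; the arrival/completion pairs are made-up example data):

```python
# Turnaround = completion time - arrival time, averaged over all jobs.
# The (arrival, completion) pairs below are hypothetical example data.
jobs = [(0, 13), (0, 16), (0, 30)]
avg_turnaround = sum(complete - arrive for arrive, complete in jobs) / len(jobs)
print(avg_turnaround)
```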

  7. First Come First Serve. Can perform poorly when an early job is a long-running one, e.g., three jobs <100s, 10s, 10s>. Known as the convoy effect: a number of relatively short potential consumers get queued behind a heavyweight resource consumer. How to solve the problem?

  8. Shortest Job First. Solves the problem caused by the convoy effect. Provably optimal under the assumptions. A general scheduling principle: it can be applied to any system where the perceived turnaround time per customer (or, in our case, per job) matters (think of the max-10-item checkout line in a supermarket). Fairness is not as good as FCFS. What if jobs don't arrive at the same time?
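The FCFS convoy effect and the SJF improvement can be checked with a short simulation sketch (a Python toy, assuming all jobs arrive at time 0 and run to completion, per the workload assumptions above):

```python
def avg_turnaround(bursts):
    """Average turnaround time when jobs run to completion in the
    given order, all arriving at time 0."""
    t = 0
    total = 0
    for burst in bursts:
        t += burst            # this job completes at time t
        total += t            # turnaround = completion - arrival (0)
    return total / len(bursts)

fcfs = avg_turnaround([100, 10, 10])          # arrival order
sjf = avg_turnaround(sorted([100, 10, 10]))   # shortest job first
print(fcfs, sjf)  # 110.0 50.0
```

Under FCFS the two short jobs wait behind the 100s job (completions 100, 110, 120); SJF runs them first (completions 10, 20, 120), cutting the average turnaround from 110s to 50s.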

  9. Shortest Time-to-Completion First (STCF). We need to use Preemptive Scheduling: a job does not always run to completion once scheduled; it can be preempted to let another job run. Also known as Preemptive Shortest Job First (PSJF).
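A unit-time STCF simulation sketch (a Python toy, not from the slides): at every time step, the scheduler runs whichever arrived job has the least remaining time, preempting the current job if a shorter one shows up.

```python
import heapq

def stcf_avg_turnaround(jobs):
    """STCF/PSJF sketch at unit-time granularity.
    jobs = [(arrival, burst), ...]; returns average turnaround time."""
    jobs = sorted(jobs)                        # by arrival time
    ready = []                                 # heap of (remaining, arrival)
    turnarounds = []
    t, i = 0, 0
    while len(turnarounds) < len(jobs):
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, (jobs[i][1], jobs[i][0]))
            i += 1
        if not ready:                          # idle until next arrival
            t = jobs[i][0]
            continue
        remaining, arrival = heapq.heappop(ready)
        t += 1                                 # run shortest job one unit
        if remaining - 1 == 0:
            turnarounds.append(t - arrival)    # completion - arrival
        else:
            heapq.heappush(ready, (remaining - 1, arrival))
    return sum(turnarounds) / len(turnarounds)
```

With a 100s job at t=0 and two 10s jobs at t=10, the long job is preempted at t=10, the short jobs finish at t=20 and t=30, and the average turnaround is (120+10+20)/3 = 50.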

  10. A New Metric: Response Time. When jobs are interactive, things change. Response Time: the time a job first runs minus the time it arrives. We cannot have a job waiting until another is done. Three metrics now: turnaround time, response time, fairness.

  11. Incorporating I/O. Programs without I/O are uninteresting. I/O may take a long time, during which jobs/processes are blocked. Time to completion can thus be defined as the time to the next blocking I/O operation. Scheduling policies such as STCF require predicting the time to completion.

  12. Non-Preemptive Scheduling. In Non-Preemptive Scheduling, a context switch (switch from one process to another) happens only when the running process goes to a waiting state or gives up the CPU voluntarily. This is a very primitive type of scheduling, also called cooperative scheduling; it expects all programs to "play nice". It was used in Windows 3.1 and initial versions of MacOS. The main problem of Non-Preemptive Scheduling is that a misbehaved process that loops forever may hold the CPU and prevent other processes from running.

  13. Preemptive Scheduling. In Preemptive Scheduling, a context switch happens periodically (every 1/100 sec, or one quantum) as a result of a timer interrupt. A timer interrupt causes a context switch: the running process goes to the ready state, and the process that has been in the ready state the longest goes to the running state. Preemptive scheduling is implemented in UNIX, and in Windows 95 and above.

  14. Advantages/Disadvantages of Non-Preemptive Scheduling. Advantages of Non-Preemptive Scheduling: more control over how the CPU is used; simple to implement. Advantages of Preemptive Scheduling: more robust, since one process cannot monopolize the CPU; fairness, since the OS makes sure that CPU usage is the same for all running processes.

  15. Scheduling Policies for Preemptive Scheduling. Round Robin Scheduling (aka Time Slicing): each job runs for a time slice (aka scheduling quantum). Implementation of Round Robin: ready processes are kept in a queue; every time a quantum expires, a process is dequeued from the front of the queue and put in the running state, and the previously running process is added to the end of the queue.

  16. Round Robin (cont.). T=10ms. Burst times: P1=24ms, P2=3ms, P3=3ms. Schedule: P1 runs 0-10, P2 runs 10-13 (completes), P3 runs 13-16 (completes), P1 runs 16-26 and 26-30 (completes). Average completion time = (13+16+30)/3 ≈ 19.7ms
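The round-robin timelines on these slides can be reproduced with a short simulation sketch (a Python toy, assuming all jobs arrive at t=0, never block, and context switches are free):

```python
from collections import deque

def rr_completion_times(bursts, quantum):
    """Round-robin sketch: all jobs arrive at t=0 and never block.
    Returns each job's completion time, in input order."""
    queue = deque(enumerate(bursts))        # (job index, remaining time)
    t = 0
    done = [0] * len(bursts)
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)       # run one quantum (or less)
        t += run
        if remaining - run == 0:
            done[i] = t                     # job i completes at time t
        else:
            queue.append((i, remaining - run))
    return done

print(rr_completion_times([24, 3, 3], 10))  # [30, 13, 16] -> avg ~19.7ms
```

The same function reproduces the quantum-length examples that follow: with bursts [20, 1, 1] it gives completions [22, 11, 12] (average 15ms) for T=10ms and [22, 2, 3] (average 9ms) for T=1ms.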

  17. Quantum Length Analysis. What is the impact of quantum length? Assume T=10ms. Burst times: P1=20ms, P2=1ms, P3=1ms. Schedule: P1 runs 0-10, P2 runs 10-11 (completes), P3 runs 11-12 (completes), P1 runs 12-22 (completes). Average completion time = (11+12+22)/3 = 15ms

  18. Quantum Length Analysis. What if we choose a smaller quantum length? Assume T=1ms. Burst times: P1=20ms, P2=1ms, P3=1ms. Schedule: P1 runs 0-1, P2 runs 1-2 (completes), P3 runs 2-3 (completes), then P1 runs 3-22 (completes). Average completion time = (2+3+22)/3 = 9ms

  19. Quantum Length Analysis. The shorter the quantum, the shorter the average completion time. Based on this, we could make the quantum very small to achieve faster response time. What is the catch? We are not considering that a context switch takes time. We have to consider the context switch overhead.

  20. Context Switch Overhead. The context switch overhead is the time it takes to do a context switch as a portion of the quantum time: Xoverhd% = 100*X/T, where X = context switch time and T = quantum time. Assume X = 0.1ms. For T=10ms, overhead = 100*0.1/10 = 1%. For T=2ms, overhead = 100*0.1/2 = 5%. For T=0.2ms, overhead = 100*0.1/0.2 = 50%.

  21. Context Switch Overhead. Conclusions: the smaller the quantum, the faster the response time (small average completion time) but the larger the context switch overhead; the larger the quantum, the smaller the context switch overhead but the slower the response (large average completion time). The standard quantum of 1/100 sec = 10ms is a compromise between response time and context switch overhead. This may change in the future with faster processors.

  22. CPU Burst Distribution. Processes are in the waiting state most of the time, except for the times they need the CPU, which are usually short. These short periods of time during which a process needs the CPU are called CPU bursts. [Figure: distribution of CPU burst lengths; about 90% of bursts are shorter than 10ms.]

  23. CPU Burst Distribution. 90% of CPU bursts are shorter than 10ms. By choosing 10ms as the quantum length, we make sure that 90% of CPU bursts run until completion without being preempted. This reduces the context switch overhead, reduces the completion time, and makes the response time faster.

  24. How to Choose the Size of a Quantum. Make the quantum small enough to keep the response time small (fast). Make the quantum large enough to have an acceptable context switch overhead. Make the quantum large enough so that most CPU bursts are able to finish without being preempted.

  25. Towards Multilevel Feedback-Queue Scheduling (MLFQ). Goals: try to minimize both turnaround time and response time, and do so without the ability to predict how long a job will run. How to predict? "Prediction is very difficult, especially if it's about the future." --- Niels Bohr

  26. The Multilevel Feedback-Queue Scheduling (MLFQ). Instead of having a single queue of ready processes, there are multiple queues of different priorities. The scheduler schedules the ready processes with the highest priority first. Among processes of the same priority, round robin is used.

  27. Basic Idea of MLFQ. Maintain multiple queues with different priorities; each ready process belongs to one queue. Which queue a process belongs to depends on its history. Processes in a higher-priority queue are scheduled over those in lower-priority ones. Rule 1: If Priority(A) > Priority(B), A runs (B doesn't). Rule 2: If Priority(A) = Priority(B), A & B run in RR.

  28. How Are Priorities Adjusted? Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue). Rule 4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue). Rule 4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.

  29. Multilevel Feedback-Queue Scheduling. Large CPU burst: decrease priority. Small CPU burst: increase priority. Example queues: priority 10: P1, P6, P7; priority 9: P2, P5, P8; priority 8: P3, P4, P9; priority 7: P10, P11, P12. What are some problems?

  30. Three Problems. Some processes may be starved: long-running jobs are never scheduled. Process behavior may change: a process that starts out CPU-intensive may become I/O-intensive later. It is possible to game the scheduler: just give up the CPU right before the quantum is up.

  31. Solutions. The priority boost. Rule 5: After some time period S, move all the jobs in the system to the topmost queue. This solves the first two problems; the remaining challenge is choosing S. Better accounting. Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).

  32. Summary of MLFQ. Rule 1: If Priority(A) > Priority(B), A runs (B doesn't). Rule 2: If Priority(A) = Priority(B), A & B run in RR. Rule 3: When a job enters the system, it is placed at the highest priority (the topmost queue). Rule 4: Once a job uses up its time allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue). Rule 5: After some time period S, move all the jobs in the system to the topmost queue.
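The five rules can be sketched as a toy simulator (a Python illustration, not an OS implementation; it assumes all jobs arrive at t=0, no I/O, so Rule 4b never fires, and a per-level allotment of one quantum):

```python
from collections import deque

def mlfq(jobs, quantum=4, levels=3, boost=50):
    """Toy MLFQ sketch of Rules 1-5. jobs maps name -> burst time.
    Returns each job's completion time."""
    queues = [deque() for _ in range(levels)]   # queues[0] = top priority
    remaining = dict(jobs)
    for name in jobs:
        queues[0].append(name)                  # Rule 3: enter on top
    t, since_boost, done = 0, 0, {}
    while remaining:
        if since_boost >= boost:                # Rule 5: priority boost
            for level in range(1, levels):
                while queues[level]:
                    queues[0].append(queues[level].popleft())
            since_boost = 0
        # Rules 1-2: pick the highest non-empty queue, RR within it
        level = next(l for l in range(levels) if queues[l])
        name = queues[level].popleft()
        run = min(quantum, remaining[name])
        t += run
        since_boost += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = t
        else:                                   # Rule 4: demote one level
            del_level = min(level + 1, levels - 1)
            queues[del_level].append(name)
        if name in done:
            del remaining[name]
    return done

print(mlfq({"A": 8, "B": 4}))
```

With an 8ms job A and a 4ms job B and a 4ms quantum, A uses its allotment and is demoted, B finishes at t=8 on the top queue, and A finishes at t=12: the short job is favored without any prediction of run times.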

  33. More Scheduling Theory. MLFQ is used in BSD Unix, Solaris, and Windows NT and later systems, sometimes with variants. Linux currently uses the Completely Fair Scheduler (CFS) as the default scheduler; it tries to ensure that each process receives a fair amount of CPU, with the fair share depending on priority, when the process started, and how much CPU time it has used.

  34. Scheduler-Related System Calls in Linux. nice(): lower a process's static priority. getpriority()/setpriority(): change the priorities of a process group. sched_getscheduler()/sched_setscheduler(): set the scheduling policy and parameters. (Many more starting with sched_; use man -k to learn their names.)
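These calls can be tried without writing C: Python's os module wraps them on Unix. A small sketch (Unix-only; assumes the process starts at the default nice value or higher):

```python
import os

# os.getpriority/os.setpriority and os.nice wrap the getpriority(),
# setpriority(), and nice() system calls (Unix only).
# PRIO_PROCESS targets a single process; pid 0 means the calling process.
current = os.getpriority(os.PRIO_PROCESS, 0)
print("current nice value:", current)

# nice(1) adds 1 to the calling process's nice value, lowering priority.
# (Raising priority, i.e. a negative increment, normally requires root.)
print("after os.nice(1):", os.nice(1))
```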

  35. Clicker Question 1 (IPC) When several processes that are created independently (not by the same parent) want to communicate (sending and receiving data), the best way to do is via: A. Signals B. PIPE C. FIFO D. Shared anonymous memory mapping E. POSIX shared memory without file backing

  36. Clicker Question 2 (Scheduling) Given two jobs that need to run 10ms, and 50ms each (with the 10ms one arriving first), assuming Round Robin is used with quantum of 3ms, what is the average completion time (ignoring context switch overhead)? A. 25 ms B. 30 ms C. 36.5 ms D. 39.5 ms E. None of the above

  37. Clicker Question 3 (Scheduling) What are weaknesses of Round Robin (each process taking turns to run either to blocking I/O or when time slice is used up)? A. It is unfair to I/O-bound processes B. It is unfair to CPU-bound processes C. The average completion time can be significantly improved D. Both A and C E. Both B and C

  38. Clicker Question 4 (For fun). You have 25 horses and want to find the 3 fastest. In one race you can race 5 horses, and you don't have a timer. What is the minimal number of races needed to find the 3 fastest horses among the 25? Assume the horses all race consistently, that is, if one horse is faster than another, it will always beat it in each race. A. 5 B. 6 C. 7 D. 8 E. 9 or higher

  39. Review Questions What is a context switch? What may cause a context switch? What are non-preemptive scheduling and preemptive scheduling? What are the pros & cons of each? Which one is used in modern OS? How does the round robin scheduling work?

  40. Review Questions. How to compute response time and overhead for a given quantum length? How are response time and overhead affected by quantum length? What are the factors for choosing quantum length? (Response time, overhead, CPU burst distribution.) What is Multilevel Feedback-Queue Scheduling? Key idea: use multiple priority queues; reward processes with smaller CPU bursts.
