Process Management & CPU Scheduling

tl;dr

This unit dives into process management and CPU scheduling, exploring process concepts, scheduling, and inter-process communication. You will learn about multithreaded programming, multi-core processing, and threading models, gaining insight into the efficient execution of concurrent processes. The unit also covers CPU scheduling algorithms, their evaluation criteria, and their impact on system performance, preparing you to optimize resource allocation and multitasking in modern computing environments.

Process Management is a crucial function of an Operating System (OS) that ensures the efficient execution of programs by managing process creation, execution, termination, and communication. The OS allocates resources, schedules processes, and enables communication between them, ensuring smooth multitasking and system stability.

What is a Process?

A process is an executing program, consisting of:

  • Code (Text Section) – Program instructions.
  • Data Section – Variables, constants.
  • Heap – Dynamically allocated memory.
  • Stack – Stores function calls and local variables.

Process vs. Program

| Aspect | Process | Program |
| --- | --- | --- |
| Definition | Active execution of a program | Set of instructions stored on disk |
| Dynamic/Static | Dynamic (changes during execution) | Static (stored on disk) |
| Example | A running Chrome browser tab | Chrome.exe on the hard drive |

Process States

A process goes through various states during execution:

  • New – Process is created but not yet ready to execute.
  • Ready – Process is waiting for CPU allocation.
  • Running – Process is currently executing on the CPU.
  • Waiting – Process is waiting for an event (I/O completion, resource availability).
  • Terminated – Process execution is complete.

Example: When you open Microsoft Word, it transitions from New → Ready → Running → Waiting (if printing) → Terminated.

Process Scheduling

What is Process Scheduling?

Process scheduling determines which process gets CPU time. The OS maintains process scheduling queues:

  • Job Queue – Stores all system processes.
  • Ready Queue – Stores processes ready for execution.
  • Waiting Queue – Stores processes waiting for an event (e.g., I/O).

Types of Scheduling

| Scheduling Type | Description | Example |
| --- | --- | --- |
| Long-Term Scheduler | Decides which processes enter the system (controls the degree of multiprogramming). | Batch processing |
| Short-Term Scheduler (CPU Scheduler) | Selects processes from the ready queue for execution. | CPU time-sharing |
| Medium-Term Scheduler | Suspends and resumes processes to optimize performance. | Virtual memory swapping |

CPU Scheduling Algorithms

| Algorithm | Description | Pros | Cons |
| --- | --- | --- | --- |
| First Come First Serve (FCFS) | Executes processes in order of arrival. | Simple, fair | Can cause long waiting times (Convoy Effect) |
| Shortest Job Next (SJN) | Runs the shortest process first. | Efficient, reduces waiting time | Difficult to predict job length |
| Round Robin (RR) | Allocates a fixed time slice (quantum) to each process. | Ensures fairness | High overhead due to frequent context switching |
| Priority Scheduling | Assigns a priority to each process. | Ensures critical processes run first | Can cause starvation (low-priority processes wait indefinitely) |
| Multilevel Queue | Divides processes into queues (foreground, background). | Efficient for multitasking | Complex to implement |

Example: In a time-sharing system, Round Robin is used to give each user a fair share of CPU time.

Process Creation

A process is created using system calls (e.g., fork() in UNIX).

Parent and Child Processes:

  • The parent process creates a child process.
  • The child process can inherit resources or get new ones.

Example: When you open a new tab in Chrome, a new child process is created.
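Process creation can be sketched with Python's `os.fork`, a thin wrapper over the UNIX system call (POSIX-only; the exit status 7 below is an arbitrary value chosen so the parent can recognize it):

```python
import os

def spawn_child() -> int:
    """fork() a child process and return its exit status (POSIX only)."""
    pid = os.fork()                  # returns 0 in the child, the child's PID in the parent
    if pid == 0:
        # Child process: inherits the parent's open files and environment.
        os._exit(7)                  # terminate immediately with a known status
    _, status = os.waitpid(pid, 0)   # parent blocks until the child exits
    return os.WEXITSTATUS(status)
```

The single `fork()` call returns twice, once in each process, which is how the parent and child take different branches.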

Process Termination

A process terminates when:

  • It completes execution.
  • It is terminated by another process (e.g., kill command in Linux).
  • It encounters a fatal error (e.g., division by zero).

Process States Transition Diagram

New → Ready → Running → Waiting → Ready → Running → Terminated
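The transitions above can be encoded as a small lookup table and used to validate any state sequence (a sketch; the transition set mirrors the diagram, with Running → Ready representing preemption):

```python
# Allowed transitions between the five process states described above.
TRANSITIONS = {
    "New": {"Ready"},
    "Ready": {"Running"},
    "Running": {"Ready", "Waiting", "Terminated"},  # Running -> Ready is preemption
    "Waiting": {"Ready"},
    "Terminated": set(),
}

def is_valid_path(states) -> bool:
    """Check that every consecutive pair of states is a legal transition."""
    return all(b in TRANSITIONS[a] for a, b in zip(states, states[1:]))
```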

Process Control Block (PCB)

Each process has a PCB, which stores:

  • Process ID (PID)
  • Process state (Ready, Running, Waiting)
  • Program counter (next instruction to execute)
  • CPU registers
  • Memory information
  • I/O device usage

Purpose: The OS uses the PCB to manage processes and perform context switching.
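A PCB can be sketched as a small data structure; the fields and the `context_switch` helper below are illustrative, not any real kernel's layout:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block (fields simplified)."""
    pid: int
    state: str = "New"           # New / Ready / Running / Waiting / Terminated
    program_counter: int = 0     # next instruction to execute
    registers: dict = field(default_factory=dict)

def context_switch(current: PCB, next_proc: PCB, cpu_registers: dict) -> dict:
    """Save the running process's CPU state into its PCB, load the next one's."""
    current.registers = dict(cpu_registers)  # save outgoing process state
    current.state = "Ready"
    next_proc.state = "Running"
    return dict(next_proc.registers)         # restore incoming process state
```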

What is IPC?

Inter-Process Communication (IPC) allows processes to exchange data and signals while running in the system.

Types of IPC

| IPC Method | Description | Example |
| --- | --- | --- |
| Shared Memory | Multiple processes access the same memory region. | Client-server applications, databases |
| Message Passing | Processes send and receive messages via the OS. | Email services, chat applications |
| Pipes | Unidirectional communication between related processes. | Piping one shell command into another (e.g. `ls` into `grep`) |
| Sockets | Communication between processes over a network. | Web browsers, FTP clients |
| Signals | Asynchronous notifications sent to processes. | Ctrl + C to terminate a process |

Example: When you copy text from one application and paste it into another, the OS clipboard transfers the data between the two processes, a form of IPC.
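A pipe like the one in the table can be sketched with Python's `os.pipe`, which creates an anonymous, unidirectional channel (the 1024-byte read is enough for this short message):

```python
import os

def pipe_roundtrip(message: bytes) -> bytes:
    """Send bytes through an anonymous pipe and read them back."""
    read_fd, write_fd = os.pipe()   # unidirectional: write end -> read end
    os.write(write_fd, message)
    os.close(write_fd)              # close the write end so the reader sees EOF
    data = os.read(read_fd, 1024)
    os.close(read_fd)
    return data
```

In practice the two file descriptors would be split between a parent and a child process, which is why pipes only connect related processes.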

IPC Synchronization

When multiple processes share resources, synchronization is needed to avoid conflicts.

Race Condition

Occurs when multiple processes try to modify the same resource simultaneously, leading to unpredictable results.

Example: Two threads updating a bank balance at the same time without synchronization.

Synchronization Techniques

  • Semaphores – A counter that controls access to resources.
  • Mutex (Mutual Exclusion) – Allows only one process at a time to access a resource.
  • Monitors – Encapsulates shared resources and enforces synchronization.

Example: A printer queue uses semaphores to ensure that one process prints at a time.
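The bank-balance race described above can be prevented with a mutex; a minimal sketch using Python's `threading.Lock` (the thread and iteration counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times: int) -> None:
    """Increment the shared balance; the lock makes read-modify-write atomic."""
    global counter
    for _ in range(times):
        with lock:          # mutual exclusion: one thread in this section at a time
            counter += 1

def run_demo(n_threads: int = 2, times: int = 100_000) -> int:
    threads = [threading.Thread(target=deposit, args=(times,)) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Without the `with lock:` block, the two threads could interleave their read-modify-write steps and lose updates.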

| Feature | Process Scheduling | Inter-Process Communication (IPC) |
| --- | --- | --- |
| Purpose | Determines process execution order. | Enables data exchange between processes. |
| Types | FCFS, Round Robin, Priority Scheduling. | Shared Memory, Message Passing, Pipes, Sockets. |
| OS Role | Allocates CPU time to processes. | Manages data transmission between processes. |
| Example | Task Manager scheduling CPU time. | WhatsApp messages sent via IPC. |

Process management is a core function of an operating system, enabling multitasking and efficient resource utilization. The OS uses process scheduling to optimize CPU performance and inter-process communication (IPC) to enable collaboration between processes.

Key Takeaways
  • Process = Running program with distinct states (New, Ready, Running, Waiting, Terminated).
  • Process Scheduling = OS schedules processes using FCFS, Round Robin, Priority, etc.
  • Operations on Processes = Creation (fork()), termination, and state transitions.
  • IPC Methods = Shared Memory, Message Passing, Pipes, Sockets.
  • Synchronization = Semaphores, Mutexes, and Monitors prevent race conditions.

CPU scheduling is a critical component of an Operating System (OS) that determines which process gets CPU time. It ensures efficient multitasking, optimizes CPU utilization, and improves system responsiveness.

This blog covers:

  • Multithreaded Programming
  • Multi-Core Programming
  • Multi-Threading Models
  • Scheduling Criteria
  • Scheduling Algorithms
  • Algorithm Evaluation

What is Multithreading?

Multithreading allows a process to be divided into multiple independent threads, enabling parallel execution.

Example: A web browser runs multiple threads:

  • One thread loads the webpage.
  • Another thread downloads files.
  • Another handles user input.
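The browser scenario above can be sketched with Python's `threading` module; the task names are illustrative stand-ins for real subtasks:

```python
import threading

results = []
results_lock = threading.Lock()

def worker(task: str) -> None:
    """Simulate one browser subtask (rendering, downloading, input handling)."""
    with results_lock:
        results.append(f"{task} done")

def run_browser_tasks() -> list:
    tasks = ["render page", "download file", "handle input"]
    threads = [threading.Thread(target=worker, args=(t,)) for t in tasks]
    for t in threads:
        t.start()       # all three run concurrently within one process
    for t in threads:
        t.join()        # wait for every thread to finish
    return sorted(results)
```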

Benefits of Multithreading

  • Improves Performance – Threads run concurrently, reducing execution time.
  • Efficient CPU Utilization – Prevents CPU idle time.
  • Better Responsiveness – GUI applications remain responsive.
  • Resource Sharing – Threads share memory and resources within a process.

Threads vs. Processes

| Aspect | Thread | Process |
| --- | --- | --- |
| Definition | Lightweight unit of execution | Independent running program |
| Memory | Shares memory with its parent process | Has its own memory space |
| Execution | Runs independently within a process | Runs separately |
| Example | Multiple tabs in a browser | Running Chrome, VS Code, and Word |

Modern CPUs have multiple cores, allowing true parallel execution of threads.

What is Multi-Core Processing?

  • A multi-core processor has multiple independent processing units (cores).
  • Each core can execute multiple threads simultaneously.

Example:
A quad-core processor can handle four threads at the same time.

Advantages of Multi-Core Programming

  • Higher Performance – Executes more tasks in parallel.
  • Energy Efficiency – Uses less power per task compared to single-core CPUs.
  • Improved Multitasking – Runs multiple applications smoothly.

Challenges in Multi-Core Programming

  • Thread Synchronization Issues – Threads may interfere with each other.
  • Load Balancing – Work must be evenly distributed across cores.
  • Complex Debugging – Race conditions and deadlocks are difficult to find.

Multi-Threading Models

Multithreading models define how user threads map to kernel threads.

User Threads vs. Kernel Threads

| Type | Managed By | Context Switching | Speed |
| --- | --- | --- | --- |
| User Threads | User-level libraries | Fast (OS doesn’t interfere) | High |
| Kernel Threads | OS kernel | Slower (needs OS scheduling) | Medium |

Types of Multi-Threading Models

| Model | Description | Example |
| --- | --- | --- |
| Many-to-One | Multiple user threads map to one kernel thread. | Green Threads in Java |
| One-to-One | Each user thread has a corresponding kernel thread. | Windows, Linux Pthreads |
| Many-to-Many | Many user threads map to a pool of kernel threads. | Solaris, Windows Thread Pool |

Example: A web server uses the Many-to-Many model, where multiple user requests are handled by a limited number of kernel threads.
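The pooling idea behind the Many-to-Many model can be sketched with Python's `ThreadPoolExecutor` (note: CPython threads are kernel threads, so this illustrates multiplexing many requests onto few workers rather than a true user-to-kernel thread mapping; the request handler is a stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> str:
    """Stand-in for servicing one client request."""
    return f"response {request_id}"

def serve(n_requests: int = 8, pool_size: int = 3) -> list:
    # Many 'requests' are multiplexed onto a small pool of worker threads.
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        return list(pool.map(handle_request, range(n_requests)))
```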

Scheduling Criteria

Scheduling criteria define how the OS evaluates CPU scheduling algorithms.

Key Scheduling Criteria

| Criterion | Definition | Goal |
| --- | --- | --- |
| CPU Utilization | Keeps the CPU as busy as possible. | Maximize efficiency |
| Throughput | Number of completed processes per second. | Higher is better |
| Turnaround Time | Time taken to complete a process. | Minimize |
| Waiting Time | Time a process spends in the ready queue. | Minimize |
| Response Time | Time from request to first response. | Ensure responsiveness |

Example: A real-time system prioritizes low response time, while a batch system prioritizes high throughput.

Scheduling Algorithms

The OS uses different algorithms to allocate CPU time efficiently.

Preemptive vs. Non-Preemptive Scheduling

| Type | Description | Example |
| --- | --- | --- |
| Preemptive | OS can interrupt a running process. | Round Robin, Priority Scheduling |
| Non-Preemptive | Process runs until completion. | FCFS, SJN |

Types of Scheduling Algorithms

First Come First Serve (FCFS)

  • Executes processes in order of arrival.
  • Pros: Simple, fair.
  • Cons: Can cause Convoy Effect (slow processes delay fast ones).

Example: A print queue processes jobs one by one in order.
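With every process arriving at t = 0, FCFS waiting times are just the cumulative burst times of the jobs ahead; a minimal sketch:

```python
def fcfs_waiting_times(bursts):
    """FCFS with all processes arriving at t=0: each waits for those before it."""
    waiting, elapsed = [], 0
    for burst in bursts:
        waiting.append(elapsed)   # time spent in the ready queue
        elapsed += burst
    return waiting
```

A long job at the front, e.g. bursts `[24, 3, 3]`, makes the short jobs wait 24 and 27 units, which is exactly the convoy effect.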

Shortest Job Next (SJN) / Shortest Job First (SJF)

  • Executes the shortest process first.
  • Pros: Minimum average waiting time.
  • Cons: Starvation (long jobs may never execute).

Example: Used in batch processing where job length is known.
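Under the same all-arrive-at-t=0 assumption, SJN simply sorts jobs by burst length before charging waiting time:

```python
def sjn_waiting_times(bursts):
    """SJN with all jobs known at t=0: run the shortest first."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting, elapsed = [0] * len(bursts), 0
    for i in order:
        waiting[i] = elapsed      # time this job waits before its turn
        elapsed += bursts[i]
    return waiting
```

For bursts `[5, 3, 2]` the waits are `[5, 2, 0]` (average 7/3), lower than the FCFS average of 13/3 for the same jobs.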

Round Robin (RR)

  • Each process gets a fixed time slice (quantum) before switching.
  • Pros: Ensures fair CPU time allocation.
  • Cons: High context-switching overhead.

Example: Used in time-sharing systems (e.g., online gaming servers).
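Round Robin can be simulated with a ready queue and a fixed quantum (a sketch assuming all processes arrive at t = 0):

```python
from collections import deque

def round_robin_completion(bursts, quantum):
    """Simulate RR at t=0; return each process's completion time."""
    remaining = list(bursts)
    completion = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # run for at most one time slice
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # preempted: back to the ready queue
        else:
            completion[i] = clock
    return completion
```

Each pass through the loop is one time slice, and every re-queue corresponds to a context switch, which is where RR's overhead comes from.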

Priority Scheduling

  • Higher priority processes execute first.
  • Pros: Ensures urgent tasks run first.
  • Cons: Can cause Starvation (low-priority processes wait indefinitely).

Example: Emergency system prioritizing critical medical alarms.
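Non-preemptive priority scheduling follows the same pattern as SJN, ordering by priority instead of burst length (here a lower number means higher priority, a common convention):

```python
def priority_waiting_times(bursts, priorities):
    """Non-preemptive priority scheduling at t=0 (lower number = higher priority)."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waiting, elapsed = [0] * len(bursts), 0
    for i in order:
        waiting[i] = elapsed
        elapsed += bursts[i]
    return waiting
```

If high-priority jobs keep arriving, the lowest-priority job's wait grows without bound, which is the starvation problem noted above.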

Multilevel Queue Scheduling

  • Divides processes into different queues (foreground, background).
  • Each queue has its own scheduling policy (RR for interactive tasks, FCFS for batch jobs).

Example: Used in Windows task scheduling.

Algorithm Evaluation

How do we evaluate which scheduling algorithm is best?

Evaluation Metrics

| Metric | Formula | Goal |
| --- | --- | --- |
| Turnaround Time | Completion Time − Arrival Time | Minimize |
| Waiting Time | Turnaround Time − Burst Time | Minimize |
| Response Time | First Response Time − Arrival Time | Minimize |
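The three formulas can be applied directly; the sample numbers below are illustrative, not from any specific workload:

```python
def process_metrics(arrival, burst, completion, first_response):
    """Compute the three evaluation metrics from the formulas above."""
    turnaround = completion - arrival      # total time in the system
    waiting = turnaround - burst           # time not spent executing
    response = first_response - arrival    # delay before the first CPU slice
    return turnaround, waiting, response
```

For a process arriving at t=0 with a 5-unit burst, completing at t=12 after first running at t=2, the metrics are turnaround 12, waiting 7, response 2.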

Performance Comparison of Scheduling Algorithms

| Algorithm | Best For | Drawbacks |
| --- | --- | --- |
| FCFS | Simple, batch jobs | Long wait times for short jobs |
| SJN | Minimized waiting time | Starvation of long jobs |
| RR | Time-sharing systems | Context-switching overhead |
| Priority | Urgent tasks | Starvation of low-priority tasks |
| Multilevel Queue | Systems with different priority levels | Complex implementation |

Example: A real-time OS (e.g., in a pacemaker) uses priority scheduling, while gaming servers use Round Robin.

CPU scheduling is essential for efficient multitasking and optimal CPU utilization. The OS uses scheduling algorithms to balance performance, fairness, and responsiveness.
