Process Management in Operating Systems
Process Management is a crucial function of an Operating System (OS) that ensures the efficient execution of programs by managing process creation, execution, termination, and communication. The OS allocates resources, schedules processes, and enables communication between them, ensuring smooth multitasking and system stability.
Process Concepts
What is a Process?
A process is an executing program, consisting of:
- Code (Text Section) – Program instructions.
- Data Section – Variables, constants.
- Heap – Dynamically allocated memory.
- Stack – Stores function calls and local variables.
Process vs. Program
| Aspect | Process | Program |
| --- | --- | --- |
| Definition | Active execution of a program | Passive set of instructions stored on disk |
| Dynamic/Static | Dynamic (changes during execution) | Static (stored on disk) |
| Example | A running Chrome browser tab | Chrome.exe on the hard drive |
Process States
A process goes through various states during execution:
- New – Process is created but not yet ready to execute.
- Ready – Process is waiting for CPU allocation.
- Running – Process is currently executing on the CPU.
- Waiting – Process is waiting for an event (I/O completion, resource availability).
- Terminated – Process execution is complete.
Example: When you open Microsoft Word, it transitions from New → Ready → Running → Waiting (if printing) → Terminated.
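The five-state model above can be sketched as a simple transition table. This is an illustrative sketch, not an actual OS data structure; the state names and the `is_valid` helper are ours:

```python
# Allowed transitions in the five-state process model (illustrative only).
ALLOWED = {
    "New": {"Ready"},
    "Ready": {"Running"},
    "Running": {"Waiting", "Ready", "Terminated"},  # block, get preempted, or exit
    "Waiting": {"Ready"},
    "Terminated": set(),
}

def is_valid(path):
    """Check that every consecutive pair of states is a legal transition."""
    return all(b in ALLOWED[a] for a, b in zip(path, path[1:]))

# The Microsoft Word example: New -> Ready -> Running -> Waiting -> Ready -> Running -> Terminated
print(is_valid(["New", "Ready", "Running", "Waiting",
                "Ready", "Running", "Terminated"]))  # True
```

Note that a process can never jump straight from New to Running: the scheduler must first place it in the ready queue.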
Process Scheduling
What is Process Scheduling?
Process scheduling determines which process gets CPU time. The OS maintains process scheduling queues:
- Job Queue – Stores all system processes.
- Ready Queue – Stores processes ready for execution.
- Waiting Queue – Stores processes waiting for an event (e.g., I/O).
Types of Scheduling
| Scheduling Type | Description | Example |
| --- | --- | --- |
| Long-Term Scheduler | Decides which processes enter the system (controls the degree of multiprogramming). | Batch processing |
| Short-Term Scheduler (CPU Scheduler) | Selects processes from the ready queue for execution. | CPU time-sharing |
| Medium-Term Scheduler | Suspends and resumes processes to optimize performance. | Virtual memory swapping |
CPU Scheduling Algorithms
| Algorithm | Description | Pros | Cons |
| --- | --- | --- | --- |
| First Come First Serve (FCFS) | Executes processes in order of arrival. | Simple, fair | Can cause long waiting times (Convoy Effect) |
| Shortest Job Next (SJN) | Runs the shortest process first. | Efficient, reduces waiting time | Difficult to predict job length |
| Round Robin (RR) | Allocates a fixed time slice (quantum) to each process. | Ensures fairness | High overhead due to frequent context switching |
| Priority Scheduling | Assigns a priority to each process. | Ensures critical processes run first | Can cause starvation (low-priority processes wait indefinitely) |
| Multilevel Queue | Divides processes into queues (foreground, background). | Efficient for multitasking | Complex to implement |
Example: In a time-sharing system, Round Robin is used to give each user a fair share of CPU time.
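Round Robin is easy to see in a small simulation. The sketch below assumes all processes arrive at time 0 and uses made-up burst times; it tracks only completion times, ignoring context-switch cost:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin for processes that all arrive at time 0.

    bursts: {pid: burst_time}. Returns {pid: completion_time}.
    """
    remaining = dict(bursts)
    queue = deque(bursts)                    # ready queue, in arrival order
    time, done = 0, {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])   # run for at most one time slice
        time += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            done[pid] = time                 # process finished
        else:
            queue.append(pid)                # preempted: back of the queue
    return done

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# -> {'P3': 5, 'P2': 8, 'P1': 9}
```

The short job P3 finishes early even though it arrived last in the queue, which is exactly the fairness property Round Robin is chosen for.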
Operations on Processes
Process Creation
A process is created using system calls (e.g., fork() in UNIX).
Parent and Child Processes:
- The parent process creates a child process.
- The child process can inherit resources or get new ones.
Example: When you open a new tab in Chrome, a new child process is created.
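A minimal sketch of `fork()` from Python (POSIX-only; `os.fork` is unavailable on Windows). The child inherits a copy of the parent's memory, and `fork()` returns 0 in the child but the child's PID in the parent:

```python
import os

pid = os.fork()
if pid == 0:
    # Child process: a near-copy of the parent.
    print(f"child pid={os.getpid()}, parent pid={os.getppid()}")
    os._exit(0)                      # child terminates here
else:
    # Parent process: fork() returned the child's PID.
    _, status = os.waitpid(pid, 0)   # wait for (reap) the child
    print(f"parent reaped child {pid}, "
          f"exit code {os.waitstatus_to_exitcode(status)}")
```

The parent's `waitpid` call matters: without it, a terminated child lingers as a "zombie" until the parent collects its exit status.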
Process Termination
A process terminates when:
- It completes execution.
- It is terminated by another process (e.g., kill command in Linux).
- It encounters a fatal error (e.g., division by zero).
Process States Transition Diagram
New → Ready → Running → Waiting → Ready → Running → Terminated
Process Control Block (PCB)
Each process has a PCB, which stores:
- Process ID (PID)
- Process state (Ready, Running, Waiting)
- Program counter (next instruction to execute)
- CPU registers
- Memory information
- I/O device usage
Purpose: The OS uses the PCB to manage processes and perform context switching.
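The fields above can be pictured as a small record. This is a toy model, not a real kernel structure (real PCBs hold far more, and context switching happens in kernel mode), but it shows what "saving and restoring context" means:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block (illustrative fields only)."""
    pid: int
    state: str = "New"            # New / Ready / Running / Waiting / Terminated
    program_counter: int = 0      # address of the next instruction
    registers: dict = field(default_factory=dict)
    memory_base: int = 0          # simplified memory information
    open_files: list = field(default_factory=list)

def context_switch(current: PCB, nxt: PCB, pc: int, regs: dict):
    """Save the running process's context into its PCB, load the next one's."""
    current.program_counter, current.registers = pc, regs
    current.state = "Ready"
    nxt.state = "Running"
    return nxt.program_counter, nxt.registers   # restored context

p1 = PCB(pid=1, state="Running")
p2 = PCB(pid=2, state="Ready", program_counter=100)
pc, regs = context_switch(p1, p2, pc=42, regs={"eax": 7})
print(pc, p1.state, p2.state)   # 100 Ready Running
```

After the switch, P1's progress (program counter 42, its registers) is preserved in its PCB, so it can resume later exactly where it left off.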
Inter-Process Communication (IPC)
What is IPC?
Inter-Process Communication (IPC) allows processes to exchange data and signals while running in the system.
Types of IPC
| IPC Method | Description | Example |
| --- | --- | --- |
| Shared Memory | Multiple processes access the same memory region. | Client-server applications, databases |
| Message Passing | Processes send and receive messages via the OS. | Email services, chat applications |
| Pipes | Unidirectional communication between related processes. | Linux pipelines such as `ls \| wc -l` |
| Sockets | Communication between processes over a network. | Web browsers, FTP clients |
| Signals | Asynchronous notifications sent to processes. | Ctrl + C to terminate a process |
Example: When you copy text from one application and paste it into another, the OS clipboard acts as an IPC channel between the two processes.
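Pipes are the easiest IPC method to demonstrate. The sketch below (POSIX-only) combines `fork()` with `os.pipe()`: the child writes into one end, the parent reads from the other, and the data flows in one direction only:

```python
import os

r, w = os.pipe()                 # unidirectional: write end -> read end
pid = os.fork()
if pid == 0:
    # Child: close the unused read end, send a message, exit.
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)
else:
    # Parent: close the unused write end, receive the message.
    os.close(w)
    msg = os.read(r, 1024)
    os.waitpid(pid, 0)           # reap the child
    print(msg.decode())          # hello from child
```

This is exactly the mechanism behind shell pipelines: the shell forks two processes and connects one's stdout to the other's stdin through a pipe.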
IPC Synchronization
When multiple processes share resources, synchronization is needed to avoid conflicts.
Race Condition
Occurs when multiple processes try to modify the same resource simultaneously, leading to unpredictable results.
Example: Two threads updating a bank balance at the same time without synchronization.
Synchronization Techniques
- Semaphores – A counter that controls access to resources.
- Mutex (Mutual Exclusion) – Allows only one process at a time to access a resource.
- Monitors – Encapsulates shared resources and enforces synchronization.
Example: A printer queue uses semaphores to ensure that one process prints at a time.
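The bank-balance race from above disappears once the shared update is wrapped in a mutex. A minimal sketch using Python's `threading.Lock` (threads rather than processes, but the synchronization idea is the same):

```python
import threading

counter = 0                      # shared resource (e.g., a bank balance)
lock = threading.Lock()          # mutex: one thread in the critical section

def deposit(times):
    global counter
    for _ in range(times):
        with lock:               # acquire/release around the shared update
            counter += 1         # critical section

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 400000 -- deterministic, thanks to the lock
```

Without the lock, the read-modify-write of `counter += 1` could interleave across threads and silently lose updates; with it, every increment is applied exactly once.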
Comparison of Process Scheduling and IPC
| Feature | Process Scheduling | Inter-Process Communication (IPC) |
| --- | --- | --- |
| Purpose | Determines process execution order. | Enables data exchange between processes. |
| Types | FCFS, Round Robin, Priority Scheduling | Shared Memory, Message Passing, Pipes, Sockets |
| OS Role | Allocates CPU time to processes. | Manages data transmission between processes. |
| Example | Task Manager showing CPU time per process | A shell pipeline passing data between commands |
Conclusion
Process management is a core function of an operating system, enabling multitasking and efficient resource utilization. The OS uses process scheduling to optimize CPU performance and inter-process communication (IPC) to enable collaboration between processes.
Key Takeaways
- Process = Running program with distinct states (New, Ready, Running, Waiting, Terminated).
- Process Scheduling = OS schedules processes using FCFS, Round Robin, Priority, etc.
- Operations on Processes = Creation (fork()), termination, and state transitions.
- IPC Methods = Shared Memory, Message Passing, Pipes, Sockets.
- Synchronization = Semaphores, Mutex, and Monitors prevent race conditions.
CPU Scheduling: A Complete Guide
CPU scheduling is a critical component of an Operating System (OS) that determines which process gets CPU time. It ensures efficient multitasking, optimizes CPU utilization, and improves system responsiveness.
This blog covers:
- Multithreaded Programming
- Multi-Core Programming
- Multi-Threading Models
- Scheduling Criteria
- Scheduling Algorithms
- Algorithm Evaluation
Multithreaded Programming
What is Multithreading?
Multithreading allows a process to be divided into multiple independent threads, enabling parallel execution.
Example: A web browser runs multiple threads:
- One thread loads the webpage.
- Another thread downloads files.
- Another handles user input.
Benefits of Multithreading
- Improves Performance – Threads run concurrently, reducing execution time.
- Efficient CPU Utilization – Prevents CPU idle time.
- Better Responsiveness – GUI applications remain responsive.
- Resource Sharing – Threads share memory and resources within a process.
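Two of these benefits — resource sharing and responsiveness — are visible in a small sketch. The task names and delay are made up; `time.sleep` stands in for real I/O such as a download:

```python
import threading
import time

results = {}                       # shared memory: every thread sees this dict

def fetch(name, delay):
    """Simulated I/O-bound task (a stand-in for loading a page or file)."""
    time.sleep(delay)
    results[name] = f"{name} done"

start = time.perf_counter()
tasks = [threading.Thread(target=fetch, args=(n, 0.2))
         for n in ("page", "file", "input")]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
elapsed = time.perf_counter() - start
print(sorted(results), round(elapsed, 1))  # the three waits overlap: ~0.2s, not 0.6s
```

Because the threads block on I/O concurrently, total time is roughly one delay rather than the sum of all three, and they report results through the same shared dictionary without any copying.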
Threads vs. Processes
| Aspect | Thread | Process |
| --- | --- | --- |
| Definition | Lightweight unit of execution | Independent running program |
| Memory | Shares memory with parent process | Has its own memory space |
| Execution | Runs independently within a process | Runs separately |
| Example | Multiple tabs in a browser | Running Chrome, VS Code, and Word |
Multi-Core Programming
Modern CPUs have multiple cores, allowing true parallel execution of threads.
What is Multi-Core Processing?
- A multi-core processor has multiple independent processing units (cores).
- Each core executes a thread independently, so several threads run truly in parallel (and with hyper-threading, a core can interleave more than one).
Example:
A quad-core processor can handle four threads at the same time.
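A common way to use all cores from Python is a process pool, since separate processes sidestep the interpreter's single-threaded execution of CPU-bound code. A minimal sketch (the `heavy` workload is a made-up stand-in for real computation):

```python
import os
from multiprocessing import Pool

def heavy(n):
    """CPU-bound task: sum of squares, a stand-in for real work."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":                    # required on spawn-based platforms
    print("cores available:", os.cpu_count())
    with Pool() as pool:                      # one worker process per core by default
        results = pool.map(heavy, [10_000] * 4)   # four tasks run in parallel
    print(results[0] == heavy(10_000))        # True: workers compute the same result
```

On a quad-core machine the four tasks can genuinely run at the same time, one per core — the "four threads at once" claim above, realized with worker processes.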
Advantages of Multi-Core Programming
Higher Performance – Executes more tasks in parallel.
Energy Efficiency – Uses less power per task compared to single-core CPUs.
Improved Multitasking – Runs multiple applications smoothly.
Challenges in Multi-Core Programming
Thread Synchronization Issues – Threads may interfere with each other.
Load Balancing – Work must be evenly distributed across cores.
Complex Debugging – Difficult to find race conditions and deadlocks.
Multi-Threading Models
Multithreading models define how user threads map to kernel threads.
User Threads vs. Kernel Threads
| Type | Managed By | Context Switching | Speed |
| --- | --- | --- | --- |
| User Threads | User-level libraries | Fast (no kernel involvement) | High |
| Kernel Threads | OS kernel | Slower (requires OS scheduling) | Medium |
Types of Multi-Threading Models
| Model | Description | Example |
| --- | --- | --- |
| Many-to-One | Multiple user threads map to one kernel thread. | Green threads in early Java |
| One-to-One | Each user thread has a corresponding kernel thread. | Windows, Linux Pthreads |
| Many-to-Many | Many user threads map to a pool of kernel threads. | Solaris, Windows thread pools |
Example: A web server uses the Many-to-Many model, where multiple user requests are handled by a limited number of kernel threads.
Scheduling Criteria
Scheduling criteria define how the OS evaluates CPU scheduling algorithms.
Key Scheduling Criteria
| Criterion | Definition | Goal |
| --- | --- | --- |
| CPU Utilization | Keeps the CPU as busy as possible. | Maximize efficiency |
| Throughput | Number of completed processes per second. | Higher is better |
| Turnaround Time | Time taken to complete a process. | Minimize |
| Waiting Time | Time a process spends in the ready queue. | Minimize |
| Response Time | Time from request to first response. | Ensure responsiveness |
Example: A real-time system prioritizes low response time, while a batch system prioritizes high throughput.
CPU Scheduling Algorithms
The OS uses different algorithms to allocate CPU time efficiently.
Preemptive vs. Non-Preemptive Scheduling
| Type | Description | Example |
| --- | --- | --- |
| Preemptive | OS can interrupt a running process. | Round Robin, Priority Scheduling |
| Non-Preemptive | Process runs until completion. | FCFS, SJN |
Types of Scheduling Algorithms
First Come First Serve (FCFS)
- Executes processes in order of arrival.
- Pros: Simple, fair.
- Cons: Can cause Convoy Effect (slow processes delay fast ones).
Example: A print queue processes jobs one by one in order.
Shortest Job Next (SJN) / Shortest Job First (SJF)
- Executes the shortest process first.
- Pros: Minimum average waiting time.
- Cons: Starvation (long jobs may never execute).
Example: Used in batch processing where job length is known.
Round Robin (RR)
- Each process gets a fixed time slice (quantum) before switching.
- Pros: Ensures fair CPU time allocation.
- Cons: High context-switching overhead.
Example: Used in time-sharing systems (e.g., online gaming servers).
Priority Scheduling
- Higher priority processes execute first.
- Pros: Ensures urgent tasks run first.
- Cons: Can cause Starvation (low-priority processes wait indefinitely).
Example: Emergency system prioritizing critical medical alarms.
Multilevel Queue Scheduling
- Divides processes into different queues (foreground, background).
- Each queue has its own scheduling policy (RR for interactive tasks, FCFS for batch jobs).
Example: Used in Windows task scheduling.
Algorithm Evaluation
How do we evaluate which scheduling algorithm is best?
Evaluation Metrics
| Metric | Formula | Goal |
| --- | --- | --- |
| Turnaround Time | Completion Time − Arrival Time | Minimize |
| Waiting Time | Turnaround Time − Burst Time | Minimize |
| Response Time | Time of First Response − Arrival Time | Minimize |
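These formulas are easiest to check on a concrete schedule. The sketch below computes turnaround and waiting time under FCFS for three made-up processes given as (pid, arrival, burst):

```python
def fcfs_metrics(procs):
    """procs: list of (pid, arrival, burst), sorted by arrival (FCFS order).

    Returns {pid: (turnaround, waiting)} using the formulas above.
    """
    time, out = 0, {}
    for pid, arrival, burst in procs:
        time = max(time, arrival)            # CPU may sit idle until arrival
        completion = time + burst
        turnaround = completion - arrival    # Completion Time - Arrival Time
        waiting = turnaround - burst         # Turnaround Time - Burst Time
        out[pid] = (turnaround, waiting)
        time = completion
    return out

print(fcfs_metrics([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]))
# -> {'P1': (5, 0), 'P2': (7, 4), 'P3': (7, 6)}
```

Note how the short job P3 waits 6 time units behind the long job P1 — the Convoy Effect listed as FCFS's drawback in the comparison below.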
Performance Comparison of Scheduling Algorithms
| Algorithm | Best For | Drawbacks |
| --- | --- | --- |
| FCFS | Simple, batch jobs | Long wait times for short jobs |
| SJN | Minimized waiting time | Starvation of long jobs |
| RR | Time-sharing systems | Context-switching overhead |
| Priority | Urgent tasks | Starvation of low-priority tasks |
| Multilevel Queue | Systems with different priority levels | Complex implementation |
Example: Real-time operating systems (e.g., in pacemakers) use priority scheduling, while gaming servers use Round Robin.
Conclusion
CPU scheduling is essential for efficient multitasking and optimal CPU utilization. The OS uses scheduling algorithms to balance performance, fairness, and responsiveness.
Key Takeaways
- Multithreading improves speed and resource sharing.
- Multi-core processing enables parallel execution.
- Scheduling algorithms optimize CPU time allocation.
- Round Robin, FCFS, SJF, and Priority Scheduling are commonly used.
- Algorithm evaluation ensures fairness and efficiency.