Introduction to Operating Systems
Process management is a core concept in Operating Systems that deals with how the operating system creates, schedules, executes, and terminates processes in a computer system.
Concept of Processes & Threads
1. Concept of Process
In an Operating System, a process is a program that is currently executing. It is an active entity that contains the program code and its current activity.
Components of a Process
A process typically consists of:
1. Program Code (Text Section) – Instructions to be executed
2. Program Counter (PC) – Address of the next instruction
3. CPU Registers – Temporary data used during execution
4. Stack – Stores function calls, local variables
5. Heap – Memory used for dynamic allocation
6. Data Section – Global and static variables
Example
When you open applications like:
- Google Chrome
- Microsoft Word
each running application becomes a separate process managed by the operating system.
Characteristics of a Process
- Each process has a unique Process ID (PID)
- Processes have separate memory spaces
- They can run independently
- Process switching requires context switching
2. Concept of Thread
A thread is the smallest unit of CPU execution inside a process. It is sometimes called a lightweight process. Multiple threads can exist within a single process and share the same resources.
Features of Threads
- Share the same memory space
- Share files and resources
- Each thread has its own stack and program counter
- Faster creation and switching compared to processes
Example
In Google Chrome:
- One thread loads the webpage
- Another thread handles user input
- Another thread runs scripts
This allows tasks to run concurrently within the same application.
3. Types of Threads
1. User-Level Threads
- Managed by user-level libraries
- Faster to create and manage
- The OS does not directly manage them
2. Kernel-Level Threads
- Managed directly by the operating system kernel
- The OS schedules them for execution
Example operating systems supporting kernel threads:
- Linux
- Windows
4. Difference between Process and Thread

| Feature | Process | Thread |
|---|---|---|
| Definition | Program in execution | Smallest unit of execution |
| Memory | Separate memory space | Shared memory within process |
| Creation Time | Slower | Faster |
| Communication | Uses IPC | Easier (shared memory) |
| Resource Usage | Heavyweight | Lightweight |
- Process = Independent program running in memory.
- Thread = Smaller execution unit inside a process.
- Threads improve performance and multitasking by allowing parallel execution.
Process State Diagram
In an Operating System, the Process State Diagram shows the different states a process goes through during its lifecycle and how it moves from one state to another.
Basic Process States
1. New – The process is being created.
2. Ready – The process is ready to run and waiting for CPU allocation.
3. Running – The process is currently executing on the CPU.
4. Waiting / Blocked – The process is waiting for an event such as I/O completion.
5. Terminated (Exit) – The process has finished execution or has been stopped.
Process State Diagram (Text Representation)
New → Ready → Running → Terminated
Running → Waiting → Ready (I/O completes)
Running → Ready (preempted)
Explanation of Transitions
- New → Ready: The process is admitted into the ready queue.
- Ready → Running: The CPU scheduler selects the process for execution.
- Running → Waiting: The process waits for I/O or another event.
- Waiting → Ready: The required event completes (e.g., I/O finished).
- Running → Ready: CPU time expires (preemption).
- Running → Terminated: Process execution finishes.
Key Idea
The operating system manages multiple processes by moving them between these states, ensuring efficient CPU utilization and multitasking.
✅ Short Exam Definition:
A process state diagram represents the lifecycle of a process and the transitions between states such as New, Ready, Running, Waiting, and Terminated.
Process Control Block (PCB)
In an Operating System, a Process Control Block (PCB) is a data structure used by the operating system to store all information about a process. Whenever a process is created, the operating system creates a PCB to keep track of that process during its execution.
Definition
A Process Control Block (PCB) is a structure that contains all the information needed by the operating system to manage and control a process. Each process in the system has its own PCB.
Information Stored in a PCB
A typical PCB contains the following details:
1. Process ID (PID)
- A unique number assigned to each process.
2. Process State
- Indicates the current state of the process: New, Ready, Running, Waiting, or Terminated.
3. Program Counter
- Stores the address of the next instruction to be executed.
4. CPU Registers
- Contains the values of registers used during process execution.
5. CPU Scheduling Information
- Includes priority, scheduling queue pointers, and other CPU scheduling parameters.
6. Memory Management Information
- Includes base and limit registers, page tables, and segment tables.
7. I/O Status Information
- List of I/O devices allocated to the process and open files.
8. Accounting Information
- CPU time used, time limits, and process number.
Simple Structure of a PCB
- Process ID (PID)
- Process State
- Program Counter
- CPU Registers
- CPU Scheduling Info
- Memory Management Info
- I/O Status Info
- Accounting Info
Role of the PCB in Context Switching
During a context switch in Linux or Windows:
1. The current process's state is saved in its PCB.
2. The next process's state is loaded from its PCB.
3. The CPU resumes execution of the new process.
Thus, the PCB helps the operating system pause and resume processes efficiently.
✅ Short Exam Answer:
A Process Control Block (PCB) is a data structure used by the operating system to store information about a process, such as process state, program counter, CPU registers, memory management information, and I/O status.
Inter-Process Communication (IPC)
In an Operating System, Inter-Process Communication (IPC) refers to the mechanisms that allow processes to communicate and exchange data with each other. Since processes usually run in separate memory spaces, IPC is needed for data sharing and coordination between them.
Why IPC is Needed
IPC is important for:
- Information sharing between processes
- Speeding up computation using multiple processes
- Resource sharing (files, printers, etc.)
- Process synchronization
For example, applications running on systems like Linux or Windows often communicate using IPC mechanisms.
Types of IPC
There are two main models of IPC:
1. Shared Memory
- Two or more processes share a common memory area.
- Processes read and write data in the shared space.
Features
- Faster communication
- Requires synchronization (semaphores, mutexes)
Example
Process A writes data → Process B reads the same data.
2. Message Passing
Processes communicate by sending and receiving messages.
Two operations:
- Send(message)
- Receive(message)
Features
- Easier to implement
- No shared memory required
Common IPC Mechanisms
1. Pipes – Used for communication between related processes.
2. Message Queues – Messages are stored in a queue until read.
3. Shared Memory – Multiple processes access the same memory region.
4. Semaphores – Used to control access to shared resources.
5. Sockets – Used for communication between processes over a network.
IPC Model Diagram (Simple)
Process A -----> Message / Shared Data -----> Process B
Advantages of IPC
- Enables data exchange between processes
- Improves system efficiency
- Supports parallel processing
- Helps in process synchronization
✅ Short Exam Definition:
Inter-Process Communication (IPC) is a mechanism provided by the operating system that allows processes to communicate and synchronize their actions by exchanging data.