Operating System

Unit 2 — Memory & Device Management

📚 Chapters

🔒 Chapter 1 — Deadlocks
📊 DEADLOCKS — MIND MAP
Deadlock → Circular wait where processes hold resources & wait for others
4 Conditions → Mutual Exclusion, Hold & Wait, No Preemption, Circular Wait
Prevention → Eliminate any one of the 4 conditions
Avoidance → Banker's Algorithm, Resource Allocation Graph
Detection → Wait-for Graph, Detection Algorithm
Recovery → Process termination, Resource preemption

1.1 What is Deadlock?

📖 Definition: A deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for a resource held by another process in the set. No process can proceed — they are all stuck forever.
💡 Real-life Example — Traffic Jam:

Imagine a 4-way intersection where:

  • Car A is blocking Car B's path and waiting for Car C to move
  • Car B is blocking Car C's path and waiting for Car D to move
  • Car C is blocking Car D's path and waiting for Car A to move
  • Car D is blocking Car A's path and waiting for Car B to move

Result → Nobody moves! This circular dependency = Deadlock

💡 OS Example:

Process P1 holds Resource R1 and needs R2.
Process P2 holds Resource R2 and needs R1.
→ Both are waiting for each other → Deadlock!

1.2 Conditions for Deadlock (Coffman Conditions)

⚠️ Key Point: ALL four conditions must hold simultaneously for deadlock to occur. Eliminating even ONE condition prevents deadlock.
1. Mutual Exclusion

At least one resource must be non-shareable — only one process can use it at a time.

Example: A printer can only be used by one process at a time.

2. Hold and Wait

A process is holding at least one resource and waiting to acquire additional resources held by other processes.

Example: P1 holds Printer and is waiting for Scanner.

3. No Preemption

Resources cannot be forcibly taken away from a process. A resource can only be released voluntarily by the process holding it.

Example: OS cannot forcibly take the printer away from P1.

4. Circular Wait

A set of processes {P0, P1, ..., Pn} exists such that P0 is waiting for a resource held by P1, P1 is waiting for P2, ..., and Pn is waiting for P0.

Example: P1→P2→P3→P1 (circular chain)

Circular Wait Diagram
Fig 1.1 — Circular Wait causing Deadlock

1.3 Resource Allocation Graph (RAG)

📖 Definition: A Resource Allocation Graph is a directed graph used to describe deadlocks. It shows which process holds which resource and which resource is requested by which process.
RAG Components:
  • Process Node: Represented by a circle (○) → P1, P2, P3
  • Resource Node: Represented by a rectangle (□) → R1, R2, R3
  • Request Edge: Arrow from Process → Resource (Pi → Rj) means Pi is requesting Rj
  • Assignment Edge: Arrow from Resource → Process (Rj → Pi) means Rj is assigned to Pi
RAG WITHOUT Deadlock:
  (P1) ──request──► [R1] ──assign──► (P2)
  No cycle → no deadlock

RAG WITH Deadlock:
  (P1) ──request──► [R1] ──assign──► (P2)
  (P2) ──request──► [R2] ──assign──► (P1)
  Cycle P1 → R1 → P2 → R2 → P1 → Deadlock!
Fig 1.2 — Resource Allocation Graph Examples
⚠️ RAG Rule:
  • If graph has NO cycle → No deadlock
  • If graph HAS a cycle → Deadlock MAY exist (definitely if single instance per resource)
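💡 Python Sketch: the cycle rule above reduces to cycle detection in a directed graph. A minimal sketch (the `has_cycle` helper and the graph encoding are my own, not from the notes) using depth-first search with the usual white/gray/black coloring:

```python
# Detect a cycle in a resource-allocation graph stored as adjacency lists.
# Nodes are strings ("P1", "R1", ...): an edge P1 -> R1 is a request edge,
# R1 -> P2 an assignment edge.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:     # back edge -> cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# The deadlocked RAG from Fig 1.2: P1 -> R1 -> P2 -> R2 -> P1
deadlocked  = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
no_deadlock = {"P1": ["R1"], "R1": ["P2"], "P2": []}
print(has_cycle(deadlocked), has_cycle(no_deadlock))   # True False
```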

1.4 Deadlock Prevention

📖 Definition: Deadlock prevention means designing the system so that at least one of the four necessary conditions for deadlock can never hold. We eliminate the possibility of deadlock occurring.
✅ Method 1 — Eliminate Mutual Exclusion:

Make resources shareable wherever possible.

Problem: Not all resources can be shared (printer, tape drive)

✅ Method 2 — Eliminate Hold & Wait:

Two approaches:

  • Approach 1: Process must request ALL resources before starting execution. If all available → proceed. Else → wait without holding anything.
  • Approach 2: Process can request resources only when it has NONE.

Problem: Low resource utilization, starvation possible

✅ Method 3 — Allow Preemption:

If a process holding resources requests a new resource that is not available:

  • All currently held resources are preempted (taken away)
  • Process added to waiting list
  • Process restarts only when it gets ALL needed resources

Problem: Only works for resources whose state can be saved (CPU registers, memory)

✅ Method 4 — Eliminate Circular Wait:

Impose a total ordering on all resource types. Process can only request resources in increasing order of enumeration.

Example: R1=1, R2=2, R3=3 → Process must request R1 before R2 before R3. No circular wait possible!
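💡 Python Sketch: the resource-ordering rule above can be enforced in code by giving every lock a rank and always acquiring in increasing rank. A minimal sketch with made-up resource names and ranks:

```python
import threading

# Global ordering: every resource gets a rank; always acquire in rank order.
locks = {"R1": (1, threading.Lock()),
         "R2": (2, threading.Lock()),
         "R3": (3, threading.Lock())}

def acquire_in_order(names):
    """Acquire the named resources in rank order; return the order used."""
    ordered = sorted(names, key=lambda n: locks[n][0])
    for name in ordered:
        locks[name][1].acquire()
    return ordered

def release_all(ordered):
    for name in reversed(ordered):
        locks[name][1].release()

# Even if two tasks ask for {R2, R1} and {R1, R2}, both will lock R1
# before R2 — so a circular wait can never form.
held = acquire_in_order({"R2", "R1"})
print(held)          # ['R1', 'R2']
release_all(held)
```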

❌ Disadvantages of Prevention:
  • Low resource utilization
  • Reduced system throughput
  • Starvation may occur

1.5 Deadlock Avoidance & Safe State

📖 Definition: Deadlock avoidance requires that the OS have additional information about how resources will be requested. The system dynamically examines the resource allocation state to ensure circular wait never exists.
📖 Safe State: A state is safe if the system can allocate resources to each process in some order and still avoid deadlock. A safe sequence {P1, P2, ..., Pn} exists such that for each Pi, the resources Pi can still request can be satisfied by currently available resources plus resources held by all Pj (j < i).
💡 Safe vs Unsafe State:
  • Safe State: System can guarantee completion of all processes → No deadlock
  • Unsafe State: No guarantee → Deadlock MAY occur (not necessarily)
  • Deadlock State: Processes are already stuck → Deadlock occurred
State Relationship Diagram
Fig 1.3 — Safe, Unsafe & Deadlock States

🏦 Banker's Algorithm

📖 Banker's Algorithm: A deadlock avoidance algorithm developed by Dijkstra. It's called Banker's Algorithm because it is analogous to a bank that allocates cash to customers. The bank never allocates cash in a way that leads to an unsafe state.
Data Structures Used:
  • Available[j]: Number of available resources of type j
  • Max[i][j]: Maximum demand of process i for resource type j
  • Allocation[i][j]: Number of resources of type j currently allocated to process i
  • Need[i][j]: Remaining resource need of process i for type j
Key Formula: Need[i][j] = Max[i][j] - Allocation[i][j]
Safety Algorithm (Step-by-step):
  • Step 1: Let Work = Available, Finish[i] = false for all i
  • Step 2: Find index i such that Finish[i] = false AND Need[i] ≤ Work
  • Step 3: Work = Work + Allocation[i], Finish[i] = true → Go to Step 2
  • Step 4: If Finish[i] = true for all i → System is in SAFE STATE
💡 Solved Example (PYQ Pattern):

Given: 5 Processes (P0–P4), 3 Resource Types (A, B, C)
Available: A=3, B=3, C=2

Process | Allocation (A B C) | Max (A B C) | Need (A B C)
P0      | 0 1 0              | 7 5 3       | 7 4 3
P1      | 2 0 0              | 3 2 2       | 1 2 2
P2      | 3 0 2              | 9 0 2       | 6 0 0
P3      | 2 1 1              | 2 2 2       | 0 1 1
P4      | 0 0 2              | 4 3 3       | 4 3 1

Step 1: Work = [3,3,2], Finish = [F,F,F,F,F]

Step 2: Find process where Need ≤ Work

  • P1: Need=[1,2,2] ≤ [3,3,2] ✓ → Work=[3,3,2]+[2,0,0]=[5,3,2], Finish[1]=T
  • P3: Need=[0,1,1] ≤ [5,3,2] ✓ → Work=[5,3,2]+[2,1,1]=[7,4,3], Finish[3]=T
  • P4: Need=[4,3,1] ≤ [7,4,3] ✓ → Work=[7,4,3]+[0,0,2]=[7,4,5], Finish[4]=T
  • P0: Need=[7,4,3] ≤ [7,4,5] ✓ → Work=[7,4,5]+[0,1,0]=[7,5,5], Finish[0]=T
  • P2: Need=[6,0,0] ≤ [7,5,5] ✓ → Work=[7,5,5]+[3,0,2]=[10,5,7], Finish[2]=T

Safe Sequence: P1 → P3 → P4 → P0 → P2 ✅
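💡 Python Sketch: the safety algorithm, run on the same table. A minimal sketch (the function name `safe_sequence` is my own); it returns the indices of processes in a safe order, or None if the state is unsafe:

```python
def safe_sequence(available, allocation, maximum):
    """Banker's safety algorithm: return a safe sequence or None."""
    n = len(allocation)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)           # Step 1: Work = Available
    finish = [False] * n
    order = []
    while len(order) < n:
        progressed = False
        for i in range(n):           # Step 2: find i with Need[i] <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, allocation[i])]  # Step 3
                finish[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None              # no runnable process left -> unsafe
    return order                     # Step 4: all finished -> safe

alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maxm  = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
print(safe_sequence([3,3,2], alloc, maxm))   # [1, 3, 4, 0, 2]
```

The output matches the safe sequence worked out above: P1 → P3 → P4 → P0 → P2.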

1.6 Deadlock Detection

📖 Definition: If neither prevention nor avoidance is used, deadlock may occur. The system must provide an algorithm to detect whether deadlock has occurred and identify the processes/resources involved.

Single Instance Resources — Wait-for Graph

Wait-for Graph:
  • Obtained by removing resource nodes from RAG
  • Pi → Pj means Pi is waiting for Pj to release a resource
  • Deadlock exists if and only if there is a cycle in wait-for graph
  • Algorithm periodically invokes a cycle detection algorithm (O(n²))
RAG:                          Wait-for Graph:
  (P1)──►[R1]──►(P2)            (P1)──────►(P2)
    ▲             │               ▲          │
    └────[R2]◄────┘               └──────────┘
                                Cycle → Deadlock!
Fig 1.4 — RAG to Wait-for Graph conversion

Multiple Instance Resources — Detection Algorithm

Detection Algorithm (Similar to Banker's):
  • Step 1: Work = Available, Finish[i] = false (if Allocation[i] ≠ 0), else true
  • Step 2: Find i such that Finish[i] = false AND Request[i] ≤ Work
  • Step 3: Work = Work + Allocation[i], Finish[i] = true → Go to Step 2
  • Step 4: If Finish[i] = false for some i → Process Pi is DEADLOCKED
⚠️ When to invoke detection algorithm?
  • Every time a request cannot be granted immediately
  • At fixed time intervals (every hour)
  • When CPU utilization drops below 40% (sign of deadlock)

1.7 Recovery from Deadlock

📖 Definition: Once a deadlock has been detected, the system must recover. There are two main options: Process Termination or Resource Preemption.
✅ Method 1 — Process Termination:

Option A: Abort all deadlocked processes

  • Breaks deadlock cycle immediately
  • Very expensive — lost computation work

Option B: Abort one process at a time

  • Abort minimum processes to break deadlock
  • Run detection algorithm after each abort
  • Choose victim: lowest priority, least work done, fewest resources
✅ Method 2 — Resource Preemption:
  • Select a victim: Which resources from which process to preempt? (minimize cost)
  • Rollback: Return process to some safe state, restart from there
  • Starvation: Same process may always be picked as victim → must include number of rollbacks in cost factor
Approach    | Prevention      | Avoidance        | Detection
Strategy    | Deny conditions | Safe state check | Find & recover
Overhead    | High            | Medium           | Low
Utilization | Low             | Medium           | High
Starvation  | Possible        | Possible         | Possible
Algorithm   | Ordering        | Banker's         | Wait-for Graph
💾 Chapter 2 — Memory Management
📊 MEMORY MANAGEMENT — MIND MAP
Address Binding → Compile time, Load time, Execution time
Logical vs Physical → Logical = CPU generated, Physical = actual RAM address
Swapping → Move process between RAM and disk
Contiguous Allocation → Fixed partition, Variable partition
Fragmentation → Internal (wasted inside) vs External (wasted outside)
Paging → Divide memory into fixed frames, process into pages
Segmentation → Divide process into logical segments
Virtual Memory → Run process larger than RAM using disk
Page Replacement → FIFO, LRU, Optimal algorithms

2.1 Address Binding

📖 Definition: Address binding is the process of mapping symbolic addresses (used in program) to actual physical memory addresses. It determines WHEN the final physical address is assigned to program instructions and data.
Type 1 — Compile Time Binding:
  • Physical address known at compile time
  • Absolute code is generated by compiler
  • Process must always load at same memory location
  • If starting address changes → must recompile
  • Example: Early DOS programs, .COM files
Type 2 — Load Time Binding:
  • Physical address determined when process is loaded into memory
  • Compiler generates relocatable code
  • If starting address changes → must reload (not recompile)
  • Example: Programs using relocatable addresses
Type 3 — Execution Time Binding (Runtime Binding):
  • Physical address determined during execution
  • Process can be moved in memory during execution
  • Requires hardware support (MMU — Memory Management Unit)
  • Used by most modern OS
  • Example: Modern Windows, Linux processes
Address Binding Timeline:

Source Code ──► Compile Time ──► Object Code ──► Link/Load Time ──► Execution Time
(symbols)       (binding here if   (relocatable    (binding here if    (binding here if
                 compile-time)      addresses)      load-time)          runtime)
Fig 2.1 — Address Binding Stages

2.2 Logical vs Physical Address Space

📖 Logical Address (Virtual Address): Address generated by the CPU during program execution. It is what the program "sees" and uses. Also called virtual address.
📖 Physical Address: Actual address in physical memory (RAM). This is where data is actually stored. Seen only by memory hardware.
💡 Real-life Analogy:

Logical Address = Your House Number in a city map (what you tell others)

Physical Address = GPS coordinates of your house (actual location)

A mapping system (like GPS) converts house number → GPS coordinates, just like MMU converts logical → physical address.

Feature      | Logical Address        | Physical Address
Generated by | CPU                    | MMU (Hardware)
User visible | Yes                    | No
Also called  | Virtual address        | Real address
Range        | Logical address space  | Physical address space
Binding      | Compile/Load time      | Runtime
MMU Translation:
  • Physical Address = Logical Address + Base Register (Relocation Register)
  • Example: Base Register = 14000, Logical Address = 346 → Physical Address = 14000 + 346 = 14346
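💡 Python Sketch: the relocation-register translation above, with the limit check an MMU would also perform. A minimal sketch (function name and the limit value are illustrative):

```python
def translate(logical, base, limit):
    """Relocation-register MMU: physical = base + logical, with bounds check."""
    if logical >= limit:
        raise MemoryError("trap: logical address beyond limit register")
    return base + logical

# The example from the text: base register 14000, logical address 346
print(translate(346, base=14000, limit=12000))   # 14346
```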

2.3 Dynamic Loading

📖 Definition: A routine (function/module) is not loaded into memory until it is called. The entire program does NOT need to be in memory at start. Unused routines are never loaded.
✅ Advantages:
  • Better memory utilization — only needed routines loaded
  • Useful when large amounts of code rarely used (error handling)
  • No special OS support required — responsibility of programmer
💡 Example: A program has 10 error-handling routines. Without dynamic loading, all 10 are loaded even if only 1 is ever called. With dynamic loading, only the called routine is loaded — saving memory.

2.4 Swapping

📖 Definition: Swapping is a memory management technique where a process is temporarily moved (swapped out) from main memory to a backing store (disk), and later brought back (swapped in) into memory for continued execution.
💡 Real-life Analogy:

Think of RAM as your study table and hard disk as your bookshelf. When your table is full, you keep some books back on the shelf (swap out) and bring them back when needed (swap in).

Swapping Process Diagram
Fig 2.2 — Swapping between RAM and Disk
✅ Advantage: Total physical address space of all processes can exceed actual physical memory → more processes can run.
❌ Disadvantage: Swap time is mainly transfer time. If process is 100MB, swap-out + swap-in = 200MB transfer → very slow!

2.5 Contiguous Memory Allocation

📖 Definition: In contiguous memory allocation, each process occupies a single contiguous section of memory. Memory is divided into partitions for processes.

Fixed Partition (Static)

Fixed Partition Scheme:
  • Memory divided into fixed-size partitions at boot time
  • Each partition holds exactly one process
  • Degree of multiprogramming = number of partitions
  • Problem: Internal fragmentation if process smaller than partition

Variable Partition (Dynamic)

Variable Partition Scheme:
  • OS maintains table of free and allocated memory
  • Hole = block of available memory
  • When process arrives → allocate from a large enough hole

Memory Allocation Strategies

Strategy  | How it works                              | Speed   | Fragmentation
First Fit | Allocate first hole that is big enough    | Fastest | Medium
Best Fit  | Allocate smallest hole that is big enough | Slowest | Least
Worst Fit | Allocate largest available hole           | Slow    | Most

2.6 Fragmentation

📖 Definition: Fragmentation is the condition where memory space is available but cannot be used effectively. It is the waste of memory.
❌ Type 1 — Internal Fragmentation:

Memory allocated to a process is larger than required. Wasted space is INSIDE the allocated partition.

Example: Partition = 100KB, Process needs 82KB → 18KB wasted (internal)

Caused by: Fixed partitioning

❌ Type 2 — External Fragmentation:

Total free memory is enough but not contiguous. Free spaces scattered throughout memory, cannot accommodate process even though enough total space exists.

Example: Free = 30KB + 20KB + 10KB = 60KB, but process needs 50KB contiguous → fails!

Caused by: Variable partitioning

External Fragmentation Diagram
Fig 2.3 — External Fragmentation Example
✅ Solution — Compaction:

Shuffle memory contents to place all free memory together. Creates large block of free space. Possible only if binding is done at execution time.

2.7 Paging

📖 Definition: Paging is a memory management scheme that allows the physical address space of a process to be non-contiguous. Physical memory is divided into fixed-size blocks called frames. Logical memory is divided into blocks of same size called pages.
💡 Real-life Analogy:

Think of a notebook with numbered pages. You can write different parts of your notes on any page — they don't need to be next to each other. A table of contents (page table) tells you which page has which topic. Same way, paging maps logical pages to physical frames.

Key Concepts:
  • Page: Fixed-size block of logical memory (e.g., 4KB)
  • Frame: Fixed-size block of physical memory (same size as page)
  • Page Table: Maps page number to frame number
  • Page Number (p): Index into page table
  • Page Offset (d): Offset within the page
Address Translation:
  • Logical Address: (Page Number p, Offset d)
  • Physical Address: (Frame Number f, Offset d)
  • Page Size: 2^n bytes → offset = n bits
  • Logical Address Space: 2^m bytes → page number = m − n bits
  • Example: Page size = 4KB = 2^12 → offset = 12 bits; 32-bit address → page number = 32 − 12 = 20 bits → 2^20 = 1M pages
Paging Address Translation Diagram
Fig 2.4 — Paging Address Translation
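💡 Python Sketch: the page-number/offset split and the page-table lookup described above, for a 4KB page size. A minimal sketch — the page-table contents here are made up for illustration:

```python
PAGE_SIZE = 4096                 # 4 KB page -> 12 offset bits

def split(logical):
    """Split a logical address into (page number, offset)."""
    return logical // PAGE_SIZE, logical % PAGE_SIZE

def to_physical(logical, page_table):
    """Translate via the page table: frame * page size + offset."""
    page, offset = split(logical)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

# Hypothetical page table: page 0 -> frame 5, page 1 -> frame 2
table = {0: 5, 1: 2}
print(split(4100))               # (1, 4): page 1, offset 4
print(to_physical(4100, table))  # frame 2 * 4096 + 4 = 8196
```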
✅ Advantages of Paging:
  • No external fragmentation
  • Simple memory allocation
  • Easy to implement shared memory
  • Process does not need contiguous memory
❌ Disadvantages:
  • Internal fragmentation (last page may not be full)
  • Page table overhead (extra memory for page table)
  • Two memory accesses per data access (page table + actual data)

TLB — Translation Lookaside Buffer

📖 TLB: A fast hardware cache (associative memory) that stores recently used page-to-frame mappings. Speeds up paging by avoiding page table lookup in most cases.
Effective Access Time (EAT):
  • Formula: EAT = Hit Ratio × (TLB time + Memory time) + (1 - Hit Ratio) × (TLB time + 2 × Memory time)
  • Example: TLB time = 20ns, Memory time = 100ns, Hit ratio = 80%
  • EAT = 0.8 × (20+100) + 0.2 × (20+200) = 0.8 × 120 + 0.2 × 220 = 96 + 44 = 140ns
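💡 Python Sketch: the EAT formula above as a small function, checked against the worked example (function name is my own):

```python
def eat(tlb_ns, mem_ns, hit_ratio):
    """Effective access time with a TLB."""
    hit  = tlb_ns + mem_ns            # TLB hit: one memory access
    miss = tlb_ns + 2 * mem_ns        # TLB miss: page-table access + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

# Example from the text: TLB = 20ns, memory = 100ns, hit ratio = 80%
print(eat(20, 100, 0.80))   # ≈ 140 ns
```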

2.8 Segmentation

📖 Definition: Segmentation is a memory management scheme that supports the programmer's view of memory. A program is a collection of segments — each segment is a logical unit such as main program, function, object, stack, etc.
💡 Real-life Analogy:

Think of a book divided into chapters (Introduction, Theory, Examples, Index). Each chapter is a segment of different length. Segmentation divides a program into meaningful logical parts of different sizes.

Segmentation Address:
  • Logical address = (Segment number s, Offset d)
  • Segment Table: Each entry has Base (starting physical address) and Limit (length of segment)
  • Physical address = Base[s] + d (if d < Limit[s])
Segmentation Address Translation:
  • Logical Address: (Segment s, Offset d)
  • Step 1: Look up Segment Table[s] → get Base and Limit
  • Step 2: If d >= Limit → Trap (Segmentation Fault!)
  • Step 3: Physical Address = Base + d
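💡 Python Sketch: the three-step segment translation above, including the limit trap. The segment table values here are hypothetical, chosen only to illustrate the lookup:

```python
# Segment table as (base, limit) pairs — illustrative values only.
segment_table = [(1400, 1000),   # segment 0
                 (6300,  400),   # segment 1
                 (4300, 1100)]   # segment 2

def seg_translate(s, d):
    """Translate (segment s, offset d) -> physical address, or trap."""
    base, limit = segment_table[s]          # Step 1: table lookup
    if d >= limit:                          # Step 2: bounds check
        raise MemoryError("trap: segmentation fault (offset beyond limit)")
    return base + d                         # Step 3: base + offset

print(seg_translate(2, 53))    # 4300 + 53 = 4353
```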
Feature Paging Segmentation
Division basisFixed size (pages)Logical size (segments)
SizeEqual sizeVariable size
FragmentationInternal onlyExternal only
User visibleNoYes
Address format(page, offset)(segment, offset)
Table usedPage TableSegment Table

Segmentation with Paging

📖 Segmentation with Paging: Combines both techniques. Each segment is further divided into pages. Eliminates external fragmentation (thanks to paging) while keeping the logical view of the program (from segmentation). Used in the Intel x86 architecture.
Address Translation:
  • Logical address → (Segment, Page, Offset)
  • Segment table → base address of page table for that segment
  • Page table → frame number
  • Physical address = Frame number × Page size + Offset

2.9 Virtual Memory Concept

📖 Definition: Virtual memory is a technique that allows the execution of processes that may not be completely in memory. The logical address space can be much larger than physical memory. It gives an illusion of very large memory.
💡 Real-life Analogy:

Imagine your laptop has only 8GB RAM but you can run programs that need 16GB. The OS uses your hard disk as extra "virtual" RAM. You see 16GB of memory even though only 8GB is physically available.

✅ Benefits of Virtual Memory:
  • Programs can be larger than physical memory
  • More programs can run simultaneously
  • Less I/O needed to load/swap processes
  • Efficient process creation (copy-on-write)
Virtual Memory Concept:

  Process View         Physical Memory (RAM)     Disk (Swap Space)
  ┌──────────┐         ┌──────────────────┐      ┌──────────────────┐
  │ Page 0   │────────►│ Frame 3 (Page 0) │      │ Page 1 (on disk) │
  │ Page 1   │         │ Frame 7 (Page 2) │      │ Page 3 (on disk) │
  │ Page 2   │────────►│ Frame 1 (Page 4) │      └──────────────────┘
  │ Page 3   │         │ ...              │
  │ Page 4   │────────►└──────────────────┘
  └──────────┘
  (16GB virtual)       (8GB physical)            Pages not in RAM are on disk!
Fig 2.5 — Virtual Memory Mapping

2.10 Demand Paging

📖 Definition: Demand paging is a type of virtual memory management where pages are loaded into memory only when they are demanded (accessed) — not all at once. If page is not in memory, a page fault occurs and the OS loads the page.
Page Fault Handling Steps:
  • Step 1: CPU accesses a page → check page table
  • Step 2: If valid bit = 0 → Page Fault! Trap to OS
  • Step 3: OS finds page on disk (backing store)
  • Step 4: Find free frame in memory
  • Step 5: Bring page from disk to frame
  • Step 6: Update page table (valid bit = 1)
  • Step 7: Restart the instruction that caused page fault
Effective Access Time (EAT) with Demand Paging:
  • Formula: EAT = (1 - p) × memory access time + p × page fault time
  • Where: p = page fault rate (0 ≤ p ≤ 1); p = 0 → no page faults (ideal); p = 1 → every access is a page fault
  • Example: Memory access = 200ns, Page fault time = 8ms = 8,000,000ns, p = 0.001 (1 in 1000)
  • EAT = 0.999 × 200 + 0.001 × 8,000,000 = 199.8 + 8000 = 8199.8 ns ≈ 8.2 microseconds
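💡 Python Sketch: the demand-paging EAT formula as a function, checked against the worked example (function name is my own):

```python
def demand_eat(mem_ns, fault_ns, p):
    """EAT under demand paging: (1 - p) * memory time + p * page-fault time."""
    return (1 - p) * mem_ns + p * fault_ns

# Example from the text: memory = 200ns, page fault = 8ms, p = 0.001
print(demand_eat(200, 8_000_000, 0.001))   # ≈ 8199.8 ns
```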

2.11 Page Replacement & Algorithms

📖 Page Replacement: When a page fault occurs and no free frame is available, the OS must replace an existing page in memory with the required page. The algorithm decides WHICH page to replace.
⚠️ Goal: Minimize the number of page faults. Choose the page to replace that will result in the fewest future page faults.

Algorithm 1 — FIFO (First In First Out)

📖 FIFO: Replace the page that has been in memory the longest (first one that came in is first to go out).
💡 Solved Example:

Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
Frames = 3

Ref:  7  0  1  2  0  3  0  4  2  3  0  3  2
────────────────────────────────────────────
F1:   7  7  7  2  2  2  2  4  4  4  0  0  0
F2:   -  0  0  0  0  3  3  3  2  2  2  2  2
F3:   -  -  1  1  1  1  0  0  0  3  3  3  3
────────────────────────────────────────────
PF:   ✓  ✓  ✓  ✓  -  ✓  ✓  ✓  ✓  ✓  ✓  -  -

Total Page Faults = 10
❌ Belady's Anomaly: For FIFO, increasing number of frames may actually increase page faults! (This is the anomaly.)

Algorithm 2 — Optimal Page Replacement (OPT)

📖 OPT/MIN: Replace the page that will NOT be used for the longest period of time in the future. Gives minimum page faults — but requires future knowledge (not practically implementable). Used as benchmark.
💡 Solved Example:

Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 | Frames = 3

Strategy: Replace the page used FARTHEST in the future

Ref:  7  0  1  2  0  3  0  4  2  3  0  3  2
────────────────────────────────────────────
F1:   7  7  7  2  2  2  2  2  2  2  2  2  2
F2:   -  0  0  0  0  0  0  4  4  4  0  0  0
F3:   -  -  1  1  1  3  3  3  3  3  3  3  3
────────────────────────────────────────────
PF:   ✓  ✓  ✓  ✓  -  ✓  -  ✓  -  -  ✓  -  -

Total Page Faults = 7 (minimum possible!)

Algorithm 3 — LRU (Least Recently Used)

📖 LRU: Replace the page that has not been used for the longest time in the past. Based on the principle of locality — recently used pages will likely be used again soon.
💡 Solved Example:

Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 | Frames = 3

Strategy: Replace the page NOT used for the longest time

Ref:  7  0  1  2  0  3  0  4  2  3  0  3  2
────────────────────────────────────────────
F1:   7  7  7  2  2  2  2  4  4  4  0  0  0
F2:   -  0  0  0  0  0  0  0  0  3  3  3  3
F3:   -  -  1  1  1  3  3  3  2  2  2  2  2
────────────────────────────────────────────
PF:   ✓  ✓  ✓  ✓  -  ✓  -  ✓  ✓  ✓  ✓  -  -

Total Page Faults = 9
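💡 Python Sketch: all three algorithms as small simulators, run on the same reference string with 3 frames. A minimal sketch (function names are my own); it reproduces the fault counts of the worked examples:

```python
def fifo_faults(refs, frames):
    """FIFO: evict the page that entered memory earliest."""
    mem, queue, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.pop(0))     # evict oldest arrival
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    """LRU: evict the page unused for the longest time (list = recency order)."""
    mem, faults = [], 0                       # most recently used is last
    for p in refs:
        if p in mem:
            mem.remove(p)                     # hit: refresh recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                    # evict least recently used
        mem.append(p)
    return faults

def opt_faults(refs, frames):
    """OPT: evict the page whose next use is farthest in the future (or never)."""
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                rest = refs[i + 1:]
                def next_use(q):
                    return rest.index(q) if q in rest else float("inf")
                mem.discard(max(mem, key=next_use))
            mem.add(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3), opt_faults(refs, 3))  # 10 9 7
```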
Algorithm | Replaces              | Page Faults | Practical? | Anomaly
FIFO      | Oldest page in memory | Most        | Yes        | Belady's
OPT       | Farthest future use   | Minimum     | No         | None
LRU       | Least recently used   | Medium      | Yes        | None
💿 Chapter 3 — Device Management
📊 DEVICE MANAGEMENT — MIND MAP
Disk Structure → Platter, Track, Sector, Cylinder, Head
Disk Formatting → Low-level, Partition, Logical formatting
Disk Scheduling → Minimize seek time → FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK
Seek Time → Time to move head to correct track
Rotational Latency → Time for sector to rotate under head
Transfer Time → Time to actually transfer data

3.1 Disk Structure

📖 Definition: A magnetic disk consists of circular platters coated with magnetic material. Data is stored on the surface of these platters and read/written by read-write heads.
Key Disk Components:
  • Platter: Circular disk coated with magnetic material. A disk may have multiple platters stacked together.
  • Track: Concentric circles on each platter surface. Each platter has hundreds of tracks.
  • Sector: Smallest unit of storage. Each track is divided into sectors (typically 512 bytes or 4KB).
  • Cylinder: Set of tracks at same position on all platters (same arm position across all surfaces).
  • Read-Write Head: Reads/writes data. One head per platter surface. All heads move together.
  • Disk Arm: Moves heads to correct track position.
Disk Structure Diagram
Fig 3.1 — Disk Structure
Disk Access Time:
  • Formula: Total Access Time = Seek Time + Rotational Latency + Transfer Time
  • Seek Time: Time to move arm to correct track
  • Rotational Latency: Time for sector to rotate under head
  • Transfer Time: Time to read/write actual data
  • Average Rotational Latency: 1/2 × rotation time (sector is halfway around on average)

3.2 Disk Formatting

📖 Definition: Disk formatting is the process of preparing a disk for use by an operating system. It involves multiple stages to make the disk ready for data storage.
Stage 1 — Low-Level Formatting (Physical Formatting):
  • Divides disk into sectors that disk controller can read/write
  • Each sector = Header + Data area + Trailer (ECC)
  • Done at manufacturing time
  • ECC (Error Correcting Code) helps detect/correct errors
Stage 2 — Partition (Dividing Disk):
  • Divides disk into one or more groups of cylinders
  • Each partition treated as separate disk by OS
  • Example: C: drive, D: drive on Windows
  • Partition info stored in Partition Table
Stage 3 — Logical Formatting (High-Level Formatting):
  • Creates file system on partition
  • Stores initial file-system data structures onto disk
  • Creates FAT/inode structures, root directory, free space list
  • Example: Format as NTFS, FAT32, ext4
⚠️ Boot Block: To boot OS, the computer first runs a tiny bootstrap program stored in ROM/EEPROM. This loads the full bootstrap loader from a fixed location on disk (Boot Block / MBR — Master Boot Record).

3.3 Disk Scheduling Algorithms

📖 Definition: Disk scheduling algorithms determine the order in which disk I/O requests are serviced to minimize seek time and maximize disk throughput.
⚠️ Common Example Setup (used in all algorithms below):

Head starts at: 53
Request Queue: 98, 183, 37, 122, 14, 124, 65, 67
Disk range: 0 to 199

Algorithm 1 — FCFS (First Come First Served)

📖 FCFS: Service requests in the order they arrive. Simplest algorithm — no starvation but may result in a lot of head movement.
💡 FCFS Example:
Head starts at 53
Order: 53 → 98 → 183 → 37 → 122 → 14 → 124 → 65 → 67

Movement:
   53 → 98  =  45
   98 → 183 =  85
  183 → 37  = 146
   37 → 122 =  85
  122 → 14  = 108
   14 → 124 = 110
  124 → 65  =  59
   65 → 67  =   2
  ─────────────────
  Total     = 640 cylinders
❌ Disadvantage: Does not optimize head movement. Wild swings across disk → poor performance.

Algorithm 2 — SSTF (Shortest Seek Time First)

📖 SSTF: Service the request closest to current head position. Like SJF for disks — always pick nearest request.
💡 SSTF Example:
Head starts at 53
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Step 1: Closest to 53  → 65  (dist = 12)
Step 2: Closest to 65  → 67  (dist = 2)
Step 3: Closest to 67  → 37  (dist = 30)
Step 4: Closest to 37  → 14  (dist = 23)
Step 5: Closest to 14  → 98  (dist = 84)
Step 6: Closest to 98  → 122 (dist = 24)
Step 7: Closest to 122 → 124 (dist = 2)
Step 8: Closest to 124 → 183 (dist = 59)
─────────────────────────────────────────
Order: 53 → 65 → 67 → 37 → 14 → 98 → 122 → 124 → 183
Total = 236 cylinders (much better than FCFS!)
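💡 Python Sketch: the FCFS and SSTF totals above can be checked with a small seek-distance calculator (function names are my own):

```python
def total_seek(start, order):
    """Sum of head movements when servicing tracks in the given order."""
    dist, pos = 0, start
    for track in order:
        dist += abs(track - pos)
        pos = track
    return dist

def sstf_order(start, queue):
    """SSTF: greedily service the request nearest the current head."""
    pending, pos, order = list(queue), start, []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(total_seek(53, queue))                  # FCFS order: 640
print(total_seek(53, sstf_order(53, queue)))  # SSTF order: 236
```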
❌ Disadvantage: Starvation possible — requests far from current head may wait forever if closer requests keep arriving.

Algorithm 3 — SCAN (Elevator Algorithm)

📖 SCAN: Disk arm moves in one direction servicing all requests until it reaches end of disk, then reverses direction. Like an elevator going up then down.
💡 SCAN Example (moving towards higher):
Head starts at 53, moving towards higher numbers
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Moving UP (towards 199): 53 → 65 → 67 → 98 → 122 → 124 → 183 → 199 (end of disk)
Then reverses, moving DOWN: 199 → 37 → 14

Order: 53 → 65 → 67 → 98 → 122 → 124 → 183 → 199 → 37 → 14
Total = (199 − 53) + (199 − 14) = 146 + 185 = 331 cylinders
(The arm sweeps all the way to the disk end before reversing.)
✅ No starvation — every request eventually served.

Algorithm 4 — C-SCAN (Circular SCAN)

📖 C-SCAN: Like SCAN but it only services requests while moving in ONE direction (e.g., low to high). When it reaches the end, it jumps back to the beginning without servicing any requests on the return. Gives a more uniform wait time than SCAN.
💡 C-SCAN Example:
Head at 53, moving towards higher numbers
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Moving UP: 53 → 65 → 67 → 98 → 122 → 124 → 183 → 199 (end)
Jump to 0 (no requests serviced on the return)
Moving UP again: 0 → 14 → 37

Order: 53 → 65 → 67 → 98 → 122 → 124 → 183 → 199 → 0 → 14 → 37
Total = (199 − 53) + (199 − 0) + (37 − 0) = 146 + 199 + 37 = 382 cylinders

Algorithm 5 — LOOK

📖 LOOK: Like SCAN, but arm only goes as far as the last request in each direction — does NOT go all the way to end of disk. More efficient than SCAN.
💡 LOOK Example:
Head at 53, moving towards higher numbers
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Moving UP (only to 183, NOT to 199): 53 → 65 → 67 → 98 → 122 → 124 → 183
Reverse, moving DOWN (only to 14, NOT to 0): 183 → 37 → 14

Order: 53 → 65 → 67 → 98 → 122 → 124 → 183 → 37 → 14
Total = (183 − 53) + (183 − 14) = 130 + 169 = 299 cylinders
(Same pattern as SCAN, but the arm never travels to 199 or 0)

Algorithm 6 — C-LOOK

📖 C-LOOK: Combination of C-SCAN and LOOK. Services requests in one direction only up to last request, then jumps back to first request (not beginning of disk). Most efficient in practice.
💡 C-LOOK Example:
Head at 53, moving towards higher numbers
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Moving UP (to last request = 183): 53 → 65 → 67 → 98 → 122 → 124 → 183
Jump back to the lowest pending request (14), not to 0: 183 → 14 (no service during jump)
Continue UP: 14 → 37

Order: 53 → 65 → 67 → 98 → 122 → 124 → 183 → 14 → 37
Total = (183 − 53) + (183 − 14) + (37 − 14) = 130 + 169 + 23 = 322 cylinders
Algorithm | Movement                         | Starvation | Total (Example) | Best For
FCFS      | Request order                    | No         | 640             | Light load
SSTF      | Nearest first                    | Yes        | 236             | Performance
SCAN      | Both directions, full sweep      | No         | 331             | Heavy load
C-SCAN    | One direction, jump back         | No         | ~382            | Uniform wait
LOOK      | Both directions, to last request | No         | 299             | Better than SCAN
C-LOOK    | One direction, jump to first     | No         | ~322            | Best practical
Ready for the exam? Finished reading everything? Now do the Quick Revision — formulas, key points and common mistakes, all in one place!
Quick Revision — Last Minute Exam Prep!
📌 How to use: Read this 10 minutes before the exam. All key points, formulas and exam tricks in one place!

🔒 Chapter 1 — Deadlocks

📖 What is Deadlock?
A situation where a set of processes are permanently blocked — each process holds a resource and waits for a resource held by another. No process can ever proceed.

💡 Analogy: Two cars facing each other on a single-lane bridge. Neither can move forward or backward — permanent block!
🔑 4 Necessary Conditions (ALL must hold — Remember "MHNC"):

Mutual Exclusion — Resource used by only ONE process at a time
Hold & Wait — Process holds resources while waiting for more
No Preemption — Cannot forcibly take resource from a process
Circular Wait — P1 waits for P2, P2 waits for P3, P3 waits for P1

⚠️ Exam Trick: Break any ONE condition → Deadlock CANNOT occur!
✅ Deadlock Handling Strategies (4 approaches):

1. Prevention: Ensure at least one condition never holds (most restrictive)
2. Avoidance: Use Banker's Algorithm — check before granting (needs max info)
3. Detection + Recovery: Allow deadlock, then detect & kill/preempt
4. Ignore: Ostrich Algorithm — assume it won't happen (used in most OS!)
🏦 Banker's Algorithm — Key Formula:

Need[i][j] = Max[i][j] − Allocation[i][j]

Safety Algorithm Steps:
1. Set Work = Available, Finish[i] = false for all
2. Find process i where Need[i] ≤ Work → execute it
3. Work = Work + Allocation[i], Finish[i] = true
4. Repeat until all Finish[i] = true → SAFE STATE ✓

Example: Available=[3,3,2], P2 Need=[0,1,0] ≤ [3,3,2] → P2 runs first!
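💡 The four safety steps above can be sketched in a few lines of Python. This is a minimal sketch for revision, not production code; the function name `safe_sequence` is my own, and the data is the example's (P1–P3 indexed 0–2).

```python
# Minimal sketch of the Banker's safety algorithm (steps 1-4 above).

def safe_sequence(available, allocation, need):
    """Return one safe sequence of process indices, or None if unsafe."""
    n = len(allocation)
    work = list(available)          # Step 1: Work = Available
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            # Step 2: find an unfinished process whose Need fits in Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Step 3: it runs to completion and releases its allocation
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None             # nobody can proceed -> unsafe state
    return sequence                 # Step 4: all Finish[i] = true -> SAFE

available  = [3, 3, 2]
allocation = [[1, 0, 0], [0, 1, 0], [1, 1, 1]]   # P1, P2, P3
maximum    = [[3, 2, 2], [0, 2, 0], [2, 2, 2]]
need = [[m - a for m, a in zip(mr, ar)] for mr, ar in zip(maximum, allocation)]
print(safe_sequence(available, allocation, need))  # [0, 1, 2] -> P1, P2, P3
```

Note that several safe sequences can exist: P1's Need [2,2,2] also fits in Available [3,3,2], so P1 → P2 → P3 is safe just like the P2-first order in the example.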
💡 RAG (Resource Allocation Graph) — Quick Rules:

• Cycle + Single instance per resource = DEADLOCK always
• Cycle + Multiple instances per resource = POSSIBLE deadlock only ⚠️
• No cycle = NO DEADLOCK

Recovery Methods:
• Process Termination — Kill process(es) to break circular wait
• Resource Preemption — Forcibly take resources, rollback to checkpoint
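💡 The single-instance rule above (cycle = deadlock) means detection reduces to finding a cycle in the wait-for graph. A minimal DFS sketch, with the function name and the example graph my own:

```python
# Deadlock detection on a wait-for graph (single instance per resource).
# Edge "P -> Q" means process P waits for a resource held by Q.

def has_cycle(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:        # back edge -> cycle -> deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2, P2 waits for P3, P3 waits for P1 -> circular wait
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```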
DEADLOCK QUICK FORMULA:
• Need = Max − Allocation
• Safe State → a safe sequence exists in which all processes can finish
• Prevention: Break any 1 of 4 conditions (MHNC)
• Avoidance: Banker's Algorithm (needs max resource declaration)
• Detection: Wait-for Graph (single instance) | Detection Algorithm (multiple instances)

💾 Chapter 2 — Memory Management

📖 Address Binding — When does CPU know the actual address?

1. Compile Time: Address fixed at compile → program always at same location (Old)
2. Load Time: Address decided when loaded → can be placed anywhere (Better)
3. Execution Time: Address changes during runtime using MMU (Modern OS — Best)

💡 Analogy: Compile time = pen on paper. Load time = pencil. Execution time = GPS (updates live)!
📐 Paging — Key Formulas (EXAM FAVOURITE!):

Logical Address = Page No. (p) | Offset (d)
Physical Address = Frame No. (f) | Offset (d)

Page bits = log₂(number of pages)
Offset bits = log₂(page size)
Frame bits = log₂(number of frames)

Physical = Base + Logical (relocation register — simple contiguous allocation, not paging)
TLB EAT = h×(t+m) + (1−h)×(t+2m)
h = hit ratio, t = TLB access time, m = memory access time
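💡 The TLB EAT formula in code, with illustrative numbers (t = 20 ns TLB lookup, m = 100 ns memory access, h = 0.9 hit ratio — my own example values, not from a specific question):

```python
# Effective access time with a TLB: hit costs t+m, miss costs t+2m
# (one extra memory access to read the page table on a miss).

def tlb_eat(h, t, m):
    return h * (t + m) + (1 - h) * (t + 2 * m)

print(tlb_eat(0.9, 20, 100))  # 0.9*120 + 0.1*220 = 130.0 ns
```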
📦 Fragmentation — Never Confuse!

Internal Fragmentation: Wasted space INSIDE a partition (fixed size partitions)
→ Note: Paging reduces but does not eliminate it — the last page of a process may be partly empty

External Fragmentation: Wasted space BETWEEN partitions (variable partitions)
→ Solution: Paging ELIMINATES external fragmentation ✓
→ Also: Compaction (costly), Segmentation + Paging

💡 Trick: Internal = INSIDE the block | External = OUTSIDE between blocks
✅ Page Replacement Algorithms — Quick Compare:
Algorithm   Rule                          Page Faults*   Belady's?
──────────────────────────────────────────────────────────────────
FIFO        Replace oldest page           10             YES ✗
LRU         Replace least recently used   9              NO ✓
OPT         Replace farthest future use   7 (min)        NO ✓
*For reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2 | 3 frames
OPT = not practical (needs future knowledge) | Used only as benchmark
Belady's Anomaly = FIFO only: more frames → MORE page faults (counterintuitive!)
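💡 The fault counts in the comparison above can be verified with a small simulator. This is a minimal sketch (function names my own), not an optimized implementation:

```python
# Count page faults for FIFO, LRU and OPT on a reference string.

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), [], 0
    for r in refs:
        if r not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.pop(0))   # evict the oldest page
            frames.add(r)
            queue.append(r)
    return faults

def lru_faults(refs, nframes):
    frames, faults = [], 0                     # ordered by recency, LRU first
    for r in refs:
        if r in frames:
            frames.remove(r)
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)                  # evict least recently used
        frames.append(r)                       # r is now most recently used
    return faults

def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, r in enumerate(refs):
        if r not in frames:
            faults += 1
            if len(frames) == nframes:
                # evict the page used farthest in the future (or never again)
                def next_use(p):
                    rest = refs[i + 1:]
                    return rest.index(p) if p in rest else float("inf")
                frames.discard(max(frames, key=next_use))
            frames.add(r)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3), opt_faults(refs, 3))  # 10 9 7
```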
💡 Virtual Memory — Quick Points:

What: Illusion of more RAM using disk as extended memory
How: Demand Paging — load page only when accessed (lazy loading)
Page Fault: Page not in RAM → OS loads from disk → update page table → restart instruction

EAT formula: EAT = (1−p)×m + p×page_fault_time
p = page fault rate | m = memory access time

Thrashing: Too many page faults → more time swapping than executing → CPU utilization drops sharply!
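💡 The demand-paging EAT formula above shows numerically why thrashing is so destructive. A tiny sketch with illustrative assumed values (m = 200 ns memory access, 8 ms page-fault service time — my own numbers, not from a question):

```python
# EAT = (1 - p) * m + p * page_fault_time

def demand_paging_eat(p, m, fault_time):
    return (1 - p) * m + p * fault_time

# Even a 0.1% fault rate makes memory ~40x slower (8199.8 ns vs 200 ns):
print(demand_paging_eat(0.001, 200, 8_000_000))  # 8199.8
```

Disk service times are millions of nanoseconds, so even a tiny p dominates the EAT — this is exactly why thrashing collapses CPU utilization.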
PAGING FORMULAS:
• Logical Address bits = log₂(pages) + log₂(page size)
• Physical Address bits = log₂(frames) + log₂(page size)
• Physical = Base + Logical (for simple relocation)
• TLB EAT = h×(t+m) + (1−h)×(t+2m)

PAGE REPLACEMENT (ref: 7,0,1,2,0,3,0,4,2,3,0,3,2 | 3 frames):
• FIFO → 10 page faults (oldest replaced) — Belady's possible!
• OPT → 7 page faults (farthest future) — MINIMUM, not practical
• LRU → 9 page faults (least recent) — no Belady's
• Belady's Anomaly → FIFO ONLY: more frames = more faults (sometimes)

💿 Chapter 3 — Device / Disk Management

📖 Disk Access Time Components:

Total Access Time = Seek Time + Rotational Latency + Transfer Time

Seek Time: Arm moves to correct track (~5-10ms) — DOMINATES
Rotational Latency: Disk rotates to correct sector (~4ms avg)
Transfer Time: Actual read/write of data (small, ~1ms)

💡 Goal: Minimize seek time = minimize total access time!
🗂️ Disk Formatting — 3 Stages:

Stage 1 — Low-level (Physical): Factory divides disk into tracks & sectors. Adds headers and ECC (error correction codes).
Stage 2 — Partition: User divides disk into logical drives (C:, D:, E:). Each partition is independent.
Stage 3 — Logical (High-level): Creates file system (NTFS, ext4, FAT32). Defines how files are stored and accessed.

💡 Analogy: Raw land → divide into plots → build houses on each plot!
✅ Disk Scheduling Algorithms — Quick Compare (Remember "FSSCLC"):
Algorithm   Works By                           Starvation?   Best For
─────────────────────────────────────────────────────────────────────
FCFS        Arrival order                      NO            Fairness (slow)
SSTF        Closest request first              YES ✗         Performance
SCAN        Elevator, both ways                NO            General use
C-SCAN      One direction, jump back           NO            Uniform wait
LOOK        SCAN but stops at last request     NO            Better SCAN
C-LOOK      C-SCAN but stops at last request   NO            Best practical
⚠️ SSTF vs SCAN: SSTF = fastest but can starve far requests | SCAN = elevator (never gets stuck)
SCAN vs LOOK: SCAN goes to END of disk | LOOK stops at last REQUEST (more efficient)
C-SCAN vs C-LOOK: C-SCAN jumps from disk end to 0 | C-LOOK jumps from last request to first request
DISK SCHEDULING EXAMPLE (Head = 53, Queue = 98, 183, 37, 122, 14, 124, 65, 67):
• FCFS → 53→98→183→37→122→14→124→65→67 = 640 cylinders
• SSTF → 53→65→67→37→14→98→122→124→183 = 236 cylinders (min!)
• SCAN → 53→65→67→98→122→124→183→199→37→14 = 331 cylinders
• C-SCAN → 53→65→67→98→122→124→183→199→0→14→37 = ~382 cylinders
REMEMBER: SSTF = Best performance BUT starvation possible!
C-LOOK = Best practical choice in real systems

⚠️ Common Exam Mistakes

❌ Deadlocks:
  • Don't say "any 1 condition causes deadlock" — ALL 4 must hold simultaneously
  • RAG cycle + multiple instances ≠ definite deadlock (only possible)
  • In Banker's: Need = Max − Allocation (NOT the other way!)
  • Unsafe state ≠ deadlock — unsafe only means deadlock is POSSIBLE, not that it has occurred
❌ Memory Management:
  • Internal fragmentation → fixed partition size (NOT variable)
  • External fragmentation → variable partition size (NOT fixed)
  • Paging eliminates EXTERNAL fragmentation (still has small internal)
  • Belady's Anomaly → FIFO only, NOT LRU or OPT
  • OPT is NOT practical — needs future knowledge (used as benchmark only)
❌ Disk Scheduling:
  • SSTF ≠ SCAN — SSTF can starve far requests, SCAN cannot
  • SCAN goes to END of disk | LOOK only goes to LAST REQUEST (don't mix up!)
  • C-SCAN jumps from end to 0 (no service on return trip)
  • C-LOOK jumps from last request to first request (not to 0)

🎯 Last Minute Exam Tips

✅ For Numerical Questions:
  • Banker's: Always draw table with Allocation, Max, Need, Available columns
  • Page Replacement: Show step-by-step frame contents, mark page faults with ✓
  • Disk Scheduling: Show movement order AND total cylinders moved
  • Address bits: Always use log₂(n) formula — show working clearly!
✅ For Theory Questions:
  • Always give definition + example (the standard formula for full 2 marks)
  • Use comparison tables wherever possible — examiners love it!
  • Draw RAG, process state diagrams, paging translation diagram
  • Write formulas before solving numericals
  • 2 Mark tip: Definition (1 line) + Example (1 line) = Full marks ✓
🔥 MUST REVISE (High Priority Topics):
  1. Banker's Algorithm — Need matrix + Safety sequence (GUARANTEED numerical)
  2. Page Replacement — FIFO/LRU/OPT step-by-step (GUARANTEED)
  3. Disk Scheduling — SCAN/SSTF/C-SCAN calculation
  4. Paging — Logical/Physical address bits calculation
  5. 4 Deadlock conditions + RAG cycle rules
  6. Belady's Anomaly — definition + which algorithm
  7. TLB purpose + EAT formula
  8. Internal vs External Fragmentation difference
💡 Power Mnemonics:
  • Deadlock Conditions: MHNC (Mutual Exclusion, Hold & Wait, No Preemption, Circular Wait)
  • Disk Algorithms: FSSCLC (FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK)
  • Page Replacement: FOL (FIFO=oldest, OPT=future, LRU=recent)
  • Address Binding: CLE (Compile, Load, Execution time)
  • Fragmentation: Internal=INSIDE block | External=BETWEEN blocks
Important Questions — PYQ Based
📌 Note: These questions are based on actual Mid Semester Test papers (24CST-205 & 23CST-303, 2025-26). Click on any question to see the clear, point-wise answer.

Section A — 2 Marks Questions (Short Answer)

2M PYQ 2025-26
Q1. List two methods of deadlock recovery.
Answer:
1. Process Termination: Kill one or more deadlocked processes to break the circular wait. Either abort all deadlocked processes (costly) or abort one at a time until deadlock is resolved.
2. Resource Preemption: Forcibly take resources from some processes and give them to others. The preempted process is rolled back to a safe state (checkpointing).
2M PYQ 2025-26
Q2. Explain the working principle of SCAN disk scheduling.
SCAN (Elevator Algorithm): The disk arm moves in one direction (e.g., lower → higher track numbers), servicing all requests along the way. When it reaches the end of the disk, it reverses direction and services the remaining requests on the way back. (The LOOK variant reverses at the last request instead of the disk end.)

Example: Head at 53, Queue: 65, 37, 98, 14 (moving up):
Order: 53 → 65 → 98 → 37 → 14
Key Point: No starvation — every request is eventually served. Works like a building elevator.
2M PYQ 2025-26
Q3. Which page replacement algorithm suffers from Belady's anomaly and why?
Algorithm: FIFO (First In First Out) page replacement suffers from Belady's Anomaly.

Why? In FIFO, the oldest page is replaced first. When more frames are added, the replacement pattern changes unpredictably — more frames can cause MORE page faults instead of fewer. This is counterintuitive.

Note: This anomaly does NOT occur in LRU and OPT (they are "stack algorithms").
2M PYQ 2025-26
Q4. Explain why the Optimal page replacement algorithm cannot be implemented in practice.
Optimal (OPT) Algorithm: Replaces the page that will not be used for the longest time in the future.

Why not practical? It requires future knowledge — the OS must know in advance which pages will be referenced next. This is impossible in real systems since future process behavior cannot be predicted at runtime.

Use: OPT is used only as a benchmark to compare the performance of other page replacement algorithms.
2M PYQ 2025-26
Q5. Explain whether the presence of a cycle in a Resource Allocation Graph always indicates a deadlock. Give an example.
Answer: NO — not always!

Rule:
• If each resource type has only 1 instance → Cycle = Deadlock ✅
• If resource type has multiple instances → Cycle = Possible deadlock only ⚠️

Example (cycle but no deadlock):
R1 has 2 instances. P1 holds one instance of R1 and waits for R2; P2 holds R2 and waits for R1 — cycle: P1 → R2 → P2 → R1 → P1. Meanwhile P3 holds the other instance of R1 and needs nothing else.
P3 finishes → releases its R1 instance → P2 gets it and finishes → cycle broken → No deadlock despite the cycle.
2M PYQ 2025-26
Q6. Define the purpose of TLB in memory management.
TLB (Translation Lookaside Buffer): A small, super-fast hardware cache that stores recently used page-to-frame mappings from the page table.

Purpose:
1. Reduces address translation time (avoids 2 memory accesses)
2. On TLB hit → get frame number in just 1 access ✓
3. Significantly improves overall memory performance

EAT formula: EAT = h×(t+m) + (1−h)×(t+2m)   [h=hit ratio, t=TLB time, m=memory time]
2M PYQ 2025-26
Q7. Consider a logical address space of 64 pages of 1024 words each, mapped onto a physical memory of 32 frames. (a) How many bits in the logical address? (b) How many bits in the physical address?
Given: 64 pages, 1024 words/page, 32 frames

(a) Logical Address:
• Page bits = log₂(64) = 6 bits
• Offset bits = log₂(1024) = 10 bits
Total Logical Address = 16 bits

(b) Physical Address:
• Frame bits = log₂(32) = 5 bits
• Offset bits = 10 bits (same as page size)
Total Physical Address = 15 bits
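💡 The bit calculations in Q7 (and Q16 below) follow the same two-line recipe, sketched here with a helper name of my own:

```python
# Logical address bits = log2(pages) + log2(page size)
# Physical address bits = log2(frames) + log2(page size)
from math import log2

def addr_bits(pages, page_size, frames):
    offset = int(log2(page_size))
    return int(log2(pages)) + offset, int(log2(frames)) + offset

print(addr_bits(64, 1024, 32))    # (16, 15) — Q7's answer
print(addr_bits(16, 1024, 128))   # (14, 17) — Q16's answer
```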
2M PYQ 2025-26
Q8. Given a disk queue: {95, 180, 34, 119, 11, 123, 62, 64}. Initial head position = 50. Calculate the total head movement using FCFS.
FCFS: Serve requests in the exact order they arrive.
Order: 50 → 95 → 180 → 34 → 119 → 11 → 123 → 62 → 64
Movement:
|95−50|  = 45
|180−95| = 85
|34−180| = 146
|119−34| = 85
|11−119| = 108
|123−11| = 112
|62−123| = 61
|64−62|  = 2
──────────────
Total = 644 cylinders
Answer: 644 cylinders
2M PYQ 2025-26
Q9. Explain with example: (a) Worst Fit (b) Best Fit
(a) Worst Fit: Allocates the process to the largest available free block. Leaves maximum remaining free space — useful to leave big holes for future large processes.

(b) Best Fit: Allocates the process to the smallest block that is still large enough. Minimizes wasted space but tends to create many tiny holes.

Example: Free blocks = [100KB, 300KB, 500KB], Process = 200KB
• Best Fit → uses 300KB block → 100KB leftover
• Worst Fit → uses 500KB block → 300KB leftover
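💡 The two placement policies from the example, as a minimal sketch (function names my own; block sizes in KB):

```python
# Best Fit: smallest free block that still fits.
# Worst Fit: largest free block overall, if it fits at all.

def best_fit(blocks, size):
    fits = [b for b in blocks if b >= size]
    return min(fits) if fits else None

def worst_fit(blocks, size):
    biggest = max(blocks)
    return biggest if biggest >= size else None

blocks = [100, 300, 500]
print(best_fit(blocks, 200), worst_fit(blocks, 200))  # 300 500
```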
2M PYQ 2025-26
Q10. Define Belady's Anomaly.
Belady's Anomaly: The counterintuitive phenomenon where increasing the number of page frames results in MORE page faults (instead of fewer) for the same reference string.

Occurs in: FIFO page replacement only
Does NOT occur in: LRU, OPT (stack-based algorithms)

Classic Example: Reference string: 1,2,3,4,1,2,5,1,2,3,4,5
3 frames → 9 faults  |  4 frames → 10 faults ← More frames, more faults!
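💡 The classic example can be reproduced with a tiny FIFO simulator (a minimal sketch, function name my own):

```python
# FIFO page faults: demonstrate Belady's Anomaly on the classic string.

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), [], 0
    for r in refs:
        if r not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.pop(0))   # evict the oldest page
            frames.add(r)
            queue.append(r)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 — more frames, MORE faults!
```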
2M Expected
Q11. What is Deadlock? State any two necessary conditions for deadlock.
Deadlock: A situation where a set of processes are permanently blocked — each process holds a resource and waits for a resource held by another process. No process can ever proceed.

Two necessary conditions:
1. Mutual Exclusion: At least one resource is non-shareable — only one process can use it at a time.
2. Hold and Wait: A process holds at least one resource and waits to acquire more resources held by other processes.
2M Expected
Q12. Define Safe State. How is it related to deadlock avoidance?
Safe State: A state is safe if the system can allocate resources to each process in some order (safe sequence) such that all processes can eventually complete without deadlock.

Relation: Deadlock avoidance ensures the system never moves into an unsafe state. Before granting any resource request, OS checks if the new state is safe. If unsafe → request is denied temporarily.
2M Expected
Q13. Differentiate between Logical Address and Physical Address.
Logical Address                Physical Address
─────────────────────────────────────────────────────────
Generated by CPU               Generated by MMU hardware
Visible to programmer          Not visible to user/program
Also called Virtual Address    Also called Real Address
Mapped using page table        Actual location in RAM
2M Expected
Q14. What is Internal Fragmentation and External Fragmentation?
Internal Fragmentation: Memory allocated to a process is MORE than required. Wasted space is inside the allocated partition.
Example: Partition = 100KB, Process = 82KB → 18KB wasted inside.

External Fragmentation: Total free memory is enough but NOT contiguous. Cannot accommodate a process even though enough total space exists.
Example: Free = 20KB + 15KB + 10KB = 45KB but process needs 40KB contiguous → fails.
2M Expected
Q15. What is Paging? What is the role of a Page Table?
Paging: A memory management scheme that divides physical memory into fixed-size blocks called frames and logical memory into same-size blocks called pages. Eliminates external fragmentation.

Page Table: A data structure that maps each page number (logical) to a frame number (physical). When CPU generates a logical address (page, offset), the page table finds the corresponding frame to form the physical address.

Section B — 5 Marks Questions (Long Answer)

5M PYQ 2025-26
Q16. Consider a logical address space of 16 pages of 1024 bytes each, mapped into memory of 128 frames. (i) Pictorially represent the logical and physical address. (ii) Compute number of bits in the logical address. (iii) Compute number of bits in the physical address.
Given: Pages = 16, Page Size = 1024 bytes, Frames = 128

(ii) Logical Address Bits:
• Page number bits = log₂(16) = 4 bits
• Offset bits = log₂(1024) = 10 bits
• Total Logical Address = 4 + 10 = 14 bits

(iii) Physical Address Bits:
• Frame number bits = log₂(128) = 7 bits
• Offset bits = 10 bits (same as page size)
• Total Physical Address = 7 + 10 = 17 bits

(i) Pictorial Representation:
Logical Address (14 bits):
┌──────────────┬──────────────────┐
│ Page No (4)  │ Offset (10 bits) │
└──────────────┴──────────────────┘

Physical Address (17 bits):
┌──────────────┬──────────────────┐
│ Frame No (7) │ Offset (10 bits) │
└──────────────┴──────────────────┘
5M PYQ 2025-26
Q17. Suggest the trade-offs between deadlock prevention and deadlock avoidance strategies.
Deadlock Prevention: Ensures at least one of the four necessary conditions cannot hold — so deadlock is structurally impossible.

Deadlock Avoidance: Allows conditions to exist but uses algorithms (like Banker's) to ensure the system never enters an unsafe state.

Aspect                   Prevention                  Avoidance
─────────────────────────────────────────────────────────────────────────────
Approach                 Eliminate one condition     Monitor & control allocations
Resource Utilization     Poor (resources wasted)     Better utilization
Overhead                 Low                         High (safety check per request)
Flexibility              Very restrictive            More flexible
Prior Knowledge Needed   No                          Yes (max resource needs)
Example                  Request all at once         Banker's Algorithm
Conclusion: Prevention is simpler but wastes resources. Avoidance is smarter but needs advance information about each process's maximum resource needs.
5M PYQ 2025-26
Q18. Why do we need Paging in Operating System? Explain its advantages and disadvantages.
Why Paging? Paging solves the problem of external fragmentation in contiguous memory allocation. It allows physical memory to be allocated non-contiguously while the process sees a continuous logical address space.

✅ Advantages:
1. No external fragmentation — frames can be anywhere in RAM
2. Simple memory management — fixed size makes allocation easy
3. Enables virtual memory — processes can be larger than physical RAM
4. Easy protection — each page can have read/write/execute flags
5. Sharing is easy — shared pages (shared libraries) map to same frame

❌ Disadvantages:
1. Internal fragmentation — last page may not fill completely
2. Page table overhead — large programs need large page tables consuming extra memory
3. Slower access without TLB — requires 2 memory accesses per instruction
5M PYQ 2025-26
Q19. A disk drive has 400 cylinders (0 to 399). Current head = 143. Previous request = 125. Queue (FIFO): 86, 147, 312, 91, 177, 48, 309, 222, 175, 130. Find total head movement for: (1) SSTF (2) SCAN (3) C-SCAN.
Head at 143, moving towards HIGHER (since previous=125 < current=143)
Sorted queue: 48, 86, 91, 130, 147, 175, 177, 222, 309, 312

1. SSTF (Shortest Seek Time First) — serve the closest request from the current head position:
Order: 143→147→130→91→86→48→175→177→222→309→312
= 4+17+39+5+38+127+2+45+87+3 = 367 cylinders

2. SCAN (moving towards higher, goes to disk end 399):
Order: 143→147→175→177→222→309→312→399→130→91→86→48
= 4+28+2+45+87+3+87+269+39+5+38 = 607 cylinders

3. C-SCAN (one direction only, jumps from 399 to 0):
Order: 143→147→175→177→222→309→312→399→0→48→86→91→130
= 4+28+2+45+87+3+87+399+48+38+5+39 = 785 cylinders
SSTF = 367 | SCAN = 607 | C-SCAN = 785 cylinders
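💡 The SSTF total can be double-checked with a short greedy sketch (function name my own): at each step, serve the pending request nearest the current head.

```python
# SSTF: repeatedly move to the closest pending request.

def sstf(start, requests):
    pending, pos, total = list(requests), start, 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
    return total

print(sstf(143, [86, 147, 312, 91, 177, 48, 309, 222, 175, 130]))  # 367
```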
5M Expected
Q20. Explain Banker's Algorithm with example. Given: 3 processes P1, P2, P3 and 3 resource types A, B, C. Available = [3,3,2]. Allocation: P1=[1,0,0], P2=[0,1,0], P3=[1,1,1]. Max: P1=[3,2,2], P2=[0,2,0], P3=[2,2,2]. Find safe sequence.
Step 1 — Calculate Need = Max − Allocation:
ProcessAlloc A,B,CMax A,B,CNeed A,B,C
P11,0,03,2,22,2,2
P20,1,00,2,00,1,0
P31,1,12,2,21,1,1
Step 2 — Safety Algorithm:
Work = [3,3,2], Finish = [F, F, F]

• P2: Need=[0,1,0] ≤ [3,3,2] ✓ → Work=[3,4,2], Finish[P2]=T
• P3: Need=[1,1,1] ≤ [3,4,2] ✓ → Work=[4,5,3], Finish[P3]=T
• P1: Need=[2,2,2] ≤ [4,5,3] ✓ → Work=[5,5,3], Finish[P1]=T

✅ Safe Sequence: P2 → P3 → P1 (System is SAFE)
5M Expected
Q21. Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2 with 3 frames. Apply FIFO and OPT page replacement. Find total page faults for each.
FIFO — Replace oldest page in memory:
Ref: 7 0 1 2 0 3 0 4 2 3 0 3 2
F1:  7 7 7 2 2 2 2 4 4 4 0 0 0
F2:  - 0 0 0 0 3 3 3 2 2 2 2 2
F3:  - - 1 1 1 1 0 0 0 3 3 3 3
PF:  ✓ ✓ ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - -

Total FIFO Page Faults = 10 (count the ✓ marks)

OPT — Replace page used farthest in future:
Ref: 7 0 1 2 0 3 0 4 2 3 0 3 2
F1:  7 7 7 2 2 2 2 2 2 2 2 2 2
F2:  - 0 0 0 0 0 0 4 4 4 0 0 0
F3:  - - 1 1 1 3 3 3 3 3 3 3 3
PF:  ✓ ✓ ✓ ✓ - ✓ - ✓ - - ✓ - -

Total OPT Page Faults = 7 (minimum possible for this string!)