Operating System

Unit 2 — Memory & Device Management

📚 Chapters

🔒 Chapter 1 — Deadlocks
📊 DEADLOCKS — MIND MAP
Deadlock → Circular wait where processes hold resources & wait for others
4 Conditions → Mutual Exclusion, Hold & Wait, No Preemption, Circular Wait
Prevention → Eliminate any one of the 4 conditions
Avoidance → Banker's Algorithm, Resource Allocation Graph
Detection → Wait-for Graph, Detection Algorithm
Recovery → Process termination, Resource preemption

1.1 What is Deadlock?

📖 Definition: A deadlock is a situation where a set of processes are blocked because each process is holding a resource and waiting for a resource held by another process in the set. No process can proceed — they are all stuck forever.
💡 Real-life Example — Traffic Jam:

Imagine a 4-way intersection where:

  • Car A is blocking Car B's path and waiting for Car C to move
  • Car B is blocking Car C's path and waiting for Car D to move
  • Car C is blocking Car D's path and waiting for Car A to move
  • Car D is blocking Car A's path and waiting for Car B to move

Result → Nobody moves! This circular dependency = Deadlock

💡 OS Example:

Process P1 holds Resource R1 and needs R2.
Process P2 holds Resource R2 and needs R1.
→ Both are waiting for each other → Deadlock!

1.2 Conditions for Deadlock (Coffman Conditions)

⚠️ Key Point: ALL four conditions must hold simultaneously for deadlock to occur. Eliminating even ONE condition prevents deadlock.
1. Mutual Exclusion

At least one resource must be non-shareable — only one process can use it at a time.

Example: A printer can only be used by one process at a time.

2. Hold and Wait

A process is holding at least one resource and waiting to acquire additional resources held by other processes.

Example: P1 holds Printer and is waiting for Scanner.

3. No Preemption

Resources cannot be forcibly taken away from a process. A resource can only be released voluntarily by the process holding it.

Example: OS cannot forcibly take the printer away from P1.

4. Circular Wait

A set of processes {P0, P1, ..., Pn} exists such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., and Pn is waiting for a resource held by P0.

Example: P1→P2→P3→P1 (circular chain)

Circular Wait Diagram
Fig 1.1 — Circular Wait causing Deadlock

1.3 Resource Allocation Graph (RAG)

📖 Definition: A Resource Allocation Graph is a directed graph used to describe deadlocks. It shows which process holds which resource and which resource is requested by which process.
RAG Components:
  • Process Node: Represented by a circle (○) → P1, P2, P3
  • Resource Node: Represented by a rectangle (□) → R1, R2, R3
  • Request Edge: Arrow from Process → Resource (Pi → Rj) means Pi is requesting Rj
  • Assignment Edge: Arrow from Resource → Process (Rj → Pi) means Rj is assigned to Pi
RAG WITHOUT Deadlock:

(P1) ──request──► [R1] ──assign──► (P2)

No cycle → no deadlock

RAG WITH Deadlock:

(P1) ──request──► [R1] ──assign──► (P2)
  ▲                                  │
  └──assign── [R2] ◄────request─────┘

Cycle P1→R1→P2→R2→P1 → Deadlock!
Fig 1.2 — Resource Allocation Graph Examples
⚠️ RAG Rule:
  • If graph has NO cycle → No deadlock
  • If graph HAS a cycle → Deadlock MAY exist (definitely if single instance per resource)

1.4 Deadlock Prevention

📖 Definition: Deadlock prevention means designing the system so that at least one of the four necessary conditions for deadlock can never hold. We eliminate the possibility of deadlock occurring.
✅ Method 1 — Eliminate Mutual Exclusion:

Make resources shareable wherever possible.

Problem: Not all resources can be shared (printer, tape drive)

✅ Method 2 — Eliminate Hold & Wait:

Two approaches:

  • Approach 1: Process must request ALL resources before starting execution. If all available → proceed. Else → wait without holding anything.
  • Approach 2: Process can request resources only when it has NONE.

Problem: Low resource utilization, starvation possible

✅ Method 3 — Allow Preemption:

If a process holding resources requests a new resource that is not available:

  • All currently held resources are preempted (taken away)
  • Process added to waiting list
  • Process restarts only when it gets ALL needed resources

Problem: Only works for resources whose state can be saved (CPU registers, memory)

✅ Method 4 — Eliminate Circular Wait:

Impose a total ordering on all resource types. Process can only request resources in increasing order of enumeration.

Example: R1=1, R2=2, R3=3 → Process must request R1 before R2 before R3. No circular wait possible!
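The resource-ordering idea can be sketched in code. A minimal Python illustration, assuming hypothetical resources R1–R3 guarded by locks with fixed ranks (`lock_rank` and `use_resources` are names chosen here, not a standard API):

```python
import threading

# Hypothetical resources, each guarded by a lock with a fixed rank.
# Always acquiring locks in ascending rank order makes circular wait impossible.
lock_rank = {"R1": 1, "R2": 2, "R3": 3}
locks = {name: threading.Lock() for name in lock_rank}

def use_resources(*names):
    """Acquire the named resources in global rank order, then release."""
    ordered = sorted(names, key=lambda n: lock_rank[n])
    for n in ordered:
        locks[n].acquire()
    try:
        return f"used {'+'.join(ordered)}"
    finally:
        for n in reversed(ordered):
            locks[n].release()

# Both callers may ask for R1 and R2 in different textual orders,
# but both actually lock R1 before R2 → no circular wait.
print(use_resources("R2", "R1"))  # used R1+R2
```

Even if two threads call `use_resources("R1", "R2")` and `use_resources("R2", "R1")` concurrently, both lock R1 first, so the hold-and-wait cycle can never close.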

❌ Disadvantages of Prevention:
  • Low resource utilization
  • Reduced system throughput
  • Starvation may occur

1.5 Deadlock Avoidance & Safe State

📖 Definition: Deadlock avoidance requires that the OS have additional information about how resources will be requested. The system dynamically examines the resource allocation state to ensure circular wait never exists.
📖 Safe State: A state is safe if the system can allocate resources to each process in some order and still avoid deadlock. A safe sequence {P1, P2, ..., Pn} exists such that for each Pi, the resources Pi can still request can be satisfied by currently available resources plus resources held by all Pj (j < i).
💡 Safe vs Unsafe State:
  • Safe State: System can guarantee completion of all processes → No deadlock
  • Unsafe State: No guarantee → Deadlock MAY occur (not necessarily)
  • Deadlock State: Processes are already stuck → Deadlock occurred
State Relationship Diagram
Fig 1.3 — Safe, Unsafe & Deadlock States

🏦 Banker's Algorithm

📖 Banker's Algorithm: A deadlock avoidance algorithm developed by Dijkstra. It's called Banker's Algorithm because it is analogous to a bank that allocates cash to customers. The bank never allocates cash in a way that leads to an unsafe state.
Data Structures Used:
  • Available[j]: Number of available resources of type j
  • Max[i][j]: Maximum demand of process i for resource type j
  • Allocation[i][j]: Number of resources of type j currently allocated to process i
  • Need[i][j]: Remaining resource need of process i for type j
Key Formula: Need[i][j] = Max[i][j] - Allocation[i][j]
Safety Algorithm (Step-by-step):
  • Step 1: Let Work = Available, Finish[i] = false for all i
  • Step 2: Find index i such that Finish[i] = false AND Need[i] ≤ Work
  • Step 3: Work = Work + Allocation[i], Finish[i] = true → Go to Step 2
  • Step 4: If Finish[i] = true for all i → System is in SAFE STATE
💡 Solved Example (PYQ Pattern):

Given: 5 Processes (P0–P4), 3 Resource Types (A, B, C)
Available: A=3, B=3, C=2

Process   Alloc (A B C)   Max (A B C)   Need (A B C)
P0        0 1 0           7 5 3         7 4 3
P1        2 0 0           3 2 2         1 2 2
P2        3 0 2           9 0 2         6 0 0
P3        2 1 1           2 2 2         0 1 1
P4        0 0 2           4 3 3         4 3 1

Step 1: Work = [3,3,2], Finish = [F,F,F,F,F]

Step 2: Find process where Need ≤ Work

  • P1: Need=[1,2,2] ≤ [3,3,2] ✓ → Work=[3,3,2]+[2,0,0]=[5,3,2], Finish[1]=T
  • P3: Need=[0,1,1] ≤ [5,3,2] ✓ → Work=[5,3,2]+[2,1,1]=[7,4,3], Finish[3]=T
  • P4: Need=[4,3,1] ≤ [7,4,3] ✓ → Work=[7,4,3]+[0,0,2]=[7,4,5], Finish[4]=T
  • P0: Need=[7,4,3] ≤ [7,4,5] ✓ → Work=[7,4,5]+[0,1,0]=[7,5,5], Finish[0]=T
  • P2: Need=[6,0,0] ≤ [7,5,5] ✓ → Work=[7,5,5]+[3,0,2]=[10,5,7], Finish[2]=T

Safe Sequence: P1 → P3 → P4 → P0 → P2 ✅
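The safety algorithm above can be written as a small Python function. This is a minimal sketch using the worked example's data (`safety_check` is a name chosen here):

```python
# Banker's safety algorithm: repeatedly find a process whose Need
# fits in Work, pretend it finishes, and reclaim its allocation.
def safety_check(available, allocation, need):
    n = len(allocation)
    work = list(available)       # Step 1: Work = Available
    finish = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):       # Step 2: find i with Need[i] <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Step 3: Pi can finish — reclaim its allocation
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                sequence.append(i)
                progress = True
    return (all(finish), sequence)   # Step 4: all finished → SAFE

available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
max_demand = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
need = [[max_demand[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]

print(safety_check(available, allocation, need))  # (True, [1, 3, 4, 0, 2])
```

The returned sequence [1, 3, 4, 0, 2] is exactly the safe sequence P1 → P3 → P4 → P0 → P2 derived above.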

1.6 Deadlock Detection

📖 Definition: If neither prevention nor avoidance is used, deadlock may occur. The system must provide an algorithm to detect whether deadlock has occurred and identify the processes/resources involved.

Single Instance Resources — Wait-for Graph

Wait-for Graph:
  • Obtained by removing resource nodes from RAG
  • Pi → Pj means Pi is waiting for Pj to release a resource
  • Deadlock exists if and only if there is a cycle in wait-for graph
  • Algorithm periodically invokes a cycle detection algorithm (O(n²))
RAG:

(P1) ──► [R1] ──► (P2)
  ▲                 │
  └───── [R2] ◄─────┘

Wait-for Graph (resource nodes removed):

(P1) ──────► (P2)
  ▲            │
  └────────────┘

Cycle → Deadlock!
Fig 1.4 — RAG to Wait-for Graph conversion
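Cycle detection on a wait-for graph is a plain depth-first search. A minimal sketch, with the graph given as an adjacency dict (`has_cycle` is a name chosen here):

```python
# Pi -> Pj means Pi is waiting for Pj. A cycle means deadlock
# (guaranteed, for single-instance resources). Classic DFS with colors.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {node: WHITE for node in graph}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v) == GRAY:              # back edge → cycle
                return True
            if color.get(v) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# The situation from Fig 1.4: P1 waits for P2, P2 waits for P1.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True  → deadlock
print(has_cycle({"P1": ["P2"], "P2": []}))       # False → no deadlock
```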

Multiple Instance Resources — Detection Algorithm

Detection Algorithm (Similar to Banker's):
  • Step 1: Work = Available, Finish[i] = false (if Allocation[i] ≠ 0), else true
  • Step 2: Find i such that Finish[i] = false AND Request[i] ≤ Work
  • Step 3: Work = Work + Allocation[i], Finish[i] = true → Go to Step 2
  • Step 4: If Finish[i] = false for some i → Process Pi is DEADLOCKED
⚠️ When to invoke detection algorithm?
  • Every time a request cannot be granted immediately
  • At fixed time intervals (every hour)
  • When CPU utilization drops below 40% (sign of deadlock)

1.7 Recovery from Deadlock

📖 Definition: Once a deadlock has been detected, the system must recover. There are two main options: Process Termination or Resource Preemption.
✅ Method 1 — Process Termination:

Option A: Abort all deadlocked processes

  • Breaks deadlock cycle immediately
  • Very expensive — lost computation work

Option B: Abort one process at a time

  • Abort minimum processes to break deadlock
  • Run detection algorithm after each abort
  • Choose victim: lowest priority, least work done, fewest resources
✅ Method 2 — Resource Preemption:
  • Select a victim: Which resources from which process to preempt? (minimize cost)
  • Rollback: Return process to some safe state, restart from there
  • Starvation: Same process may always be picked as victim → must include number of rollbacks in cost factor
Approach      Prevention        Avoidance          Detection
Strategy      Deny conditions   Safe state check   Find & recover
Overhead      High              Medium             Low
Utilization   Low               Medium             High
Starvation    Possible          Possible           Possible
Algorithm     Ordering          Banker's           Wait-for Graph
💾 Chapter 2 — Memory Management
📊 MEMORY MANAGEMENT — MIND MAP
Address Binding → Compile time, Load time, Execution time
Logical vs Physical → Logical = CPU generated, Physical = actual RAM address
Swapping → Move process between RAM and disk
Contiguous Allocation → Fixed partition, Variable partition
Fragmentation → Internal (wasted inside) vs External (wasted outside)
Paging → Divide memory into fixed frames, process into pages
Segmentation → Divide process into logical segments
Virtual Memory → Run process larger than RAM using disk
Page Replacement → FIFO, LRU, Optimal algorithms

2.1 Address Binding

📖 Definition: Address binding is the process of mapping symbolic addresses (used in program) to actual physical memory addresses. It determines WHEN the final physical address is assigned to program instructions and data.
Type 1 — Compile Time Binding:
  • Physical address known at compile time
  • Absolute code is generated by compiler
  • Process must always load at same memory location
  • If starting address changes → must recompile
  • Example: Early DOS programs, .COM files
Type 2 — Load Time Binding:
  • Physical address determined when process is loaded into memory
  • Compiler generates relocatable code
  • If starting address changes → must reload (not recompile)
  • Example: Programs using relocatable addresses
Type 3 — Execution Time Binding (Runtime Binding):
  • Physical address determined during execution
  • Process can be moved in memory during execution
  • Requires hardware support (MMU — Memory Management Unit)
  • Used by most modern OS
  • Example: Modern Windows, Linux processes
Address Binding Timeline:

Source         Compile        Object          Link/Load        Execution
Code   ─────►   Time   ─────►  Code   ─────►    Time    ─────►    Time
(symbols)      (if compile    (relocatable    (if load         (if runtime
                time binding)  addresses)      time binding)     binding)
Fig 2.1 — Address Binding Stages

2.2 Logical vs Physical Address Space

📖 Logical Address (Virtual Address): Address generated by the CPU during program execution. It is what the program "sees" and uses. Also called virtual address.
📖 Physical Address: Actual address in physical memory (RAM). This is where data is actually stored. Seen only by memory hardware.
💡 Real-life Analogy:

Logical Address = Your House Number in a city map (what you tell others)

Physical Address = GPS coordinates of your house (actual location)

A mapping system (like GPS) converts house number → GPS coordinates, just like MMU converts logical → physical address.

Feature        Logical Address         Physical Address
Generated by   CPU                     MMU (Hardware)
User visible   Yes                     No
Also called    Virtual address         Real address
Range          Logical address space   Physical address space
Binding        Compile/Load time       Runtime
MMU Translation:
  • Physical Address = Logical Address + Base Register (Relocation Register)
  • Example: Base Register = 14000, Logical Address = 346 → Physical Address = 14000 + 346 = 14346
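The base-register translation is a one-line computation plus a bounds check. A minimal sketch (`mmu_translate` is a name chosen here; the limit value is illustrative):

```python
# Relocation-register translation: the MMU adds the base (relocation)
# register to every CPU-generated logical address; a limit register
# guards against out-of-range accesses.
def mmu_translate(logical, base, limit):
    if logical >= limit:
        raise MemoryError("logical address beyond limit register - trap!")
    return base + logical

print(mmu_translate(346, base=14000, limit=12000))  # 14346
```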

2.3 Dynamic Loading

📖 Definition: A routine (function/module) is not loaded into memory until it is called. The entire program does NOT need to be in memory at start. Unused routines are never loaded.
✅ Advantages:
  • Better memory utilization — only needed routines loaded
  • Useful when large amounts of code rarely used (error handling)
  • No special OS support required — responsibility of programmer
💡 Example: A program has 10 error-handling routines. Without dynamic loading, all 10 are loaded even if only 1 is ever called. With dynamic loading, only the called routine is loaded — saving memory.

2.4 Swapping

📖 Definition: Swapping is a memory management technique where a process is temporarily moved (swapped out) from main memory to a backing store (disk), and later brought back (swapped in) into memory for continued execution.
💡 Real-life Analogy:

Think of RAM as your study table and hard disk as your bookshelf. When your table is full, you keep some books back on the shelf (swap out) and bring them back when needed (swap in).

Swapping Process Diagram
Fig 2.2 — Swapping between RAM and Disk
✅ Advantage: Total physical address space of all processes can exceed actual physical memory → more processes can run.
❌ Disadvantage: Swap time is mainly transfer time. If process is 100MB, swap-out + swap-in = 200MB transfer → very slow!

2.5 Contiguous Memory Allocation

📖 Definition: In contiguous memory allocation, each process occupies a single contiguous section of memory. Memory is divided into partitions for processes.

Fixed Partition (Static)

Fixed Partition Scheme:
  • Memory divided into fixed-size partitions at boot time
  • Each partition holds exactly one process
  • Degree of multiprogramming = number of partitions
  • Problem: Internal fragmentation if process smaller than partition

Variable Partition (Dynamic)

Variable Partition Scheme:
  • OS maintains table of free and allocated memory
  • Hole = block of available memory
  • When process arrives → allocate from a large enough hole

Memory Allocation Strategies

Strategy    How it works                                Speed     Fragmentation
First Fit   Allocate first hole that is big enough      Fastest   Medium
Best Fit    Allocate smallest hole that is big enough   Slowest   Least
Worst Fit   Allocate largest available hole             Slow      Most

2.6 Fragmentation

📖 Definition: Fragmentation is the condition where memory space is available but cannot be used effectively. It is the waste of memory.
❌ Type 1 — Internal Fragmentation:

Memory allocated to a process is larger than required. Wasted space is INSIDE the allocated partition.

Example: Partition = 100KB, Process needs 82KB → 18KB wasted (internal)

Caused by: Fixed partitioning

❌ Type 2 — External Fragmentation:

Total free memory is enough but not contiguous. Free spaces scattered throughout memory, cannot accommodate process even though enough total space exists.

Example: Free = 30KB + 20KB + 10KB = 60KB, but process needs 50KB contiguous → fails!

Caused by: Variable partitioning

External Fragmentation Diagram
Fig 2.3 — External Fragmentation Example
✅ Solution — Compaction:

Shuffle memory contents to place all free memory together. Creates large block of free space. Possible only if binding is done at execution time.

2.7 Paging

📖 Definition: Paging is a memory management scheme that allows the physical address space of a process to be non-contiguous. Physical memory is divided into fixed-size blocks called frames. Logical memory is divided into blocks of same size called pages.
💡 Real-life Analogy:

Think of a notebook with numbered pages. You can write different parts of your notes on any page — they don't need to be next to each other. A table of contents (page table) tells you which page has which topic. Same way, paging maps logical pages to physical frames.

Key Concepts:
  • Page: Fixed-size block of logical memory (e.g., 4KB)
  • Frame: Fixed-size block of physical memory (same size as page)
  • Page Table: Maps page number to frame number
  • Page Number (p): Index into page table
  • Page Offset (d): Offset within the page
Address Translation:
  • Logical Address: (Page Number p, Offset d)
  • Physical Address: (Frame Number f, Offset d)
  • Page Size: 2^n bytes → offset = n bits
  • Logical Address Space: 2^m bytes → page number = m - n bits
  • Example: Page size = 4KB = 2^12 bytes → offset = 12 bits; 32-bit address → page number = 32 - 12 = 20 bits → 2^20 = 1M pages
Paging Address Translation Diagram
Fig 2.4 — Paging Address Translation
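The page-number/offset split above is just bit arithmetic. A minimal sketch assuming 4KB pages and a small hypothetical page table:

```python
# Split a logical address into (page number, offset), look up the
# frame, and rebuild the physical address.
PAGE_SIZE = 4096      # 4KB = 2^12 bytes
OFFSET_BITS = 12

def translate(logical_addr, page_table):
    page = logical_addr >> OFFSET_BITS        # high bits = page number
    offset = logical_addr & (PAGE_SIZE - 1)   # low 12 bits = offset
    frame = page_table[page]
    return (frame << OFFSET_BITS) | offset

# Hypothetical page table: page 0 -> frame 5, page 1 -> frame 2.
page_table = {0: 5, 1: 2}
print(hex(translate(0x1ABC, page_table)))  # 0x2abc (page 1, offset 0xABC, frame 2)
```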
✅ Advantages of Paging:
  • No external fragmentation
  • Simple memory allocation
  • Easy to implement shared memory
  • Process does not need contiguous memory
❌ Disadvantages:
  • Internal fragmentation (last page may not be full)
  • Page table overhead (extra memory for page table)
  • Two memory accesses per data access (page table + actual data)

TLB — Translation Lookaside Buffer

📖 TLB: A fast hardware cache (associative memory) that stores recently used page-to-frame mappings. Speeds up paging by avoiding page table lookup in most cases.
Effective Access Time (EAT):
  • Formula: EAT = Hit Ratio × (TLB time + Memory time) + (1 - Hit Ratio) × (TLB time + 2 × Memory time)
  • Example: TLB time = 20ns, Memory time = 100ns, Hit ratio = 80%
  • EAT = 0.8 × (20+100) + 0.2 × (20+200) = 0.8 × 120 + 0.2 × 220 = 96 + 44 = 140ns
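The EAT formula can be checked with a tiny helper (the function name is illustrative):

```python
# Effective Access Time with a TLB (all times in ns).
def eat(tlb_ns, mem_ns, hit_ratio):
    hit = hit_ratio * (tlb_ns + mem_ns)              # TLB hit: one memory access
    miss = (1 - hit_ratio) * (tlb_ns + 2 * mem_ns)   # miss: page table + data
    return hit + miss

print(round(eat(tlb_ns=20, mem_ns=100, hit_ratio=0.80), 1))  # 140.0
```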

2.8 Segmentation

📖 Definition: Segmentation is a memory management scheme that supports the programmer's view of memory. A program is a collection of segments — each segment is a logical unit such as main program, function, object, stack, etc.
💡 Real-life Analogy:

Think of a book divided into chapters (Introduction, Theory, Examples, Index). Each chapter is a segment of different length. Segmentation divides a program into meaningful logical parts of different sizes.

Segmentation Address:
  • Logical address = (Segment number s, Offset d)
  • Segment Table: Each entry has Base (starting physical address) and Limit (length of segment)
  • Physical address = Base[s] + d (if d < Limit[s])
Segmentation Address Translation:
  • Logical Address: (Segment s, Offset d)
  • Step 1: Look up Segment Table[s] → get Base and Limit
  • Step 2: If d >= Limit → Trap (Segmentation Fault!)
  • Step 3: Physical Address = Base + d
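The segment-table lookup can be sketched directly from those steps. The table contents below are hypothetical, chosen only for illustration:

```python
# Hypothetical segment table: segment number -> (base, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def seg_translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:
        # Step 2: offset beyond segment length → trap
        raise MemoryError("offset beyond segment limit - segmentation fault!")
    return base + d          # Step 3: physical address = base + offset

print(seg_translate(2, 53))  # 4300 + 53 = 4353
```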
Feature          Paging               Segmentation
Division basis   Fixed size (pages)   Logical units (segments)
Size             Equal size           Variable size
Fragmentation    Internal only        External only
User visible     No                   Yes
Address format   (page, offset)       (segment, offset)
Table used       Page Table           Segment Table

Segmentation with Paging

📖 Segmentation with Paging: Combines both techniques. Each segment is further divided into pages. Eliminates external fragmentation (thanks to paging) while keeping the logical view of the program (thanks to segmentation). Used in the Intel x86 architecture.
Address Translation:
  • Logical address → (Segment, Page, Offset)
  • Segment table → base address of page table for that segment
  • Page table → frame number
  • Physical address = Frame + Offset

2.9 Virtual Memory Concept

📖 Definition: Virtual memory is a technique that allows the execution of processes that may not be completely in memory. The logical address space can be much larger than physical memory. It gives an illusion of very large memory.
💡 Real-life Analogy:

Imagine your laptop has only 8GB RAM but you can run programs that need 16GB. The OS uses your hard disk as extra "virtual" RAM. You see 16GB of memory even though only 8GB is physically available.

✅ Benefits of Virtual Memory:
  • Programs can be larger than physical memory
  • More programs can run simultaneously
  • Less I/O needed to load/swap processes
  • Efficient process creation (copy-on-write)
Virtual Memory Concept:

Process View          Physical Memory (RAM)     Disk (Swap Space)
┌──────────┐          ┌──────────────────┐      ┌──────────────────┐
│ Page 0   │────────► │ Frame 3 (Page 0) │      │ Page 1 (on disk) │
│ Page 1   │          │ Frame 7 (Page 2) │      │ Page 3 (on disk) │
│ Page 2   │────────► │ Frame 1 (Page 4) │      └──────────────────┘
│ Page 3   │          │ ...              │
│ Page 4   │────────► └──────────────────┘
└──────────┘
(16GB virtual)        (8GB physical)            Pages not in RAM are on disk!
Fig 2.5 — Virtual Memory Mapping

2.10 Demand Paging

📖 Definition: Demand paging is a type of virtual memory management where pages are loaded into memory only when they are demanded (accessed) — not all at once. If page is not in memory, a page fault occurs and the OS loads the page.
Page Fault Handling Steps:
  • Step 1: CPU accesses a page → check page table
  • Step 2: If valid bit = 0 → Page Fault! Trap to OS
  • Step 3: OS finds page on disk (backing store)
  • Step 4: Find free frame in memory
  • Step 5: Bring page from disk to frame
  • Step 6: Update page table (valid bit = 1)
  • Step 7: Restart the instruction that caused page fault
Effective Access Time (EAT) with Demand Paging:
  • Formula: EAT = (1 - p) × memory access time + p × page fault time
  • Where: p = page fault rate (0 ≤ p ≤ 1); p = 0 → no page faults (ideal); p = 1 → every access is a page fault
  • Example: Memory access = 200ns, Page fault time = 8ms = 8,000,000ns, p = 0.001 (1 in 1000)
  • EAT = 0.999 × 200 + 0.001 × 8,000,000 = 199.8 + 8000 = 8199.8 ns ≈ 8.2 microseconds
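The demand-paging EAT calculation as a small helper, reproducing the example's numbers (the function name is illustrative):

```python
# EAT with demand paging: weighted mix of a normal memory access
# and a (much slower) page-fault service.
def demand_eat(mem_ns, fault_ns, p):
    return (1 - p) * mem_ns + p * fault_ns

print(round(demand_eat(200, 8_000_000, 0.001), 1))  # 8199.8
```

Note how a fault rate of just 1 in 1000 slows memory down by a factor of ~40, which is why keeping p tiny matters so much.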

2.11 Page Replacement & Algorithms

📖 Page Replacement: When a page fault occurs and no free frame is available, the OS must replace an existing page in memory with the required page. The algorithm decides WHICH page to replace.
⚠️ Goal: Minimize the number of page faults. Choose the page to replace that will result in the fewest future page faults.

Algorithm 1 — FIFO (First In First Out)

📖 FIFO: Replace the page that has been in memory the longest (first one that came in is first to go out).
💡 Solved Example:

Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2
Frames = 3

Ref:  7  0  1  2  0  3  0  4  2  3  0  3  2
────────────────────────────────────────────
F1:   7  7  7  2  2  2  2  4  4  4  0  0  0
F2:   -  0  0  0  0  3  3  3  2  2  2  2  2
F3:   -  -  1  1  1  1  0  0  0  3  3  3  3
────────────────────────────────────────────
PF:   ✓  ✓  ✓  ✓  -  ✓  ✓  ✓  ✓  ✓  ✓  -  -

Total Page Faults = 10
❌ Belady's Anomaly: For FIFO, increasing number of frames may actually increase page faults! (This is the anomaly.)
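The FIFO trace above can be verified with a short simulation (a minimal sketch; `fifo_faults` is a name chosen here):

```python
from collections import deque

# FIFO page replacement: evict the page that entered memory first.
def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict oldest arrival
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo_faults(refs, 3))  # 10
```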

Algorithm 2 — Optimal Page Replacement (OPT)

📖 OPT/MIN: Replace the page that will NOT be used for the longest period of time in the future. Gives minimum page faults — but requires future knowledge (not practically implementable). Used as benchmark.
💡 Solved Example:

Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 | Frames = 3

Strategy: Replace the page used FARTHEST in the future

Ref:  7  0  1  2  0  3  0  4  2  3  0  3  2
────────────────────────────────────────────
F1:   7  7  7  2  2  2  2  2  2  2  2  2  2
F2:   -  0  0  0  0  0  0  4  4  4  0  0  0
F3:   -  -  1  1  1  3  3  3  3  3  3  3  3
────────────────────────────────────────────
PF:   ✓  ✓  ✓  ✓  -  ✓  -  ✓  -  -  ✓  -  -

Total Page Faults = 7 (minimum possible!)
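OPT needs the whole future reference string, which is why it works only as a benchmark. A minimal simulation sketch (`opt_faults` is a name chosen here):

```python
# Optimal (OPT/MIN) replacement: evict the page whose next use is
# farthest in the future (or never used again).
def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            frames.discard(max(frames, key=next_use))  # farthest next use goes
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(opt_faults(refs, 3))  # 7
```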

Algorithm 3 — LRU (Least Recently Used)

📖 LRU: Replace the page that has not been used for the longest time in the past. Based on the principle of locality — recently used pages will likely be used again soon.
💡 Solved Example:

Reference String: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 | Frames = 3

Strategy: Replace the page NOT used for the longest time

Ref:  7  0  1  2  0  3  0  4  2  3  0  3  2
────────────────────────────────────────────
F1:   7  7  7  2  2  2  2  4  4  4  0  0  0
F2:   -  0  0  0  0  0  0  0  0  3  3  3  3
F3:   -  -  1  1  1  3  3  3  2  2  2  2  2
────────────────────────────────────────────
PF:   ✓  ✓  ✓  ✓  -  ✓  -  ✓  ✓  ✓  ✓  -  -

Total Page Faults = 9
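LRU only needs the past, so it is practical. A minimal simulation sketch tracking last-use times (`lru_faults` is a name chosen here):

```python
# LRU replacement: evict the page whose last use is furthest in the past.
def lru_faults(refs, nframes):
    last_used, faults = {}, 0
    for i, page in enumerate(refs):
        if page not in last_used:
            faults += 1
            if len(last_used) == nframes:
                victim = min(last_used, key=last_used.get)  # least recently used
                del last_used[victim]
        last_used[page] = i   # record/refresh this page's last use
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 3))  # 9
```

On this string: OPT 7 < LRU 9 < FIFO 10, matching the comparison table below.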
Algorithm   Replace                 Page Faults   Practical?   Anomaly
FIFO        Oldest page in memory   Most          Yes          Belady's
OPT         Farthest future use     Minimum       No           None
LRU         Least recently used     Medium        Yes          None
💿 Chapter 3 — Device Management
📊 DEVICE MANAGEMENT — MIND MAP
Disk Structure → Platter, Track, Sector, Cylinder, Head
Disk Formatting → Low-level, Partition, Logical formatting
Disk Scheduling → Minimize seek time → FCFS, SSTF, SCAN, C-SCAN, LOOK, C-LOOK
Seek Time → Time to move head to correct track
Rotational Latency → Time for sector to rotate under head
Transfer Time → Time to actually transfer data

3.1 Disk Structure

📖 Definition: A magnetic disk consists of circular platters coated with magnetic material. Data is stored on the surface of these platters and read/written by read-write heads.
Key Disk Components:
  • Platter: Circular disk coated with magnetic material. A disk may have multiple platters stacked together.
  • Track: Concentric circles on each platter surface. Each platter has hundreds of tracks.
  • Sector: Smallest unit of storage. Each track is divided into sectors (typically 512 bytes or 4KB).
  • Cylinder: Set of tracks at same position on all platters (same arm position across all surfaces).
  • Read-Write Head: Reads/writes data. One head per platter surface. All heads move together.
  • Disk Arm: Moves heads to correct track position.
Disk Structure Diagram
Fig 3.1 — Disk Structure
Disk Access Time:
  • Formula: Total Access Time = Seek Time + Rotational Latency + Transfer Time
  • Seek Time: Time to move arm to correct track
  • Rotational Latency: Time for sector to rotate under head
  • Transfer Time: Time to read/write actual data
  • Average Rotational Latency: 1/2 × rotation time (sector is halfway around on average)
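Average rotational latency follows directly from spindle speed. A quick helper, using a hypothetical 7200 RPM disk as an example:

```python
# Average rotational latency = half of one full rotation.
def avg_rotational_latency_ms(rpm):
    full_rotation_ms = 60_000 / rpm   # ms per revolution
    return full_rotation_ms / 2

print(round(avg_rotational_latency_ms(7200), 2))  # 4.17
```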

3.2 Disk Formatting

📖 Definition: Disk formatting is the process of preparing a disk for use by an operating system. It involves multiple stages to make the disk ready for data storage.
Stage 1 — Low-Level Formatting (Physical Formatting):
  • Divides disk into sectors that disk controller can read/write
  • Each sector = Header + Data area + Trailer (ECC)
  • Done at manufacturing time
  • ECC (Error Correcting Code) helps detect/correct errors
Stage 2 — Partition (Dividing Disk):
  • Divides disk into one or more groups of cylinders
  • Each partition treated as separate disk by OS
  • Example: C: drive, D: drive on Windows
  • Partition info stored in Partition Table
Stage 3 — Logical Formatting (High-Level Formatting):
  • Creates file system on partition
  • Stores initial file-system data structures onto disk
  • Creates FAT/inode structures, root directory, free space list
  • Example: Format as NTFS, FAT32, ext4
⚠️ Boot Block: To boot OS, the computer first runs a tiny bootstrap program stored in ROM/EEPROM. This loads the full bootstrap loader from a fixed location on disk (Boot Block / MBR — Master Boot Record).

3.3 Disk Scheduling Algorithms

📖 Definition: Disk scheduling algorithms determine the order in which disk I/O requests are serviced to minimize seek time and maximize disk throughput.
⚠️ Common Example Setup (used in all algorithms below):

Head starts at: 53
Request Queue: 98, 183, 37, 122, 14, 124, 65, 67
Disk range: 0 to 199

Algorithm 1 — FCFS (First Come First Served)

📖 FCFS: Service requests in the order they arrive. Simplest algorithm — no starvation but may result in a lot of head movement.
💡 FCFS Example:
Head starts at 53
Order: 53→98→183→37→122→14→124→65→67

Movement:
  53  → 98  =  45
  98  → 183 =  85
  183 → 37  = 146
  37  → 122 =  85
  122 → 14  = 108
  14  → 124 = 110
  124 → 65  =  59
  65  → 67  =   2
  ─────────────
  Total     = 640 cylinders
❌ Disadvantage: Does not optimize head movement. Wild swings across disk → poor performance.
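The FCFS total is just the sum of head jumps through the queue. A minimal sketch using the common setup above (`fcfs_movement` is a name chosen here):

```python
# Total head movement under FCFS: service requests in arrival order.
def fcfs_movement(start, queue):
    total, pos = 0, start
    for req in queue:
        total += abs(req - pos)
        pos = req
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_movement(53, queue))  # 640
```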

Algorithm 2 — SSTF (Shortest Seek Time First)

📖 SSTF: Service the request closest to current head position. Like SJF for disks — always pick nearest request.
💡 SSTF Example:
Head starts at 53
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Step 1: Closest to 53  → 65  (dist=12) → move to 65
Step 2: Closest to 65  → 67  (dist=2)  → move to 67
Step 3: Closest to 67  → 37  (dist=30) → move to 37
Step 4: Closest to 37  → 14  (dist=23) → move to 14
Step 5: Closest to 14  → 98  (dist=84) → move to 98
Step 6: Closest to 98  → 122 (dist=24) → move to 122
Step 7: Closest to 122 → 124 (dist=2)  → move to 124
Step 8: Closest to 124 → 183 (dist=59) → move to 183

Order: 53→65→67→37→14→98→122→124→183
Total = 236 cylinders (much better than FCFS!)
❌ Disadvantage: Starvation possible — requests far from current head may wait forever if closer requests keep arriving.

Algorithm 3 — SCAN (Elevator Algorithm)

📖 SCAN: Disk arm moves in one direction servicing all requests until it reaches end of disk, then reverses direction. Like an elevator going up then down.
💡 SCAN Example (moving towards higher):
Head starts at 53, moving towards higher numbers
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Moving UP (towards 199): 53→65→67→98→122→124→183→199 (end of disk)
Then reverses, moving DOWN: 199→37→14

Order: 53→65→67→98→122→124→183→199→37→14
Total = (199-53) + (199-14) = 146 + 185 = 331 cylinders
✅ No starvation — every request eventually served.
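SCAN's sweep can be simulated by splitting the queue around the head position. A sketch assuming the arm travels all the way to the disk end before reversing (`scan_movement` is a name chosen here):

```python
# SCAN moving toward higher cylinders first: sweep up to the disk
# end, then reverse and sweep down through the remaining requests.
def scan_movement(start, queue, disk_max=199):
    up = sorted(r for r in queue if r >= start)
    down = sorted((r for r in queue if r < start), reverse=True)
    total, pos = 0, start
    for r in up:
        total += r - pos
        pos = r
    if down:                      # go to the end, then reverse
        total += disk_max - pos
        pos = disk_max
        for r in down:
            total += pos - r
            pos = r
    return total

print(scan_movement(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 331
```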

Algorithm 4 — C-SCAN (Circular SCAN)

📖 C-SCAN: Like SCAN, but services requests only while moving in ONE direction (say, low to high). When it reaches the end, it jumps back to the beginning without servicing any requests on the return trip. Gives more uniform wait time than SCAN.
💡 C-SCAN Example:
Head at 53, moving towards higher numbers
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Moving UP: 53→65→67→98→122→124→183→199
Jump to 0 (no service on return): 199→0
Moving UP again: 0→14→37

Order: 53→65→67→98→122→124→183→199→0→14→37
Total = (199-53) + (199-0) + (37-0) = 146 + 199 + 37 = 382 cylinders

Algorithm 5 — LOOK

📖 LOOK: Like SCAN, but arm only goes as far as the last request in each direction — does NOT go all the way to end of disk. More efficient than SCAN.
💡 LOOK Example:
Head at 53, moving towards higher numbers
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Moving UP (only to 183, NOT to 199): 53→65→67→98→122→124→183
Reverse, moving DOWN (only to 14, NOT to 0): 183→37→14

Order: 53→65→67→98→122→124→183→37→14
Total = (183-53) + (183-14) = 130 + 169 = 299 cylinders
(Same order of service as SCAN, but the arm never travels to 199 or 0)

Algorithm 6 — C-LOOK

📖 C-LOOK: Combination of C-SCAN and LOOK. Services requests in one direction only up to last request, then jumps back to first request (not beginning of disk). Most efficient in practice.
💡 C-LOOK Example:
Head at 53, moving towards higher numbers
Queue: 98, 183, 37, 122, 14, 124, 65, 67

Moving UP (to last request = 183): 53→65→67→98→122→124→183
Jump back to FIRST (lowest) request, not 0: 183→14 (jump, no service)
Continue UP: 14→37

Order: 53→65→67→98→122→124→183→14→37
Total = (183-53) + (183-14) + (37-14) = 130 + 169 + 23 = 322 cylinders
Algorithm   Movement                           Starvation   Total (Example)   Best For
FCFS        Request order                      No           640               Light load
SSTF        Nearest first                      Yes          236               Performance
SCAN        Both directions, full sweep        No           331               Heavy load
C-SCAN      One direction, jump back           No           382               Uniform wait
LOOK        Both directions, to last request   No           299               Better than SCAN
C-LOOK      One direction, jump to first       No           322               Best practical
Ready for Exam? Finished reading it all? Now do the Quick Revision — formulas, key points and common mistakes, all in one place!
Quick Revision — Read in 10 Minutes!
📌 How to use: Read this 10 minutes before exam. All key points, formulas and exam tricks in one place!

🔒 Deadlocks — Must Know

🔴 4 Conditions (ALL must hold)
📖 What is it? Four conditions that must exist together for deadlock to happen. Break even ONE and deadlock cannot occur.
🔑 Key Points:
  • Mutual Exclusion: Resource used by one process only
  • Hold & Wait: Process holds resource while waiting for another
  • No Preemption: Cannot forcibly take resource from process
  • Circular Wait: P1 waits for P2, P2 waits for P3, P3 waits for P1
💡 Example: Two friends eating. Friend A holds the knife and needs the fork (Hold & Wait). Friend B holds the fork and needs the knife. Both wait forever.
🛡️ Prevention Methods
📖 What is it? Ways to prevent deadlock by breaking at least one of the four conditions.
🔑 Key Points:
  • Break Mutual Exclusion: Share the resource (not always possible)
  • Break Hold & Wait: Request ALL resources at once, or request one at a time
  • Break No Preemption: Allow taking resources forcibly from processes
  • Break Circular Wait: Order all resources and request in order only
💡 Example: Instead of asking for fork first, then knife - ask for BOTH at once. If you can't get both, don't grab fork.
🏦 Banker's Algorithm
📖 What is it? A deadlock avoidance algorithm that checks if a resource request is safe before granting it.
🔑 Key Formula:
Need[i][j] = Max[i][j] - Allocation[i][j]
Steps:
  1. Calculate Need for each process
  2. Find a process whose Need ≤ Available (Work)
  3. Let it finish — add its Allocation back to Work
  4. Repeat until all processes finish = SAFE STATE ✓
💡 Example: Bank checks: "Can I give this loan safely?" If all customers can eventually withdraw, then GRANT the loan.
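The safety check sketched in the steps above fits in a few lines of Python. This is a minimal illustration, not production code; the sample matrices match the solved Banker's question later in this unit, and note that the scan order determines *which* safe sequence is found first — any safe sequence proves the state safe.

```python
def is_safe(available, allocation, maximum):
    """Banker's safety algorithm: return a safe sequence of process
    indices if one exists, otherwise None.
    available: free units per resource type
    allocation/maximum: one row (list) per process."""
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(maximum, allocation)]
    work = list(available)
    finish = [False] * len(allocation)
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            # a process can finish if its remaining Need fits in Work
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                progressed = True
    return sequence if all(finish) else None

# Sample data (same numbers as the solved question in Section B)
alloc = [[1, 0, 0], [0, 1, 0], [1, 1, 1]]
maxm  = [[3, 2, 2], [0, 2, 0], [2, 2, 2]]
print(is_safe([3, 3, 2], alloc, maxm))  # → [0, 1, 2], i.e. P1→P2→P3 is also safe
```

Here the greedy scan finds P1 first (its Need [2,2,2] already fits in [3,3,2]); the textbook answer P2→P3→P1 is a different but equally valid safe sequence.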
🔍 Detection vs Recovery
📖 What is it? After deadlock happens, detect it and recover by removing deadlocked processes or resources.
🔑 Key Points:
  • Detection: Wait-for Graph (single resource) or Detection Algorithm (multiple resources)
  • Recovery Options: Terminate process OR Preempt resources and restart
  • Starvation Risk: Same process always becomes victim → it never completes
💡 Example: Deadlock happens → OS detects cycle → kills one process → others resume. But if same process always killed → starvation!
QUICK FORMULA:
  Need[i][j] = Max[i][j] - Allocation[i][j]
  Safe State → a safe sequence exists
  RAG: Cycle + single instance = DEADLOCK
  RAG: Cycle + multiple instances = MAYBE deadlock

💾 Memory Management — Must Know

📍 Address Binding
📖 What is it? When does CPU get to know the actual memory address where program will run?
🔑 Key Points:
  • Compile Time: Address fixed at compile time → program always at same location (Old)
  • Load Time: Address decided when program loaded → can move later (Better)
  • Execution Time: Address can change during execution using MMU (Modern OS - Best)
💡 Example: Compile time = writing phone number in pen. Load time = writing on paper (moveable). Execution time = GPS (updates location continuously).
🧠 Logical vs Physical
📖 What is it? CPU thinks it's using one address, but actually it's using a different one. MMU does the translation.
🔑 Key Points:
  • Logical Address: What CPU generates (CPU's view)
  • Physical Address: Actual RAM address (Real location)
  • MMU: Hardware that converts logical → physical using Base Register
  • Formula: Physical = Logical + Base Register
💡 Example: CPU asks for address 100. Base Register = 5000. Actual RAM location = 5100. CPU never knows the truth!
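The base-register translation is a one-line computation. A tiny sketch (the limit check is the standard companion to the base register; the limit value 4096 here is an assumed example):

```python
def mmu_translate(logical, base, limit):
    """Base/limit relocation: physical = logical + base,
    after checking the logical address is within the limit."""
    if logical >= limit:
        raise MemoryError("trap: address beyond limit register")
    return logical + base

print(mmu_translate(100, base=5000, limit=4096))  # → 5100
```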
📦 Fragmentation
📖 What is it? Wasted memory space that cannot be used by processes.
🔑 Key Points:
  • Internal: Wasted INSIDE a partition (fixed size) → Solve: use Paging
  • External: Wasted BETWEEN partitions (scattered free space) → Solve: Compaction or Paging
  • Paging: Solves EXTERNAL fragmentation (not internal)
💡 Example: Partition = 10KB, Program = 8KB → 2KB wasted inside (Internal). RAM = [10KB used][5KB free][10KB used][3KB free] → 3KB useless scattered piece (External).
📄 Paging Key Points
📖 What is it? Divide program into equal-sized pieces (pages) and fit them into RAM pieces (frames). No external fragmentation!
🔑 Key Points:
  • Page: Logical unit (program's view)
  • Frame: Physical unit (RAM's view)
  • Page Table: Maps page → frame number
  • External Fragmentation: ELIMINATED ✓
  • Internal Fragmentation: Only in LAST page (Small)
  • TLB: Fast cache for page table → speeds up translation
💡 Example: Book = Program, Pages = 4KB chunks, Frames in RAM = where pages fit. TLB = quickly remembers recent page-to-frame mappings.
📑 Segmentation vs Paging
📖 What is it? Two different memory management approaches with different pros/cons.
🔑 Comparison:
  • Paging: Fixed size, no external frag, but invisible to programmer
  • Segmentation: Variable size, logical view (DATA, CODE, STACK), but external fragmentation
  • Seg+Paging: Combine both = best of both worlds!
💡 Example: Book = Segmentation (chapters of different sizes). Book + divided pages = Seg+Paging (each chapter has numbered pages inside).
🖥️ Virtual Memory
📖 What is it? Illusion of more memory than physically available. Use disk as extra RAM.
🔑 Key Points:
  • Demand Paging: Load page only when accessed (not all at start)
  • Page Fault: Page not in RAM → load from disk
  • EAT Formula: EAT = (1-p)×memory_time + p×page_fault_time
  • Example: p=0.001 (1 in 1000) → with 200 ns memory access and 8 ms fault service, EAT ≈ 8.2 microseconds — one fault per 1000 accesses slows memory by ~40×
💡 Example: Laptop has 8GB RAM. Program needs 16GB. Pages not needed → stay on disk. Only active pages in RAM. MAGIC appears! ✨
PAGING FORMULAS:
  Physical = Base + Logical
  Page size = 2^n → offset = n bits
  TLB EAT = h×(t+m) + (1-h)×(t+2m)   [h = hit ratio, t = TLB time, m = memory time]
PAGE REPLACEMENT (ref string 7,0,1,2,0,3,0,4,2,3,0,3,2 | 3 frames):
  FIFO → 10 page faults (oldest replaced)
  OPT → 7 page faults (farthest future use — minimum possible)
  LRU → 9 page faults (least recently used)
  Belady's Anomaly → only in FIFO (more frames can mean more faults)
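Both EAT formulas are quick to evaluate. A small sketch with typical textbook timings (200 ns memory access, 8 ms fault service, 20 ns TLB lookup — all assumed values):

```python
def eat_demand_paging(p, mem_ns=200, fault_ns=8_000_000):
    """Effective access time with page-fault rate p:
    EAT = (1 - p) * memory_time + p * page_fault_time."""
    return (1 - p) * mem_ns + p * fault_ns

def eat_tlb(h, t_ns=20, m_ns=100):
    """Effective access time with TLB hit ratio h:
    hit  -> TLB lookup + one memory access
    miss -> TLB lookup + two memory accesses (page table, then data)."""
    return h * (t_ns + m_ns) + (1 - h) * (t_ns + 2 * m_ns)

print(eat_demand_paging(0.001))  # ≈ 8199.8 ns ≈ 8.2 microseconds
print(eat_tlb(0.9))              # ≈ 130 ns with a 90% hit ratio
```

Plugging an exam's own timings into these two functions checks any EAT numerical.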

💿 Device Management — Must Know

⏱️ Disk Access Time
📖 What is it? Total time to access a block on disk = Time to reach + Time to rotate + Time to read.
🔑 Formula:
Total = Seek + Rotational Latency + Transfer
Each Component:
  • Seek Time: Arm moves to correct track (~5-10ms)
  • Rotational Latency: Sector rotates under head (~3-8ms average)
  • Transfer Time: Actual reading/writing data (~small)
💡 Example: Arm moves to track (6ms) + wait for sector (4ms) + read data (1ms) = 11ms total. Most time spent moving arm!
🗂️ Disk Formatting Stages
📖 What is it? Three stages to prepare a blank disk to store files.
🔑 Stages:
  • Stage 1 - Low-level: Divide into sectors (done at factory). Add headers and error correction codes.
  • Stage 2 - Partition: Divide disk into logical drives (C:, D:, E: etc). Each partition independent.
  • Stage 3 - Logical: Create file system (NTFS, ext4). Define how files stored and accessed.
💡 Example: New disk → Divide into sectors → Split into partitions → Install Windows = now can store files!
📊 Scheduling Summary
📖 What is it? Different algorithms to decide which disk request to serve first.
🔑 Algorithms:
  • FCFS: First come first served (simple but slow) ❌
  • SSTF: Shortest seek time first (fastest BUT starvation risk) ⚡
  • SCAN: Elevator (arm goes all the way) → No starvation ✓
  • C-SCAN: Circular SCAN (serves one direction only) → Fair
  • LOOK: SCAN but stops at last request (not end) → Better
  • C-LOOK: Best practical choice (circular + stops at request)
💡 Exam Tip: SSTF = fastest but unfair. C-LOOK = best in real systems. SCAN = elevator (never gets stuck).
DISK SCHEDULING (Head=53, Queue=98,183,37,122,14,124,65,67):
  FCFS → 53→98→183→37→122→14→124→65→67 = 640
  SSTF → 53→65→67→37→14→98→122→124→183 = 236
  SCAN → 53→65→67→98→122→124→183→199→37→14 = 331
  C-SCAN → 53→65→67→98→122→124→183→199→(jump to 0)→14→37 = ~382
  LOOK → 53→65→67→98→122→124→183→37→14 (no end travel) = 299
  C-LOOK → 53→65→67→98→122→124→183→(jump)→14→37 = ~322
REMEMBER: SSTF = best performance but STARVATION possible! C-LOOK = best practical choice

⚠️ Common Exam Mistakes

❌ Deadlocks:
  • Don't say "any 1 condition causes deadlock" — ALL 4 must hold simultaneously
  • RAG cycle with multiple instances ≠ definite deadlock (only possible)
  • In Banker's: Need = Max - Allocation (NOT the other way!)
  • Unsafe state ≠ deadlock (an unsafe state may still avoid deadlock — it just isn't guaranteed to)
❌ Memory Management:
  • Internal fragmentation → fixed partition (not variable)
  • External fragmentation → variable partition (not fixed)
  • Paging eliminates external fragmentation (not internal)
  • Belady's Anomaly → only FIFO, not LRU or OPT
  • OPT is not practical (needs future knowledge)
❌ Disk Scheduling:
  • SSTF is NOT same as SCAN — SSTF can starve far requests
  • SCAN goes to END of disk, LOOK only goes to LAST REQUEST
  • C-SCAN jumps from end to beginning (no service on return)
  • C-LOOK jumps from last request to first request (not to 0)

🎯 Last Minute Exam Tips

✅ For Numerical Questions:
  • Banker's: Always draw table with Allocation, Max, Need, Available columns
  • Page Replacement: Show step-by-step frame contents, mark page faults with ✓
  • Disk Scheduling: Show movement order AND total cylinders moved
  • Draw Gantt chart for disk scheduling — examiners love it!
✅ For Theory Questions:
  • Always give definition + example (2 marks minimum)
  • Use comparison tables wherever possible
  • Draw diagrams (RAG, process states, paging translation)
  • Write formulas before solving numericals
Important Questions
📌 How to use: Each question is followed by its answer. Prepared according to the exam pattern and syllabus.

📝 Section A — 2 Marks Questions

2M
Q1. What is Deadlock? State any two necessary conditions for deadlock.
Deadlock: A situation where a set of processes are permanently blocked — each process is holding a resource and waiting for a resource held by another process in the set. No process can ever proceed.

Two necessary conditions:
1. Mutual Exclusion: At least one resource is non-shareable — only one process can use it at a time.
2. Hold and Wait: A process is holding at least one resource and waiting to acquire more resources held by other processes.
2M
Q2. Define Safe State. How is it related to deadlock avoidance?
Safe State: A state is safe if the system can allocate resources to each process in some order (called safe sequence) such that all processes can eventually complete without deadlock.

Relation: Deadlock avoidance ensures the system never moves into an unsafe state. Before granting any resource request, the OS checks if the new state is safe. If unsafe → request is denied temporarily.
2M
Q3. Differentiate between Logical Address and Physical Address.
Logical Address | Physical Address
Generated by CPU | Computed by MMU hardware
Visible to the programmer | Not visible to user/program
Also called Virtual Address | Also called Real Address
Mapped using the page table | Actual location in RAM
2M
Q4. What is Internal Fragmentation and External Fragmentation?
Internal Fragmentation: Memory allocated to a process is MORE than required. Wasted space is inside the allocated partition.
Example: Partition = 100KB, Process = 82KB → 18KB wasted inside.

External Fragmentation: Total free memory is enough but NOT contiguous. Cannot accommodate process even though enough total space exists.
Example: Free = 20KB + 15KB + 10KB = 45KB but process needs 40KB contiguous → fails.
2M
Q5. What is Paging? What is the role of a Page Table?
Paging: A memory management scheme that divides physical memory into fixed-size blocks called frames and logical memory into same-size blocks called pages. Eliminates external fragmentation.

Page Table: A data structure that maps each page number (logical) to a frame number (physical). When CPU generates a logical address (page, offset), the page table finds the corresponding frame to form the physical address.
2M
Q6. What is Virtual Memory? Write any two advantages.
Virtual Memory: A technique that allows execution of processes that may not be completely in physical memory. The logical address space can be larger than physical RAM — OS uses disk as extended memory.

Two Advantages:
1. Programs can be larger than physical memory
2. More processes can run simultaneously (increased degree of multiprogramming)
2M
Q7. What is Swapping in OS?
Swapping: A memory management technique where a process is temporarily moved out (swapped out) from main memory to a backing store (disk), and later brought back (swapped in) when needed.

Purpose: Allows more processes to run than physical memory can hold at one time.
Disadvantage: Slow — large transfer time between RAM and disk.
2M
Q8. What is Demand Paging? What happens during a Page Fault?
Demand Paging: Pages are loaded into memory only when they are demanded (accessed), not all at once when process starts.

Page Fault: Occurs when CPU accesses a page not currently in memory (valid bit = 0). OS handles it by:
1. Finding the page on disk
2. Loading it into a free frame
3. Updating page table
4. Restarting the faulted instruction
2M
Q9. Differentiate between SCAN and C-SCAN disk scheduling.
SCAN | C-SCAN
Moves in both directions | Moves in ONE direction only
Services requests on the return trip too | No service on return — jumps back to start
Non-uniform wait time | More uniform wait time
Less total movement | More total movement (travels to disk end, then jumps back)
2M
Q10. What is Belady's Anomaly? In which algorithm does it occur?
Belady's Anomaly: The counterintuitive phenomenon in FIFO page replacement where increasing the number of frames can result in MORE page faults instead of fewer.

Occurs only in: FIFO algorithm
Does NOT occur in: LRU and OPT algorithms

Example: For reference string 1,2,3,4,1,2,5,1,2,3,4,5 — with 3 frames → 9 faults, with 4 frames → 10 faults.
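The anomaly is easy to reproduce with a short FIFO simulator on the classic reference string:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == nframes:
                memory.discard(queue.popleft())  # evict the oldest arrival
            memory.add(page)
            queue.append(page)
    return faults

belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3))  # → 9
print(fifo_faults(belady, 4))  # → 10 — MORE frames, MORE faults!
```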

📋 Section B — 5 Marks Questions

5M
Q11. Explain Banker's Algorithm with example. Given: 3 processes P1, P2, P3 and 3 resource types A, B, C. Available = [3,3,2]. Allocation: P1=[1,0,0], P2=[0,1,0], P3=[1,1,1]. Max: P1=[3,2,2], P2=[0,2,0], P3=[2,2,2]. Find safe sequence.
Step 1 — Calculate Need = Max - Allocation:
Process | Allocation (A,B,C) | Max (A,B,C) | Need (A,B,C)
P1 | 1, 0, 0 | 3, 2, 2 | 2, 2, 2
P2 | 0, 1, 0 | 0, 2, 0 | 0, 1, 0
P3 | 1, 1, 1 | 2, 2, 2 | 1, 1, 1
Step 2 — Safety Algorithm:
Work = [3,3,2], Finish = [F, F, F]

• P2: Need=[0,1,0] ≤ [3,3,2] ✓ → Work=[3,4,2], Finish[P2]=T
• P3: Need=[1,1,1] ≤ [3,4,2] ✓ → Work=[4,5,3], Finish[P3]=T
• P1: Need=[2,2,2] ≤ [4,5,3] ✓ → Work=[5,5,3], Finish[P1]=T

✅ Safe Sequence: P2 → P3 → P1 (System is SAFE)
5M
Q12. Disk head is at position 53. Request queue: 98, 183, 37, 122, 14, 124, 65, 67. Apply SCAN algorithm (moving towards higher). Calculate total head movement.
SCAN (Elevator Algorithm) — Moving towards higher numbers first:

Sort requests: 14, 37, 65, 67, 98, 122, 124, 183
Head at 53, moving UP:

Moving UP (service 65, 67, 98, 122, 124, 183, then continue to disk end = 199):
  53 → 65 → 67 → 98 → 122 → 124 → 183 → 199
Then reverse, moving DOWN (service 37, 14):
  199 → 37 → 14
Path: 53→65→67→98→122→124→183→199→37→14
Total Movement:
  (65-53)+(67-65)+(98-67)+(122-98)+(124-122)+(183-124)+(199-183)+(199-37)+(37-14)
  = 12+2+31+24+2+59+16+162+23 = 331 cylinders
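A short script can double-check the arithmetic (disk assumed to span cylinders 0–199, as in the question; `scan_total` is an illustrative name):

```python
def scan_total(head, requests, disk_end=199):
    """SCAN (elevator), moving up first: sweep to the END of the disk,
    then reverse and service the remaining requests.
    Returns (service order, total head movement)."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    # travel to the physical end only if requests remain behind the head
    path = up + ([disk_end] if down else []) + down
    total, pos, order = 0, head, []
    for p in path:
        total += abs(p - pos)
        pos = p
        if p in requests:        # disk_end itself is not a serviced request
            order.append(p)
    return order, total

order, total = scan_total(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order, total)  # total = 331
```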
5M
Q13. Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 3 frames. Apply FIFO and OPT page replacement. Find total page faults for each.
FIFO — Replace oldest page in memory:
Ref: 7 0 1 2 0 3 0 4 2 3 0 3 2
F1:  7 7 7 2 2 2 2 4 4 4 0 0 0
F2:  - 0 0 0 0 3 3 3 2 2 2 2 2
F3:  - - 1 1 1 1 0 0 0 3 3 3 3
PF:  ✓ ✓ ✓ ✓ - ✓ ✓ ✓ ✓ ✓ ✓ - -
Total FIFO Page Faults = 10

OPT — Replace page used farthest in future:
Ref: 7 0 1 2 0 3 0 4 2 3 0 3 2
F1:  7 7 7 2 2 2 2 2 2 2 2 2 2
F2:  - 0 0 0 0 0 0 4 4 4 0 0 0
F3:  - - 1 1 1 3 3 3 3 3 3 3 3
PF:  ✓ ✓ ✓ ✓ - ✓ - ✓ - - ✓ - -
Total OPT Page Faults = 7 (Minimum possible!)
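Fault counts for FIFO, LRU and OPT can be verified with small simulators — a handy way to double-check a hand trace. A minimal sketch (tie-breaks among pages never used again do not change OPT's count):

```python
def fifo(refs, nframes):
    """FIFO: evict the page that arrived earliest."""
    frames, arrival, faults = set(), [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(arrival.pop(0))   # oldest arrival out
            frames.add(p)
            arrival.append(p)
    return faults

def lru(refs, nframes):
    """LRU: evict the page whose last use is furthest in the past."""
    frames, last, faults = set(), {}, 0
    for i, p in enumerate(refs):
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(min(frames, key=lambda q: last[q]))
            frames.add(p)
        last[p] = i
    return faults

def opt(refs, nframes):
    """OPT: evict the page whose NEXT use is farthest away (or never)."""
    frames, faults = set(), 0
    for i, p in enumerate(refs):
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                future = refs[i + 1:]
                victim = max(frames, key=lambda q: future.index(q)
                             if q in future else float("inf"))
                frames.discard(victim)
            frames.add(p)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(fifo(ref, 3), lru(ref, 3), opt(ref, 3))  # → 10 9 7
```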
5M
Q14. Explain Paging with diagram. How is logical address translated to physical address using Page Table?
Paging: Physical memory divided into fixed-size frames, logical memory into pages of same size. Page table maps page → frame.

Address Translation:
CPU generates: Logical Address = (Page No p, Offset d)
Step 1: Use page number p as index into the Page Table
Step 2: Get Frame Number f from Page Table[p]
Step 3: Physical Address = f × (page size) + d

Example: Page size = 4KB, Page no = 2, Offset = 100
Page Table: Page 2 → Frame 5
Physical Address = 5 × 4096 + 100 = 20580
Key Points: No external fragmentation. TLB caches page table entries for faster access. Internal fragmentation only in last page.
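The translation steps map directly to code. A minimal sketch (single-level page table modelled as a dict, 4 KB pages — illustrative assumptions):

```python
def translate(logical, page_table, page_size=4096):
    """Split a logical address into (page, offset), then map page -> frame."""
    page, offset = divmod(logical, page_size)
    frame = page_table[page]           # a missing key would be a page fault
    return frame * page_size + offset

page_table = {2: 5}                    # page 2 lives in frame 5
logical = 2 * 4096 + 100               # page 2, offset 100
print(translate(logical, page_table))  # → 20580 (= 5 × 4096 + 100)
```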
5M
Q15. Draw RAG for: P1 waiting for R1, R1 allocated to P2, P2 waiting for R2, R2 allocated to P1. Is there a deadlock?
Resource Allocation Graph:

(P1) ──request──► [R1] ──assign──► (P2)
  ▲                                 │
  │ assign                 request  │
  └──────────── [R2] ◄──────────────┘

Wait-for Graph (simplified):
P1 ──► P2 ──► P1 ← CYCLE!

Since a CYCLE EXISTS and each resource has a SINGLE instance:
✅ DEADLOCK EXISTS! P1 and P2 are permanently blocked.
5M
Q16. Explain the four necessary conditions for deadlock with example. How can each be prevented?
1. Mutual Exclusion: Resource held by only one process.
Prevention: Make resources shareable (not always possible).

2. Hold and Wait: Process holds resources while waiting for more.
Prevention: Request all resources at once before starting.

3. No Preemption: Resources cannot be taken away forcibly.
Prevention: Allow OS to preempt resources from waiting processes.

4. Circular Wait: Circular chain of processes waiting for each other.
Prevention: Assign numbers to resources, allow requests only in increasing order.

Note: All 4 conditions must hold simultaneously for deadlock. Eliminating even one prevents deadlock.
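Resource ordering (the Circular Wait prevention) can be sketched with two Python locks: the two threads ask for the locks in opposite orders, but acquisition always happens in rank order, so no cycle can form. Names like `acquire_in_order` and the rank assignments are illustrative, not a standard API:

```python
import threading

lock_a = threading.Lock()        # rank 1
lock_b = threading.Lock()        # rank 2
RANK = {lock_a: 1, lock_b: 2}    # global resource ordering

def acquire_in_order(*locks):
    """Break Circular Wait: always acquire in increasing rank order."""
    ordered = sorted(locks, key=RANK.get)
    for lock in ordered:
        lock.acquire()
    return ordered

def release(locks):
    for lock in reversed(locks):
        lock.release()

def worker(first, second, results, tag):
    # Callers pass locks in different orders, but acquisition is
    # always rank 1 -> rank 2, so a circular wait is impossible.
    held = acquire_in_order(first, second)
    results.append(tag)
    release(held)

results = []
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, results, "t1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, results, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # both threads finish — no deadlock
```

Without the sorting step, t1 holding lock_a while t2 holds lock_b could produce exactly the circular wait described above.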
5M
Q17. Compare FIFO, LRU and Optimal page replacement algorithms. Which gives minimum page faults and why?
Feature | FIFO | LRU | OPT
Replace | Oldest page | Least recently used | Farthest future use
Page Faults | Most | Medium | Minimum
Practical | Yes | Yes | No (needs future knowledge)
Belady's Anomaly | Possible | No | No
Overhead | Low | Medium | High
OPT gives minimum page faults because it always replaces the page that won't be needed for the longest time in the future — perfect decision every time. However it's not practical since future page references are not known in advance. It serves as a benchmark for other algorithms.
5M
Q18. Disk head at 50. Queue: 82, 170, 43, 140, 24, 16, 190. Apply SSTF. Show step-by-step movement and total cylinders traversed.
SSTF — Always move to the nearest request:
Head = 50, Queue = {82, 170, 43, 140, 24, 16, 190}

Step 1: From 50 → Distances: 82(32), 170(120), 43(7), 140(90), 24(26), 16(34), 190(140) → Nearest = 43 (dist=7) → Move to 43
Step 2: From 43 → Distances: 82(39), 170(127), 140(97), 24(19), 16(27), 190(147) → Nearest = 24 (dist=19) → Move to 24
Step 3: From 24 → Distances: 82(58), 170(146), 140(116), 16(8), 190(166) → Nearest = 16 (dist=8) → Move to 16
Step 4: From 16 → Distances: 82(66), 170(154), 140(124), 190(174) → Nearest = 82 (dist=66) → Move to 82
Step 5: From 82 → Distances: 170(88), 140(58), 190(108) → Nearest = 140 (dist=58) → Move to 140
Step 6: From 140 → Distances: 170(30), 190(50) → Nearest = 170 (dist=30) → Move to 170
Step 7: From 170 → Distance: 190(20) → Move to 190

Order: 50→43→24→16→82→140→170→190
Total = 7+19+8+66+58+30+20 = 208 cylinders
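The greedy nearest-first loop above is simple to code; a minimal SSTF sketch (`sstf` is an illustrative name) that reproduces the step-by-step answer:

```python
def sstf(head, requests):
    """SSTF: repeatedly service the pending request nearest the head."""
    pending = list(requests)
    order, total, pos = [], 0, head
    while pending:
        nearest = min(pending, key=lambda r: abs(r - pos))
        total += abs(nearest - pos)
        pos = nearest
        pending.remove(nearest)
        order.append(nearest)
    return order, total

order, total = sstf(50, [82, 170, 43, 140, 24, 16, 190])
print(order, total)  # → [43, 24, 16, 82, 140, 170, 190] 208
```

Note how the greedy choice keeps drifting toward nearby clusters — this is exactly why a distant request can starve under SSTF.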