
Operating Systems for Interviews: Every Concept With the Exact Answer Format Interviewers Expect

OS questions appear in 70% of product company interviews in India. Most students either memorise without understanding or skip OS entirely. This guide gives you crisp, interview-ready explanations for every concept that gets asked.

SCS Team · 28 February 2026 · 12 min read

Operating Systems is the most consistently tested theory subject in Indian tech interviews. Freshworks, Zoho, Oracle, and virtually every MNC will ask you at least 3-5 OS questions in a technical round.

The frustrating part: most OS textbooks are written for depth, not for interview performance. This guide is written in the opposite direction: every explanation ends with the exact words you should say in an interview.


1. Process vs Thread

This is the single most asked OS question. Get this exactly right.

Process: An independent program in execution. Has its own memory space (code, stack, heap, data). Creating a new process duplicates all of this, which makes it expensive.

Thread: A unit of execution within a process. Multiple threads share the process's memory space (heap, code, data), but each has its own stack and registers, which makes it lightweight.

Interview answer format:

"A process is an independent program instance with its own isolated memory space. A thread is a lightweight unit of execution that lives inside a process and shares the process's memory. Threads are cheaper to create and communicate faster (shared memory), but a bug in one thread can corrupt memory for all threads in the process. Processes are isolated β€” a crash in one doesn't affect others."

When the follow-up comes ("why use threads over processes?"):

"Threads are used when tasks need to share data frequently β€” like a web server handling multiple requests to the same in-memory cache. Processes are used when isolation matters β€” like a browser running each tab in its own process so a crashed tab doesn't kill the browser."


2. Process States and Lifecycle

A process moves through these states:

New → Ready ⇄ Running → Terminated
        ↑        ↓
        └──── Waiting (blocked on I/O)

(Running moves back to Ready when preempted; Waiting moves back to Ready, not straight to Running, when its I/O completes.)
  • New: Process is being created
  • Ready: In memory, waiting for CPU
  • Running: Currently executing on CPU
  • Waiting/Blocked: Waiting for I/O or an event (e.g., file read)
  • Terminated: Finished execution

Context switch: When the CPU switches from one process to another. The OS saves the current process's state (registers, program counter, stack pointer) in its PCB (Process Control Block) and loads the next process's state. This is pure overhead: no useful work happens during a context switch.
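The PCB is easy to picture as a small record. Here is a toy model for intuition only: the fields and the `cpu` dictionary are illustrative stand-ins, since a real kernel PCB holds far more state and the actual switch happens in low-level assembly.

```python
from dataclasses import dataclass, field

# Toy Process Control Block: the state the OS saves on a context switch.
@dataclass
class PCB:
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    state: str = "ready"

def context_switch(cpu, current, nxt):
    # Save the running process's CPU state into its PCB...
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    current.state = "ready"
    # ...then restore the next process's saved state onto the CPU.
    cpu["pc"] = nxt.program_counter
    cpu["regs"] = dict(nxt.registers)
    nxt.state = "running"

cpu = {"pc": 104, "regs": {"r1": 7}}
p1 = PCB(pid=1, state="running")
p2 = PCB(pid=2, program_counter=500, registers={"r1": 42})
context_switch(cpu, p1, p2)
# p1's progress is parked in its PCB; p2's saved state is now on the CPU.
```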


3. CPU Scheduling Algorithms

Interviewers love asking you to compare these. Know the trade-offs, not just the names.

FCFS (First Come First Served)

Simple. Processes run in order of arrival. Problem: the convoy effect, where one long process blocks all the short processes behind it.

SJF (Shortest Job First)

Run the shortest remaining job next. Optimal average waiting time. Problem: starvation, since long jobs may never run if short jobs keep arriving.

Round Robin

Each process gets a fixed time slice (quantum), typically 10-100 ms. After the quantum expires, it goes back to the ready queue. Used in most real operating systems. Good for interactive systems (every process gets the CPU regularly).

Priority Scheduling

Each process has a priority; higher priority runs first. Problem: starvation of low-priority processes, solved by aging (gradually increasing the priority of waiting processes).

Interview comparison:

"Round Robin is used in most modern OSes because it gives each process a fair share of CPU time, which keeps interactive applications responsive. SJF has the best average waiting time theoretically but requires knowing job length in advance β€” impractical. FCFS is simple but the convoy effect makes it poor for mixed workloads."


4. Deadlock

The four conditions (all four must hold simultaneously for deadlock to occur; these are the Coffman conditions):

  1. Mutual Exclusion: Resources can't be shared (only one process at a time)
  2. Hold and Wait: A process holds resources while waiting for more
  3. No Preemption: Resources can't be forcibly taken away
  4. Circular Wait: Process A waits for B, B waits for C, C waits for A

Classic example: Process P1 holds Resource R1, wants R2. Process P2 holds R2, wants R1. Neither can proceed.

Prevention strategies (break one condition):

  • Break Hold and Wait: request all resources at once
  • Break Circular Wait: impose a global ordering on resources; always request in order
  • Allow Preemption: if a process can't get all resources, release what it has

Deadlock avoidance: Banker's Algorithm. Before granting a resource, check whether the system would still be in a "safe state" (some order exists in which every process can finish). Rarely used in general-purpose OSes because it requires each process to declare its maximum resource needs in advance.

Deadlock detection + recovery: Let deadlocks happen, detect them (cycle in resource allocation graph), recover by killing processes or preempting resources.

Interview answer:

"Deadlock occurs when four conditions hold simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait. The most practical prevention is breaking circular wait by imposing a resource ordering β€” if all threads always acquire lock A before lock B, a circular wait on A and B becomes impossible."


5. Memory Management: Paging vs Segmentation

Paging

Physical memory is divided into fixed-size blocks called frames. Logical memory is divided into fixed-size blocks called pages. A page table maps logical page numbers to physical frame numbers.

Advantage: No external fragmentation (frames are fixed-size). Easy to allocate.

Problem: Internal fragmentation (last page may not be fully used). Page tables can be large.
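Address translation under paging is simple arithmetic: split the logical address into a page number and an offset, then swap the page number for a frame number. A toy sketch, where the page-table contents and the 4 KB page size are made-up values:

```python
PAGE_SIZE = 4096  # 4 KB pages, a common real-world choice

# Toy page table: logical page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 9}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE      # which logical page?
    offset = logical_addr % PAGE_SIZE     # position inside the page
    frame = page_table[page]              # in a real OS, a miss = page fault
    return frame * PAGE_SIZE + offset

# Logical address 4100 = page 1, offset 4 -> frame 2 -> physical 8196
physical = translate(4100)
```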

Segmentation

Memory is divided into variable-size segments corresponding to logical units (code segment, stack segment, data segment). A segment table maps segment name + offset to physical address.

Advantage: Matches programmer's view of memory. Segments can grow/shrink independently.

Problem: External fragmentation, since variable-size allocations leave gaps between segments.

Virtual Memory

Allows programs to use more memory than is physically available. Pages not currently needed are stored on disk (swap space). When a needed page isn't in RAM, a page fault occurs and the OS loads it from disk.

Page replacement algorithms:

  • LRU (Least Recently Used): Replace the page unused for the longest time. Good performance, but exact LRU is expensive to track, so real OSes approximate it (e.g., the clock algorithm)
  • FIFO: Replace the oldest loaded page. Simple, but suffers from Belady's anomaly (adding frames can increase faults)
  • Optimal: Replace the page not needed for the longest time in the future. The theoretical best, but not implementable because it requires knowing future references
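LRU is straightforward to simulate with an ordered map. This sketch counts page faults for a reference string (the reference string and frame count in the comment are made-up values):

```python
from collections import OrderedDict

def lru_page_faults(refs, frames):
    """Count page faults under LRU replacement for a reference string."""
    mem = OrderedDict()  # key order tracks recency: oldest first
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                  # miss: page fault
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used page
            mem[page] = True
    return faults

# lru_page_faults([1, 2, 3, 1, 4, 2], 3) -> 5 faults
```

Swapping `popitem(last=False)` for popping the insertion-oldest key without `move_to_end` turns this into FIFO, which is a quick way to compare the two on the same reference string.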

6. Synchronisation: Mutex, Semaphore, Monitor

Mutex (Mutual Exclusion lock): Binary lock. Only the thread that locked it can unlock it. Used to protect a critical section.

mutex.lock()
// critical section - only one thread here at a time
mutex.unlock()

Semaphore: A counter with two atomic operations. wait() (P) decrements the counter; if it is already 0, the caller blocks. signal() (V) increments the counter and wakes a waiting thread.

  • Binary semaphore: Like mutex but any thread can signal (release)
  • Counting semaphore: Allows N concurrent accesses (e.g., N database connections)

Producer-Consumer problem (classic):

empty = Semaphore(BUFFER_SIZE)  # initially N slots free
full = Semaphore(0)              # initially 0 items
mutex = Mutex()

Producer:
  wait(empty)     # wait for empty slot
  mutex.lock()
  add_item()
  mutex.unlock()
  signal(full)    # signal that a new item is available

Consumer:
  wait(full)      # wait for an item
  mutex.lock()
  remove_item()
  mutex.unlock()
  signal(empty)   # signal that a slot is now empty
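The pseudocode above maps almost one-to-one onto Python's threading primitives. Here is a runnable sketch with a single producer and a single consumer; BUFFER_SIZE and the item count N are arbitrary choices for illustration.

```python
import threading

BUFFER_SIZE = 4
buffer = []
empty = threading.Semaphore(BUFFER_SIZE)  # free slots, initially N
full = threading.Semaphore(0)             # items available, initially 0
mutex = threading.Lock()
consumed = []

def producer(n):
    for i in range(n):
        empty.acquire()          # wait for a free slot
        with mutex:
            buffer.append(i)     # critical section: touch the buffer
        full.release()           # signal: one more item available

def consumer(n):
    for _ in range(n):
        full.acquire()           # wait for an item
        with mutex:
            consumed.append(buffer.pop(0))
        empty.release()          # signal: one more free slot

N = 20
p = threading.Thread(target=producer, args=(N,))
c = threading.Thread(target=consumer, args=(N,))
p.start(); c.start(); p.join(); c.join()
# consumed now holds 0..19 in order; buffer is empty again
```

Note the order of operations matters: acquiring `mutex` before `empty` would deadlock when the buffer is full.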

Race condition: When outcome depends on the order of concurrent operations. Example: two threads both read count=5, both increment, both write count=6. Result should be 7 but is 6. Fixed with mutex.


7. Inter-Process Communication (IPC)

How do separate processes communicate?

Pipes: Unidirectional byte stream, e.g. shell pipes: ls | grep .py.

Message Queues: OS-managed queue. Processes send/receive messages without needing to be running simultaneously.

Shared Memory: Fastest IPC; processes map the same physical memory region. Requires synchronisation (mutex/semaphore) to prevent race conditions.

Sockets: Communication over a network (or locally). Used when processes may be on different machines.

Signals: Asynchronous notification. SIGKILL, SIGTERM, SIGSEGV. Interrupts the receiving process immediately.
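A pipe can be demonstrated in a few lines with os.pipe(). For portability this sketch uses two threads in one process rather than fork(), but the read and write ends behave the same way between processes: bytes go in one end, come out the other, and the reader sees EOF when the write end closes.

```python
import os
import threading

# A pipe is a unidirectional byte stream: one read end, one write end.
r, w = os.pipe()

def writer():
    os.write(w, b"hello through the pipe")
    os.close(w)              # closing the write end signals EOF to the reader

t = threading.Thread(target=writer)
t.start()

chunks = []
while True:
    data = os.read(r, 1024)
    if not data:             # empty read = EOF
        break
    chunks.append(data)
os.close(r)
t.join()

message = b"".join(chunks)   # b"hello through the pipe"
```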


8. System Calls

The interface between user programs and the OS kernel. When a program needs a privileged operation (disk I/O, network, forking a process), it makes a system call, which switches the CPU from user mode to kernel mode.

Common system calls:

  • fork() - create a new process (a copy of the current one)
  • exec() - replace the current process image with a new program
  • open(), read(), write(), close() - file operations
  • mmap() - map files or devices into memory
  • socket(), connect(), accept() - networking

Why not just call the kernel directly? User programs run in an unprivileged mode. Direct hardware access would let any program corrupt any other program's memory. System calls provide a controlled gateway with validation.
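You can get a feel for the file-related system calls through their thin Python wrappers in the os module, which operate on raw integer file descriptors rather than Python file objects. A sketch (the file name is made up; a temporary directory keeps it self-contained):

```python
import os
import tempfile

# os.open/os.write/os.read wrap the open/write/read system calls directly:
# each call crosses from user mode into the kernel and back.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)  # open(2): returns an fd
os.write(fd, b"written via a system call")           # write(2): raw bytes
os.close(fd)                                         # close(2)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 1024)                             # read(2): up to 1024 bytes
os.close(fd)
```

Compare this with Python's high-level `open()`, which buffers in user space precisely to avoid paying the user-to-kernel transition cost on every small read or write.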


9. Thrashing

When a system spends more time swapping pages between RAM and disk than actually executing processes.

Cause: Too many processes competing for too little RAM. Each process gets fewer pages than it needs, causing constant page faults.

Solution: Reduce the degree of multiprogramming (run fewer processes), add more RAM, or use the working set model: ensure each process has its working set (its recently used pages) in memory before it is allowed to run.
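The working set itself is just "the distinct pages referenced in the last few references". A tiny sketch (the reference string and window size are made-up values):

```python
def working_set(refs, t, window):
    """Distinct pages referenced in the last `window` references ending at time t."""
    start = max(0, t - window + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 2]
# At t=6 with window 4, the working set is {3, 4}: the process has settled
# into a small locality, so just two resident frames avoid constant faulting.
ws = working_set(refs, 6, 4)
```

The OS-level idea follows directly: only admit a process to the CPU if its current working set fits in the free frames, otherwise swapping it in just steals pages from other processes and drives thrashing.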


10. The Three Questions Always Asked

Every OS interview at Indian companies ends with some version of these three:

"What happens when you run a program?"

"The OS creates a new process, allocates memory (code, stack, heap), loads the program binary into memory, sets up file descriptors (stdin, stdout, stderr), and adds the process to the ready queue. The scheduler picks it up and it starts executing. When it calls the OS for I/O, it blocks and goes to the waiting state until the I/O completes."

"What is a zombie process?"

"A process that has finished execution but still has an entry in the process table because its parent hasn't called wait() to collect the exit status. It's not using CPU or memory, just an entry in the process table. Fixed by the parent calling wait(), or by reparenting the zombie to init which periodically reaps zombie children."

"Difference between mutex and semaphore?"

"A mutex is a locking mechanism β€” only the thread that locked it can unlock it, and it's binary (locked/unlocked). A semaphore is a signalling mechanism with a counter β€” any thread can signal it, and it can allow N concurrent accesses if initialised to N. Use mutex for exclusive access to a resource; use semaphore for controlling access count or signalling between threads."

Quick preparation: Use the AI Tutor to quiz yourself on OS concepts. Ask it: "Quiz me on OS scheduling algorithms β€” ask one question at a time and tell me when I'm wrong."

Ready to practice what you just learned?

Apply these concepts with AI-powered tools built for CS students.