
Concurrency & Multithreading

A complete guide from threads and race conditions through locks, concurrent data structures, async I/O, and advanced memory model topics - with code in Java, Python, and JavaScript, plus interview Q&A.

Why Concurrency Matters

Modern hardware has multiple cores. A single-threaded program uses one core - the rest sit idle. Concurrency lets you use all of them, increasing throughput. But concurrent programs are inherently harder to reason about: shared state + multiple threads = subtle bugs that are non-deterministic, hard to reproduce, and catastrophic in production.

The goal of this guide is to give you the mental models and practical tools to write concurrent code that is both correct (no data races, no deadlocks) and efficient (no unnecessary contention, maximum parallelism).

🧵

Thread Safety

Shared mutable state accessed correctly by multiple threads

🔒

Synchronization

Locks, monitors, semaphores, condition variables

⚡

Lock-Free Algorithms

CAS-based non-blocking operations

📦

Concurrent Collections

Thread-safe maps, queues, lists

🔄

Async / Non-Blocking

Event loops, CompletableFuture, asyncio

🧠

Memory Models

Visibility, ordering, happens-before

The cardinal rule: Shared mutable state is the root cause of most concurrency bugs. The safest concurrent code avoids shared mutable state entirely - use immutable objects, message-passing, or thread-local storage. When you must share mutable state, protect every access with synchronization.
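To make the first option concrete, here is a minimal sketch of an immutable value object shared freely across threads (the PriceQuote record and its fields are illustrative names, not from the guide):

```java
import java.util.concurrent.*;

public class ImmutableDemo {
    // All fields are final and never mutated after construction,
    // so the record is safe to share across threads with no locks.
    record PriceQuote(String symbol, double price) {}

    public static void main(String[] args) throws Exception {
        PriceQuote quote = new PriceQuote("ACME", 101.5);

        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Double> doubled = pool.submit(() -> quote.price() * 2);   // reader 1
        Future<Double> bumped  = pool.submit(() -> quote.price() + 1.0); // reader 2

        System.out.println(doubled.get() + " / " + bumped.get()); // 203.0 / 102.5
        pool.shutdown();
    }
}
```

Both readers can touch the quote concurrently because nothing about it can change after construction - there is simply no write to race against.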

Concept 1 - Beginner

Threads & Processes

"A thread is the smallest unit of execution. Multiple threads within a process share memory - communication is fast but safety requires discipline."

Process vs Thread

Aspect | Process | Thread
Memory | Separate address space | Shared with other threads
Creation cost | High (fork + exec) | Low (~10-100x cheaper)
Communication | IPC (pipes, sockets, mmap) | Shared memory (fast)
Failure isolation | Crash doesn't affect others | Crash kills entire process
Python GIL | Not affected (separate GIL) | GIL limits CPU parallelism
Use for | CPU-bound work, isolation | I/O-bound work, shared data

Thread Lifecycle

  • NEW - created, not started
  • RUNNABLE - ready to run
  • RUNNING - executing
  • BLOCKED/WAITING - suspended
  • TERMINATED - finished

When to use Threads

  • I/O-bound tasks (network, disk)
  • Background processing
  • Responsive UI (event thread)
  • Server request handling
  • Parallel independent tasks

Platform Differences

  • Java: OS threads, virtual threads (21+)
  • Python: OS threads, GIL-limited for CPU
  • Node.js: Event loop + libuv + Worker Threads
  • Go: goroutines (green threads)
  • Rust: OS threads, async runtime (tokio)

Code Example

// ── Threads & Processes in Java ──────────────────────────────

// ─── Creating threads: 3 ways ────────────────────────────────

// Way 1: Extend Thread
class CounterThread extends Thread {
    private final String name;
    private final int    upto;

    CounterThread(String name, int upto) {
        super(name);
        this.name = name;
        this.upto = upto;
    }

    @Override
    public void run() {
        for (int i = 1; i <= upto; i++) {
            System.out.printf("[%s] count=%d%n", name, i);
            try { Thread.sleep(50); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}

// Way 2: Implement Runnable (preferred - keeps class hierarchy free)
public class App {
    public static void main(String[] args) throws InterruptedException {

        // Runnable lambda
        Runnable task = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println("[Runnable] step " + i + " on " + Thread.currentThread().getName());
            }
        };

        Thread t1 = new Thread(task, "Worker-1");
        Thread t2 = new CounterThread("Counter", 5);

        t1.start();   // starts a new OS thread - NOT t1.run(), which would execute on the calling thread
        t2.start();

        // main thread continues here - t1, t2 run concurrently

        t1.join();    // main waits for t1 to finish
        t2.join();    // main waits for t2 to finish

        System.out.println("Both threads done.");

        // ── Thread lifecycle ─────────────────────────────────────
        // NEW → RUNNABLE → RUNNING → BLOCKED/WAITING/TIMED_WAITING → TERMINATED
        // (Note: Thread.State has no separate RUNNING value - a running thread reports RUNNABLE)
        System.out.println("t1 state: " + t1.getState()); // TERMINATED

        // ── Thread info ──────────────────────────────────────────
        Thread current = Thread.currentThread();
        System.out.println("Name:     " + current.getName());
        System.out.println("ID:       " + current.getId());
        System.out.println("Priority: " + current.getPriority()); // 1 (MIN) to 10 (MAX)
        System.out.println("Daemon:   " + current.isDaemon());    // daemon threads die with main
    }
}

// Way 3: ExecutorService (modern - preferred for production)
import java.util.concurrent.*;

class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Submit a Callable (returns a result)
        Future<Integer> future = pool.submit(() -> {
            Thread.sleep(100);
            return 42;
        });

        System.out.println("Result: " + future.get()); // blocks until done

        pool.shutdown();                            // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for in-flight tasks
    }
}
Concept 2 - Beginner

Thread Safety & Race Conditions

"A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of scheduling or interleaving, with no additional synchronization from the caller."

- Brian Goetz, Java Concurrency in Practice

What Makes Code Thread-Unsafe?

Thread safety issues arise when multiple threads access shared mutable state without adequate synchronization. Even an innocent-looking counter++ is three machine operations - read, add, write - and another thread can interleave between any two of them.

Race Condition

Two threads read-modify-write shared state and one update gets lost. counter++ by two threads → lost increments.

Check-Then-Act

A thread checks a condition, another thread changes it before the first acts. if (instance == null) create → both create!

Read-Modify-Write

Not just counters - any sequence that reads a value and writes back a derived one has this hazard. Lazy initialisation, caching, and object pooling all do a read-modify-write, and all need the whole sequence to be atomic.

Strategies for Thread Safety

1. Immutability - If an object's state never changes after construction, it's thread-safe by definition. No synchronization needed. Use final fields, unmodifiable collections, record/dataclass/const.
2. Thread Confinement - Only one thread ever accesses an object. ThreadLocal, local variables, actor model. Easiest - no sharing, no problem.
3. Synchronization - Protect all accesses to shared state with the same lock. Correct but adds overhead and risk of deadlock.
4. Atomic Variables - Use hardware-supported atomic operations (CAS). AtomicInteger, AtomicReference. Lock-free, high performance for single variables.
5. Concurrent Collections - Use thread-safe data structures (ConcurrentHashMap, BlockingQueue, CopyOnWriteArrayList) instead of synchronizing manually.
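Strategy 2 (thread confinement) deserves its own sketch: ThreadLocal gives each thread a private SimpleDateFormat, a class that is famously not thread-safe. The ConfinementDemo class name and thread counts are illustrative:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.*;

public class ConfinementDemo {
    // One SimpleDateFormat per thread: no instance is ever shared,
    // so no synchronization is needed despite the class being unsafe.
    static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> System.out.println(
                Thread.currentThread().getName() + ": " + FORMAT.get().format(new Date())));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Each worker thread lazily creates its own formatter on first use; the instances never cross thread boundaries, which is the whole point of confinement.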

Code Example - Race Condition & Fixes

// ── Race Conditions - the core danger of shared mutable state ─

// ─── Example: A counter incremented by 2 threads ──────────────
public class RaceConditionDemo {

    // โŒ NOT thread-safe
    static int unsafeCounter = 0;

    public static void main(String[] args) throws InterruptedException {

        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;
                // unsafeCounter++ is THREE operations:
                //   1. READ  current value from memory
                //   2. ADD   1
                //   3. WRITE result back to memory
                // If two threads interleave between steps 1 and 3,
                // one increment is LOST.
            }
        };

        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start(); t2.start();
        t1.join();  t2.join();

        // Expected: 200,000
        // Actual:   ~140,000-190,000 - varies every run!
        System.out.println("Unsafe: " + unsafeCounter);

        // ── Fix 1: synchronized method ───────────────────────────
        SafeCounter sc1 = new SafeCounterSync();
        Thread t3 = new Thread(() -> { for (int i=0;i<100_000;i++) sc1.increment(); });
        Thread t4 = new Thread(() -> { for (int i=0;i<100_000;i++) sc1.increment(); });
        t3.start(); t4.start(); t3.join(); t4.join();
        System.out.println("Sync:   " + sc1.get()); // always 200,000 ✓

        // ── Fix 2: AtomicInteger ─────────────────────────────────
        java.util.concurrent.atomic.AtomicInteger atomic =
            new java.util.concurrent.atomic.AtomicInteger(0);
        Thread t5 = new Thread(() -> { for (int i=0;i<100_000;i++) atomic.incrementAndGet(); });
        Thread t6 = new Thread(() -> { for (int i=0;i<100_000;i++) atomic.incrementAndGet(); });
        t5.start(); t6.start(); t5.join(); t6.join();
        System.out.println("Atomic: " + atomic.get()); // always 200,000 ✓
    }
}

interface SafeCounter { void increment(); int get(); }

// synchronized keyword: only one thread can hold the monitor at a time
class SafeCounterSync implements SafeCounter {
    private int count = 0;

    // Intrinsic lock on 'this' - entire method is a critical section
    public synchronized void increment() { count++; }
    public synchronized int  get()       { return count; }
}

// ── volatile - guarantees visibility, NOT atomicity ───────────
class StopFlag {
    // Without volatile, a thread's loop may NEVER see the update
    // made by another thread (CPU cache / compiler reordering).
    private volatile boolean running = true;

    public void stop()      { running = false; }
    public boolean isRunning() { return running; }
}
Concept 3 - Intermediate

Locks & Synchronization

"A critical section is code that accesses shared mutable state and must not be executed concurrently by more than one thread."

Lock Types Compared

Lock Type | Blocking | Readers | Best For
synchronized / Lock | Yes | 1 at a time | General mutual exclusion
ReentrantLock | Optional | 1 at a time | TryLock, timed, interruptible
ReadWriteLock | Writes only | Many concurrent | Read-heavy, rare writes
StampedLock | Optional | Many concurrent | Optimistic reads (Java 8+)
Semaphore | When full | N permits | Connection pools, rate limiting
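The Semaphore row maps naturally to the connection-pool use case. A minimal sketch (the ConnectionLimiter class, its query method, and the sleep standing in for a database call are all illustrative):

```java
import java.util.concurrent.*;

public class ConnectionLimiter {
    // At most 3 threads inside the "pool" at once; others block on acquire()
    private final Semaphore permits = new Semaphore(3);

    public String query(String sql) throws InterruptedException {
        permits.acquire();            // take a permit (blocks when none are left)
        try {
            Thread.sleep(50);         // pretend to talk to the database
            return "rows-for:" + sql;
        } finally {
            permits.release();        // always return the permit
        }
    }

    public static void main(String[] args) throws Exception {
        ConnectionLimiter limiter = new ConnectionLimiter();
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 10; i++) {
            int id = i;
            pool.submit(() -> {
                try { System.out.println(limiter.query("q" + id)); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Ten tasks contend for three permits, so at most three "queries" run at a time - the same shape as a bounded database connection pool or a rate limiter.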

โ˜ ๏ธ Deadlock โ€” Four Conditions & Prevention

All four must hold for deadlock:

  • โš ๏ธMutual Exclusion โ€” resource can't be shared
  • โš ๏ธHold and Wait โ€” holds one, waits for another
  • โš ๏ธNo Preemption โ€” can't forcibly take a resource
  • โš ๏ธCircular Wait โ€” A waits for B, B waits for A

Break any one condition to prevent it:

  • Consistent lock ordering โ€” global fixed order
  • TryLock with timeout โ€” release and retry
  • Use single lock for related resources
  • Lock-free algorithms (CAS)

Code Example

// ── Locks & Synchronization in Java ──────────────────────────

import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.locks.*;

// ─── 1. Intrinsic Lock (synchronized) ────────────────────────
public class BankAccount {
    private double balance;

    public BankAccount(double balance) { this.balance = balance; }

    // Method-level lock - holds 'this' monitor
    public synchronized void deposit(double amount) {
        balance += amount;
    }

    // Block-level lock - finer granularity
    public void transfer(BankAccount target, double amount) {
        synchronized (this) {
            synchronized (target) {
                // โš ๏ธ Potential deadlock if two threads call
                // a.transfer(b, 100) and b.transfer(a, 100) simultaneously
                balance        -= amount;
                target.balance += amount;
            }
        }
    }

    public synchronized double getBalance() { return balance; }
}

// ─── 2. ReentrantLock - explicit, more flexible ──────────────
public class SafeCounter {
    private int           count = 0;
    private final ReentrantLock lock = new ReentrantLock();

    public void increment() {
        lock.lock();               // acquire - must ALWAYS unlock in finally
        try {
            count++;
        } finally {
            lock.unlock();         // guaranteed release
        }
    }

    // TryLock - non-blocking attempt
    public boolean tryIncrement() {
        if (lock.tryLock()) {      // returns false immediately if locked
            try { count++; return true; }
            finally { lock.unlock(); }
        }
        return false;              // didn't acquire - no waiting
    }

    // Timed lock - wait at most 500ms
    public boolean timedIncrement() throws InterruptedException {
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try { count++; return true; }
            finally { lock.unlock(); }
        }
        return false;
    }

    public int get() { return count; }
}

// ─── 3. ReadWriteLock - multiple readers OR one writer ───────
public class CachedData {
    private Map<String, String> cache = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Lock readLock  = rwLock.readLock();
    private final Lock writeLock = rwLock.writeLock();

    // Many threads can read simultaneously
    public String get(String key) {
        readLock.lock();
        try {
            return cache.get(key);
        } finally {
            readLock.unlock();
        }
    }

    // Only one writer at a time - blocks all readers too
    public void put(String key, String value) {
        writeLock.lock();
        try {
            cache.put(key, value);
        } finally {
            writeLock.unlock();
        }
    }
}

// ─── 4. Condition Variables - precise thread coordination ────
public class BoundedBuffer<T> {
    private final Queue<T> buffer  = new LinkedList<>();
    private final int      maxSize;
    private final Lock     lock    = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public BoundedBuffer(int maxSize) { this.maxSize = maxSize; }

    public void produce(T item) throws InterruptedException {
        lock.lock();
        try {
            while (buffer.size() == maxSize)
                notFull.await();    // release lock, sleep, re-acquire when signalled
            buffer.add(item);
            System.out.println("Produced: " + item + " | size=" + buffer.size());
            notEmpty.signal();      // wake one waiting consumer
        } finally { lock.unlock(); }
    }

    public T consume() throws InterruptedException {
        lock.lock();
        try {
            while (buffer.isEmpty())
                notEmpty.await();
            T item = buffer.poll();
            System.out.println("Consumed: " + item + " | size=" + buffer.size());
            notFull.signal();
            return item;
        } finally { lock.unlock(); }
    }
}

// ─── 5. Deadlock - how it happens and how to avoid it ────────
// DEADLOCK: Thread A holds lock1 and waits for lock2.
//           Thread B holds lock2 and waits for lock1. Both wait forever.

// Prevention strategy: always acquire locks in a FIXED global order
public void safeTransfer(BankAccount from, BankAccount to, double amt) {
    // Order by identity hash - a consistent global order
    // (if the hashes ever collide, fall back to a single tie-breaker lock)
    BankAccount first  = System.identityHashCode(from) < System.identityHashCode(to) ? from : to;
    BankAccount second = first == from ? to : from;
    synchronized (first) {
        synchronized (second) {
            from.withdraw(amt);
            to.deposit(amt);
        }
    }
}
Concept 4 - Intermediate

Concurrent Data Structures

"Use the right data structure and half your synchronization work disappears."

Java Concurrent Data Structures

ConcurrentHashMap

Thread-safe key-value store. Multiple readers + writers. compute/merge atomic.

Much better than synchronizedMap under contention

LinkedBlockingQueue

Producer-consumer pipelines. Blocks producer when full, consumer when empty.

Good for bounded work queues

CopyOnWriteArrayList

Event listeners, config lists. Read-heavy, rare writes.

O(n) writes. Lock-free reads.

ConcurrentLinkedQueue

High-throughput non-blocking queue. No size limit.

Lock-free CAS operations

AtomicInteger/Long/Ref

Single-variable counters, flags, references. CAS-based.

Fastest thread-safe primitive

CountDownLatch/Barrier

Coordination: wait for N events, or sync N threads at a point.

One-shot (Latch) / reusable (Barrier)

Never use HashMap + Collections.synchronizedMap() when ConcurrentHashMap works. Synchronized wrappers lock the entire map for every operation โ€” poor throughput under contention. ConcurrentHashMap locks individual bins (Java 8+) and uses CAS for most operations, giving near-linear scalability.

Code Example

// ── Concurrent Data Structures in Java ───────────────────────

import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
import java.util.*;

public class ConcurrentDataStructuresDemo {
    public static void main(String[] args) throws Exception {

        // ─── 1. ConcurrentHashMap - thread-safe HashMap ───────
        // Segment-based locking (Java 7) → CAS + bin-level locking (Java 8+)
        // Much better throughput than Collections.synchronizedMap(map)
        ConcurrentHashMap<String, Integer> wordCount = new ConcurrentHashMap<>();

        // Thread-safe merge operation (compute is atomic)
        wordCount.merge("hello", 1, Integer::sum);
        wordCount.merge("hello", 1, Integer::sum);
        wordCount.computeIfAbsent("world", k -> 0);
        wordCount.compute("world", (k, v) -> v == null ? 1 : v + 1);

        System.out.println(wordCount); // {hello=2, world=1}

        // ─── 2. BlockingQueue - producer-consumer backbone ────
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(10); // bounded

        // Producer thread
        Thread producer = new Thread(() -> {
            String[] tasks = {"task-1", "task-2", "task-3", "POISON"};
            for (String t : tasks) {
                try {
                    queue.put(t);     // blocks if full
                    System.out.println("Produced: " + t);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        // Consumer thread
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String task = queue.take(); // blocks if empty
                    if ("POISON".equals(task)) break;
                    System.out.println("Consumed: " + task);
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start(); consumer.start();
        producer.join();  consumer.join();

        // ─── 3. CopyOnWriteArrayList - read-heavy workloads ───
        // Every write creates a NEW copy of the array.
        // Reads are always lock-free (see the unmodified copy).
        // Best when reads >> writes (event listeners, config lists).
        CopyOnWriteArrayList<String> listeners = new CopyOnWriteArrayList<>();
        listeners.add("listener-1");
        listeners.add("listener-2");

        // Safe to iterate while another thread modifies
        for (String l : listeners) {
            System.out.println("Notifying: " + l);
        }

        // ─── 4. AtomicInteger / AtomicReference / AtomicLong ──
        AtomicInteger counter = new AtomicInteger(0);
        counter.incrementAndGet();               // i++, thread-safe
        counter.addAndGet(5);                    // += 5, thread-safe
        counter.compareAndSet(6, 10);            // CAS: if val==6, set to 10

        // AtomicReference โ€” CAS on object references
        AtomicReference<String> ref = new AtomicReference<>("initial");
        boolean swapped = ref.compareAndSet("initial", "updated");
        System.out.println(swapped + " " + ref.get()); // true updated

        // ─── 5. ConcurrentLinkedQueue - non-blocking queue ────
        // Uses lock-free CAS operations โ€” no blocking, no contention
        ConcurrentLinkedQueue<Integer> clq = new ConcurrentLinkedQueue<>();
        clq.offer(1); clq.offer(2); clq.offer(3);
        System.out.println(clq.poll()); // 1

        // ─── 6. CountDownLatch - wait for N events ────────────
        CountDownLatch latch = new CountDownLatch(3);

        for (int i = 0; i < 3; i++) {
            int id = i;
            new Thread(() -> {
                System.out.println("Service-" + id + " started");
                latch.countDown(); // decrement counter
            }).start();
        }

        latch.await(); // main blocks until count reaches 0
        System.out.println("All services ready - starting app");

        // ─── 7. CyclicBarrier - all threads reach a point ─────
        CyclicBarrier barrier = new CyclicBarrier(3, () ->
            System.out.println("All workers reached barrier - proceeding"));

        for (int i = 0; i < 3; i++) {
            int id = i;
            new Thread(() -> {
                System.out.println("Worker-" + id + " at barrier");
                try { barrier.await(); } catch (Exception e) {}
                System.out.println("Worker-" + id + " proceeding");
            }).start();
        }
    }
}
Concept 5 - Intermediate

Concurrency Patterns

Producer-Consumer

Producers put items into a bounded buffer; consumers take them. Decouples production rate from consumption rate. Implemented via BlockingQueue / asyncio.Queue.

Thread Pool

Reuse a fixed set of threads instead of creating/destroying per task. Bounds memory use and creation overhead. ExecutorService, ThreadPoolExecutor.

Fork/Join

Recursively split large tasks into smaller ones (fork), compute in parallel, combine results (join). Best for divide-and-conquer algorithms.

Active Object

Method calls are turned into message objects queued to a dedicated worker thread. The caller gets a Future. Decouples call from execution.

Circuit Breaker

Track failures to an external service. After N failures, open the circuit (fail fast). After timeout, try again (half-open). Prevents cascade failures.

Pipeline

Chain stages connected by queues. Each stage is a worker thread or coroutine. Data flows through: parse → validate → enrich → persist.

Code Example

// ── Concurrency Patterns in Java ─────────────────────────────

import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

// ─── 1. Producer-Consumer ────────────────────────────────────
// Already shown with BoundedBuffer. Here: using BlockingQueue directly.
public class OrderProcessingSystem {
    private static final BlockingQueue<Order> orders = new LinkedBlockingQueue<>(50);
    private static final AtomicBoolean running = new AtomicBoolean(true);

    // Multiple producers - web handlers push orders
    static class OrderReceiver implements Runnable {
        public void run() {
            while (running.get()) {
                Order order = receiveFromNetwork(); // blocks on I/O
                try { orders.put(order); }
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        }
        Order receiveFromNetwork() { return new Order(); }
    }

    // Multiple consumers - worker threads process orders
    static class OrderProcessor implements Runnable {
        public void run() {
            while (running.get() || !orders.isEmpty()) {
                try {
                    Order order = orders.poll(100, TimeUnit.MILLISECONDS);
                    if (order != null) process(order);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        }
        void process(Order o) { System.out.println("Processing " + o); }
    }

    static class Order {}   // minimal placeholder so the example compiles
}

// ─── 2. Thread Pool ──────────────────────────────────────────
// Creating a thread per request is expensive. Pools reuse threads.
public class WebServer {
    // Executors.newFixedThreadPool(n): n threads, unbounded queue
    // Executors.newCachedThreadPool(): grows/shrinks as needed
    // Executors.newScheduledThreadPool(n): for scheduled tasks

    private final ExecutorService pool = new ThreadPoolExecutor(
        4,            // corePoolSize
        16,           // maximumPoolSize
        60L,          // keepAliveTime
        TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(100),       // work queue, bounded!
        new ThreadPoolExecutor.CallerRunsPolicy() // rejection policy
    );

    public void handleRequest(Runnable request) {
        pool.submit(request);
    }

    public void shutdown() {
        pool.shutdown();
        try { pool.awaitTermination(30, TimeUnit.SECONDS); }
        catch (InterruptedException e) { pool.shutdownNow(); }
    }
}

// ─── 3. Fork/Join - divide and conquer parallelism ───────────
import java.util.concurrent.RecursiveTask;

public class ParallelSum extends RecursiveTask<Long> {
    private final long[] array;
    private final int    start, end;
    private static final int THRESHOLD = 10_000;

    public ParallelSum(long[] array, int start, int end) {
        this.array = array; this.start = start; this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {
            // Small enough - compute directly
            long sum = 0;
            for (int i = start; i < end; i++) sum += array[i];
            return sum;
        }
        int mid = (start + end) / 2;
        ParallelSum left  = new ParallelSum(array, start, mid);
        ParallelSum right = new ParallelSum(array, mid, end);
        left.fork();                  // submit left to pool
        long rightResult = right.compute(); // compute right directly
        long leftResult  = left.join();     // wait for left
        return leftResult + rightResult;
    }
}

// Usage (e.g. inside a main method):
ForkJoinPool pool = ForkJoinPool.commonPool();
long[] data = new long[10_000_000];
java.util.Arrays.fill(data, 1L);
long total = pool.invoke(new ParallelSum(data, 0, data.length));
System.out.println("Sum: " + total); // 10,000,000

// ─── 4. CompletableFuture - async pipelines ──────────────────
CompletableFuture<String> pipeline =
    CompletableFuture
        .supplyAsync(() -> fetchUser(1))            // async
        .thenApplyAsync(user -> fetchOrders(user))  // chain
        .thenApplyAsync(orders -> formatReport(orders))
        .exceptionally(e -> "Error: " + e.getMessage());

String result = pipeline.get(5, TimeUnit.SECONDS);
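The Circuit Breaker card above has no code in this section, so here is a deliberately simplified sketch. The class, the three-state enum, and the Supplier-based API are illustrative - a production breaker (e.g. Resilience4j) also tracks success rates, half-open probe counts, and metrics:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int  failureThreshold;
    private final long openTimeoutMillis;
    private final AtomicInteger failures = new AtomicInteger();
    private volatile State state = State.CLOSED;
    private volatile long  openedAt;

    public CircuitBreaker(int failureThreshold, long openTimeoutMillis) {
        this.failureThreshold  = failureThreshold;
        this.openTimeoutMillis = openTimeoutMillis;
    }

    public synchronized <T> T call(Supplier<T> action, T fallback) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt < openTimeoutMillis)
                return fallback;              // open: fail fast, don't touch the service
            state = State.HALF_OPEN;          // timeout elapsed: allow one probe call
        }
        try {
            T result = action.get();
            failures.set(0);                  // success: close the circuit
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            if (state == State.HALF_OPEN || failures.incrementAndGet() >= failureThreshold) {
                state = State.OPEN;           // trip (or re-trip) the breaker
                openedAt = System.currentTimeMillis();
            }
            return fallback;
        }
    }
}
```

With a threshold of 2, two consecutive failures trip the circuit; further calls return the fallback immediately without touching the failing service until the timeout expires.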
Concept 6 - Intermediate

Async & Non-Blocking I/O

"Non-blocking I/O lets one thread manage thousands of concurrent connections by never waiting - it starts an I/O operation and processes other work until the OS signals completion."

Blocking vs Non-Blocking vs Async

Model | Thread while waiting? | Good for | Example
Blocking I/O | Blocked (idle) | Simple code, few connections | socket.read(), file.read()
Non-blocking I/O | Returns immediately | Polling, high connection count | O_NONBLOCK, NIO selector
Async I/O (callback) | Free - callback fires on completion | High concurrency, I/O bound | Node.js fs.readFile
Async I/O (await) | Coroutine suspended, not blocked | Readable async code | asyncio, CompletableFuture

✅ Use Async / Non-Blocking when:

  • ✓ High number of concurrent I/O operations
  • ✓ Long-running waits (network, disk, DB)
  • ✓ Each operation is I/O-bound, not CPU-bound
  • ✓ You need high throughput with few threads

❌ Avoid Async when:

  • ✗ Work is CPU-bound (use threads/processes)
  • ✗ Sequential logic is simpler and latency is fine
  • ✗ Operations are interdependent and complex to compose
  • ✗ Debugging async stack traces is already painful

Code Example

// ── Async & Non-Blocking I/O in Java ─────────────────────────

import java.util.concurrent.*;
import java.util.stream.IntStream;
import java.net.http.*;
import java.net.URI;

// ─── CompletableFuture - Java's async pipeline ───────────────
public class AsyncDemo {

    public static void main(String[] args) throws Exception {

        // ── Basic async task ─────────────────────────────────
        CompletableFuture<String> future = CompletableFuture
            .supplyAsync(() -> {
                simulateDelay(200);
                return "user-data";
            });                          // runs on ForkJoinPool.commonPool()

        // Non-blocking - do other work while waiting
        System.out.println("Main thread doing other work...");

        String result = future.get();    // blocks only when we need the result
        System.out.println("Got: " + result);

        // ── Chaining: thenApply, thenCompose, thenAccept ──────
        CompletableFuture<String> pipeline = CompletableFuture
            .supplyAsync(() -> fetchUserId())           // returns int
            .thenApplyAsync(id -> fetchUser(id))        // int → User (async)
            .thenApplyAsync(user -> fetchOrders(user))  // User → List<Order>
            .thenApplyAsync(orders -> formatSummary(orders)) // → String
            .exceptionally(ex -> "Error: " + ex.getMessage());

        // ── Parallel composition ──────────────────────────────
        CompletableFuture<String> userFuture   = CompletableFuture.supplyAsync(() -> fetchUser(1));
        CompletableFuture<String> configFuture = CompletableFuture.supplyAsync(() -> fetchConfig());

        // Wait for BOTH
        CompletableFuture<String> combined = userFuture
            .thenCombine(configFuture, (user, cfg) -> user + " + " + cfg);

        // Wait for EITHER (race)
        CompletableFuture<String> either = userFuture.applyToEither(configFuture, s -> s);

        // Wait for ALL N futures
        CompletableFuture.allOf(userFuture, configFuture).join();

        // ── Non-blocking HTTP client (Java 11+) ───────────────
        HttpClient client  = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.example.com/users/1"))
            .header("Accept", "application/json")
            .GET()
            .build();

        // Completely non-blocking - no thread blocked waiting
        CompletableFuture<String> httpFuture = client
            .sendAsync(request, HttpResponse.BodyHandlers.ofString())
            .thenApply(HttpResponse::body);

        System.out.println(httpFuture.get(5, TimeUnit.SECONDS));

        // ── Virtual Threads (Java 21 — Project Loom) ─────────
        // Lightweight threads scheduled by the JVM, not the OS.
        // Can create MILLIONS of them — a virtual thread does not
        // pin an OS thread or reserve a large native stack.
        try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 1_000_000).forEach(i ->
                vt.submit(() -> {
                    Thread.sleep(100);   // blocks virtual thread, not OS thread
                    return i * 2;
                })
            );
        } // auto-shutdown — waits for all

        // One virtual thread per request — simple synchronous-style code
        // at the throughput of non-blocking I/O!
    }

    static void simulateDelay(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    static int    fetchUserId()    { simulateDelay(100); return 1; }
    static String fetchUser(int i) { simulateDelay(100); return "user-" + i; }
    static String fetchConfig()    { simulateDelay(50);  return "config"; }
    static String fetchOrders(String u) { return "orders-of-" + u; }
    static String formatSummary(String o) { return "summary: " + o; }
}
Concept 7 — Advanced

Advanced — Memory Models & Internals

Key Advanced Concepts

Java Memory Model (JMM)

Defines when writes by one thread are guaranteed visible to another. Without synchronization, the JIT compiler and CPU can reorder operations for performance. volatile and locks establish happens-before ordering.

CPU Cache Coherence

Modern CPUs have per-core L1/L2 caches (and typically a shared L3). A write on core 1 may not be immediately visible on core 2 without a memory barrier. volatile inserts the barriers that make writes visible across cores.

Lock-Free CAS

Compare-And-Swap: read value, compute new value, atomically swap only if current matches expected. If another thread changed it, retry. Foundation of all Java Atomic classes.
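The read-compute-swap loop can be sketched with AtomicInteger.compareAndSet. This is an illustrative class of our own (CasDemo), not part of any library:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private final AtomicInteger counter = new AtomicInteger(0);

    // Hand-rolled equivalent of incrementAndGet(): read the current value,
    // compute the new one, swap only if no other thread changed it meanwhile.
    public int increment() {
        while (true) {
            int current = counter.get();
            int next    = current + 1;
            if (counter.compareAndSet(current, next)) {
                return next;  // CAS succeeded: our update won
            }
            // CAS failed: another thread updated counter first; retry
        }
    }

    public int get() { return counter.get(); }

    public static void main(String[] args) throws InterruptedException {
        CasDemo demo = new CasDemo();
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) demo.increment();
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(demo.get());  // always 40000: no lost updates
    }
}
```

Under contention the loop may spin a few times, but no thread ever blocks; that is the non-blocking progress property the Atomic classes build on.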

False Sharing

CPU cache lines are typically 64 bytes. If two threads modify independent variables that happen to share a cache line, each write invalidates the other core's copy of the line, causing needless coherence traffic. The @Contended annotation pads fields onto separate lines.
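Because @Contended is restricted outside the JDK unless the JVM runs with -XX:-RestrictContended, manual padding is a common alternative. A hypothetical layout sketch (the names PaddedCounters, counterA, counterB are ours; the performance effect only shows up in benchmarks, not in this code):

```java
public class PaddedCounters {
    // Two hot fields written by two different threads. Without padding they
    // would likely share one 64-byte cache line and ping-pong it between cores.
    public volatile long counterA;

    // 7 longs = 56 bytes of padding, pushing counterB onto a different line.
    // (The JVM may reorder fields, so real implementations often use
    // inheritance tricks or @Contended to make the padding stick.)
    long p1, p2, p3, p4, p5, p6, p7;

    public volatile long counterB;
}
```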

Virtual Threads (Java 21)

Lightweight JVM-managed threads. You can create millions of them — far more than the OS could handle as native threads. Synchronous-style code with the throughput of non-blocking I/O. Revolutionary for I/O-bound servers.

Python contextvars

Python 3.7+ ContextVar provides async-safe per-context storage. Unlike threading.local(), works correctly with async code where multiple coroutines interleave on one thread.

happens-before Rules (Java)

Program order: within a single thread, every action happens-before the next action in program order
Thread start: everything before thread.start() happens-before anything inside the thread's run()
Thread join: everything inside run() happens-before anything after thread.join() in the calling thread
Monitor unlock: unlocking a monitor happens-before any subsequent lock of the same monitor
volatile write: a volatile write happens-before any subsequent volatile read of the same variable
Transitivity: if A happens-before B and B happens-before C, then A happens-before C
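The start and join rules are why the following sketch needs no volatile and no lock (HappensBeforeDemo is an illustrative class of ours):

```java
public class HappensBeforeDemo {
    static int data = 0;  // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        data = 42;  // (1) written before start()

        Thread worker = new Thread(() -> {
            // (2) thread-start rule: guaranteed to see data == 42
            data = data * 2;
        });
        worker.start();
        worker.join();  // (3) thread-join rule: worker's write is now visible

        System.out.println(data);  // prints 84, guaranteed
    }
}
```

Remove the join() and the final read becomes a data race: the main thread might print 42, 84, or anything the compiler is allowed to assume.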

Code Example

// ── Advanced — Java Memory Model & happens-before ────────────

import java.util.concurrent.atomic.*;
import java.util.concurrent.locks.*;

// ─── 1. Java Memory Model (JMM) ──────────────────────────────────
// Each thread has its own working memory (CPU cache).
// The JMM defines WHEN a write by one thread becomes VISIBLE to another.

// ─── 2. volatile — guarantees visibility + ordering ───────────
class StopFlag {
    // WITHOUT volatile: JIT may cache 'running' in a register.
    // The worker thread may NEVER see running=false.
    private volatile boolean running = true;  // ← volatile fixes this

    public void stop()         { running = false; }
    public boolean isRunning() { return running; }
}

// volatile DOES NOT guarantee atomicity for compound operations!
class BadVolatileCounter {
    private volatile int counter = 0;
    // counter++ reads, increments, writes — NOT atomic even with volatile!
    public void increment() { counter++; } // ← STILL a race condition
}

// Use AtomicInteger for atomic compound operations
class GoodAtomicCounter {
    private final AtomicInteger counter = new AtomicInteger(0);
    public void increment()    { counter.incrementAndGet(); }  // atomic
    public int  get()          { return counter.get(); }
}

// ─── 3. happens-before relationship ──────────────────────────
// A happens-before B means: all writes in A are visible in B.
//
// Rules that establish happens-before:
//   • Thread start: everything before start() is visible inside run()
//   • Thread join: everything in run() is visible after join() returns
//   • synchronized: unlock of monitor X happens-before lock of X
//   • volatile write happens-before subsequent volatile read of same var
//   • Thread.interrupt() happens-before the interrupt being detected

// ─── 4. Double-Checked Locking — common pattern, subtle bug ──
// WITHOUT volatile — broken in Java!
class SingletonBroken {
    private static SingletonBroken instance;
    public static SingletonBroken getInstance() {
        if (instance == null) {
            synchronized (SingletonBroken.class) {
                if (instance == null)
                    instance = new SingletonBroken();
                    // PROBLEM: object creation is 3 steps:
                    // 1. Allocate memory
                    // 2. Call constructor
                    // 3. Assign to instance
                    // JVM can reorder steps 2 and 3!
                    // Another thread sees instance != null but uninitialized object
            }
        }
        return instance;
    }
}

// CORRECT double-checked locking — volatile prevents reordering
class SingletonCorrect {
    private static volatile SingletonCorrect instance; // ← volatile!
    public static SingletonCorrect getInstance() {
        if (instance == null) {
            synchronized (SingletonCorrect.class) {
                if (instance == null)
                    instance = new SingletonCorrect(); // safe with volatile
            }
        }
        return instance;
    }
}

// BEST: Initialisation-on-Demand Holder — lazy, thread-safe, no sync needed
class SingletonBest {
    private SingletonBest() {}
    private static class Holder {
        // Class loading is atomic โ€” JVM guarantees this is initialised once
        static final SingletonBest INSTANCE = new SingletonBest();
    }
    public static SingletonBest getInstance() { return Holder.INSTANCE; }
}

// ─── 5. StampedLock — optimistic reads ────────────────────────
// Java 8+ — adds an optimistic read mode: readers don't block at all
class OptimisticAccount {
    private final StampedLock sl = new StampedLock();
    private double balance;

    public double readBalance() {
        // Optimistic read: no lock taken — just a stamp
        long stamp = sl.tryOptimisticRead();
        double snapshot = this.balance;  // read WITHOUT a lock
        if (!sl.validate(stamp)) {       // if a write happened since our read...
            stamp = sl.readLock();       // ...fall back to a real read lock
            try     { snapshot = this.balance; }
            finally { sl.unlockRead(stamp); }
        }
        return snapshot;
    }
}
Section 8 — All Levels

Interview Q&A

"13 hand-picked questions across Beginner, Intermediate, and Advanced levels โ€” with detailed, production-grade answers."


Concurrency At a Glance

Problem → Root Cause
Race Condition: shared mutable state, unprotected compound ops
Deadlock: circular lock acquisition order
Livelock: threads repeatedly yield to each other
Starvation: some threads never get scheduled
Visibility bug: CPU cache / compiler reordering
Missed notification: signal sent before thread waits
Thread leak: threads not properly terminated
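Of these, deadlock has the most mechanical fix: impose one global lock-acquisition order. A minimal sketch, assuming every account carries a unique id (the Account class here is illustrative, not from the guide's earlier examples):

```java
public class Account {
    final long id;   // unique id defines the global lock order
    long balance;

    Account(long id, long balance) { this.id = id; this.balance = balance; }

    // Always lock the lower-id account first, regardless of transfer
    // direction, so two opposing transfers can never hold the locks in
    // opposite orders (no circular wait, hence no deadlock).
    static void transfer(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;
        Account second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance   += amount;
            }
        }
    }
}
```

Without the ordering, transfer(a, b) and transfer(b, a) running concurrently could each grab their first lock and wait forever for the other's.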