A complete guide from threads and race conditions through locks, concurrent data structures, async I/O, and advanced memory model topics, with code in Java, Python, and JavaScript, plus interview Q&A.
Modern hardware has multiple cores. A single-threaded program uses one core while the rest sit idle. Concurrency lets you use all of them, increasing throughput. But concurrent programs are inherently harder to reason about: shared state + multiple threads = subtle bugs that are non-deterministic, hard to reproduce, and catastrophic in production.
The goal of this guide is to give you the mental models and practical tools to write concurrent code that is both correct (no data races, no deadlocks) and efficient (no unnecessary contention, maximum parallelism).
- Thread Safety: shared mutable state accessed correctly by multiple threads
- Synchronization: locks, monitors, semaphores, condition variables
- Lock-Free Algorithms: CAS-based non-blocking operations
- Concurrent Collections: thread-safe maps, queues, lists
- Async / Non-Blocking: event loops, CompletableFuture, asyncio
- Memory Models: visibility, ordering, happens-before
The cardinal rule: Shared mutable state is the root cause of most concurrency bugs. The safest concurrent code avoids shared mutable state entirely: use immutable objects, message-passing, or thread-local storage. When you must share mutable state, protect every access with synchronization.
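To make the cardinal rule concrete, here is a minimal sketch (the class names Temperature and NoSharedStateDemo are illustrative, not from this guide): an immutable value type that is safe to share freely, and a ThreadLocal that gives each thread its own private buffer so nothing is shared at all.

```java
// 1. Immutability: all fields final, no setters - safe to share across threads.
final class Temperature {
    private final double celsius;
    Temperature(double celsius) { this.celsius = celsius; }
    double celsius() { return celsius; }
    // "Mutation" returns a new object instead of changing this one.
    Temperature plus(double delta) { return new Temperature(celsius + delta); }
}

public class NoSharedStateDemo {
    // 2. Thread-local storage: each thread gets its own instance - never shared.
    private static final ThreadLocal<StringBuilder> BUFFER =
        ThreadLocal.withInitial(StringBuilder::new);

    public static String label(Temperature t) {
        StringBuilder sb = BUFFER.get();  // this thread's private builder
        sb.setLength(0);
        return sb.append(t.celsius()).append(" C").toString();
    }

    public static void main(String[] args) {
        Temperature t = new Temperature(20.0);
        Temperature warmer = t.plus(1.5);   // t is untouched
        System.out.println(label(warmer));  // 21.5 C
        System.out.println(label(t));       // 20.0 C
    }
}
```

Because Temperature can never change after construction, any number of threads can read it with no locks at all; the ThreadLocal avoids sharing the mutable StringBuilder in the first place.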
"A thread is the smallest unit of execution. Multiple threads within a process share memory - communication is fast, but safety requires discipline."
| Aspect | Process | Thread |
|---|---|---|
| Memory | Separate address space | Shared with other threads |
| Creation cost | High (fork + exec) | Low (~10-100x cheaper) |
| Communication | IPC (pipes, sockets, mmap) | Shared memory (fast) |
| Failure isolation | Crash doesn't affect others | Crash kills entire process |
| Python GIL | Not affected (separate GIL) | GIL limits CPU parallelism |
| Use for | CPU-bound work, isolation | I/O-bound, shared data |
// ── Threads & Processes in Java ──────────────────────────────
// ─── Creating threads: 3 ways ────────────────────────────────
// Way 1: Extend Thread
class CounterThread extends Thread {
private final String name;
private final int upto;
CounterThread(String name, int upto) {
super(name);
this.name = name;
this.upto = upto;
}
@Override
public void run() {
for (int i = 1; i <= upto; i++) {
System.out.printf("[%s] count=%d%n", name, i);
try { Thread.sleep(50); } catch (InterruptedException e) {
Thread.currentThread().interrupt();
return;
}
}
}
}
// Way 2: Implement Runnable (preferred - keeps the class hierarchy free)
public class App {
public static void main(String[] args) throws InterruptedException {
// Runnable lambda
Runnable task = () -> {
for (int i = 0; i < 5; i++) {
System.out.println("[Runnable] step " + i + " on " + Thread.currentThread().getName());
}
};
Thread t1 = new Thread(task, "Worker-1");
Thread t2 = new CounterThread("Counter", 5);
t1.start(); // starts a NEW thread - never call t1.run(), which runs on the caller's thread
t2.start();
// main thread continues here - t1, t2 run concurrently
t1.join(); // main waits for t1 to finish
t2.join(); // main waits for t2 to finish
System.out.println("Both threads done.");
// ── Thread lifecycle (Thread.State) ──────────────────────
// NEW → RUNNABLE → BLOCKED / WAITING / TIMED_WAITING → TERMINATED
// (a RUNNABLE thread may or may not be on a CPU at any instant)
System.out.println("t1 state: " + t1.getState()); // TERMINATED
// ── Thread info ──────────────────────────────────────────
Thread current = Thread.currentThread();
System.out.println("Name: " + current.getName());
System.out.println("ID: " + current.getId());
System.out.println("Priority: " + current.getPriority()); // 1 (MIN) to 10 (MAX)
System.out.println("Daemon: " + current.isDaemon()); // daemon threads don't keep the JVM alive
}
}
// Way 3: ExecutorService (modern - preferred for production)
// (snippet - assumes import java.util.concurrent.* at the top of the file)
ExecutorService pool = Executors.newFixedThreadPool(4);
// Submit a Callable (returns a result)
Future<Integer> future = pool.submit(() -> {
Thread.sleep(100);
return 42;
});
System.out.println("Result: " + future.get()); // blocks until done
pool.shutdown(); // stop accepting new tasks
pool.awaitTermination(5, TimeUnit.SECONDS);

"A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of scheduling or interleaving, with no additional synchronization from the caller."
– Brian Goetz, Java Concurrency in Practice
Thread safety issues arise when multiple threads access shared mutable state without adequate synchronization. Even an innocent-looking counter++ is three machine operations (read, add, write), and another thread can interleave between any two of them.
- Lost update: two threads read-modify-write shared state and one update is lost - counter++ from two threads drops increments.
- Check-then-act (TOCTOU): a thread checks a condition, another thread changes it before the first acts - if (x == null) create() runs in both threads, and both create.
- Not just counters: getAndAdd/compareAndSwap-style patterns, lazy initialisation, caching, and pooling all have the same problem.
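A minimal sketch of the check-then-act trap and its fix (the names CheckThenActDemo and createConnection are illustrative): the broken version lets two threads both observe null and both create, while ConcurrentHashMap.computeIfAbsent performs the check and the act as one atomic step.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CheckThenActDemo {
    static final AtomicInteger CREATED = new AtomicInteger();

    // Expensive resource we only ever want ONE of per key.
    static String createConnection(String key) {
        CREATED.incrementAndGet();
        return "conn-for-" + key;
    }

    // BROKEN check-then-act: both threads can see null and BOTH create.
    static String getBroken(Map<String, String> cache, String key) {
        if (cache.get(key) == null) {              // CHECK
            cache.put(key, createConnection(key)); // ACT - another thread may have acted in between
        }
        return cache.get(key);
    }

    // SAFE: check-and-act in one atomic call - at most one create per key.
    static String getSafe(ConcurrentHashMap<String, String> cache, String key) {
        return cache.computeIfAbsent(key, CheckThenActDemo::createConnection);
    }

    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        Thread a = new Thread(() -> getSafe(cache, "db"));
        Thread b = new Thread(() -> getSafe(cache, "db"));
        a.start(); b.start(); a.join(); b.join();
        // computeIfAbsent guarantees the factory ran at most once for "db"
        System.out.println("created " + CREATED.get() + " connection(s)");
    }
}
```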
// ── Race Conditions - the core danger of shared mutable state ──
// ─── Example: a counter incremented by 2 threads ─────────────
public class RaceConditionDemo {
// NOT thread-safe
static int unsafeCounter = 0;
public static void main(String[] args) throws InterruptedException {
Runnable increment = () -> {
for (int i = 0; i < 100_000; i++) {
unsafeCounter++;
// unsafeCounter++ is THREE operations:
// 1. READ current value from memory
// 2. ADD 1
// 3. WRITE result back to memory
// If two threads interleave between steps 1 and 3,
// one increment is LOST.
}
};
Thread t1 = new Thread(increment);
Thread t2 = new Thread(increment);
t1.start(); t2.start();
t1.join(); t2.join();
// Expected: 200,000
// Actual: typically 140,000-190,000 - varies every run!
System.out.println("Unsafe: " + unsafeCounter);
// ── Fix 1: synchronized method ───────────────────────────
SafeCounter sc1 = new SafeCounterSync();
Thread t3 = new Thread(() -> { for (int i=0;i<100_000;i++) sc1.increment(); });
Thread t4 = new Thread(() -> { for (int i=0;i<100_000;i++) sc1.increment(); });
t3.start(); t4.start(); t3.join(); t4.join();
System.out.println("Sync: " + sc1.get()); // always 200,000
// ── Fix 2: AtomicInteger ─────────────────────────────────
java.util.concurrent.atomic.AtomicInteger atomic =
new java.util.concurrent.atomic.AtomicInteger(0);
Thread t5 = new Thread(() -> { for (int i=0;i<100_000;i++) atomic.incrementAndGet(); });
Thread t6 = new Thread(() -> { for (int i=0;i<100_000;i++) atomic.incrementAndGet(); });
t5.start(); t6.start(); t5.join(); t6.join();
System.out.println("Atomic: " + atomic.get()); // always 200,000
}
}
interface SafeCounter { void increment(); int get(); }
// synchronized keyword: only one thread can hold the monitor at a time
class SafeCounterSync implements SafeCounter {
private int count = 0;
// Intrinsic lock on 'this' โ entire method is a critical section
public synchronized void increment() { count++; }
public synchronized int get() { return count; }
}
// ── volatile - guarantees visibility, NOT atomicity ──────────
class StopFlag {
// Without volatile, a thread's loop may NEVER see the update
// made by another thread (CPU cache / compiler reordering).
private volatile boolean running = true;
public void stop() { running = false; }
public boolean isRunning() { return running; }
}

"A critical section is code that accesses shared mutable state and must not be executed concurrently by more than one thread."
| Lock Type | Blocking | Readers | Best For |
|---|---|---|---|
| synchronized / Lock | Yes | 1 at a time | General mutual exclusion |
| ReentrantLock | Optional | 1 at a time | TryLock, timed, interruptible |
| ReadWriteLock | Writes only | Many concurrent | Read-heavy, rare writes |
| StampedLock | Optional | Many concurrent | Optimistic reads (Java 8+) |
| Semaphore | When full | N permits | Connection pools, rate limiting |
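The Semaphore row in the table is the only primitive with no code elsewhere in this section, so here is a minimal, hypothetical bounded connection pool (the class name ConnectionPool and the string "connections" are illustrative): the semaphore's N permits cap how many threads can hold a connection at once.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;

public class ConnectionPool {
    private final Semaphore permits;            // limits concurrent borrowers to N
    private final BlockingQueue<String> conns;  // the pooled "connections"

    public ConnectionPool(int size) {
        permits = new Semaphore(size, true);    // fair: FIFO handout under contention
        conns = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) conns.add("conn-" + i);
    }

    public String acquire() throws InterruptedException {
        permits.acquire();   // blocks when all N permits are taken
        return conns.take(); // guaranteed non-empty: one permit per available conn
    }

    public void release(String conn) {
        conns.add(conn);
        permits.release();   // wakes one blocked acquirer, if any
    }

    public int available() { return permits.availablePermits(); }

    public static void main(String[] args) throws InterruptedException {
        ConnectionPool pool = new ConnectionPool(2);
        String c1 = pool.acquire();
        String c2 = pool.acquire();
        System.out.println("available: " + pool.available()); // 0 - a third acquire() would block
        pool.release(c1);
        System.out.println("available: " + pool.available()); // 1
    }
}
```

The same shape works for rate limiting: acquire a permit per request, release it when done.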
All four Coffman conditions must hold for deadlock:
1. Mutual exclusion - the resources cannot be shared
2. Hold and wait - a thread holds one lock while waiting for another
3. No preemption - locks cannot be forcibly taken away
4. Circular wait - a cycle of threads, each waiting for the next one's lock
Break any one condition to prevent deadlock - in practice, usually the circular wait, via a fixed lock-acquisition order.
// ── Locks & Synchronization in Java ──────────────────────────
import java.util.*;
import java.util.concurrent.locks.*;
import java.util.concurrent.*;
// ─── 1. Intrinsic Lock (synchronized) ────────────────────────
public class BankAccount {
private double balance;
public BankAccount(double balance) { this.balance = balance; }
// Method-level lock - holds the 'this' monitor
public synchronized void deposit(double amount) {
balance += amount;
}
// Block-level lock - finer granularity
public void transfer(BankAccount target, double amount) {
synchronized (this) {
synchronized (target) {
// WARNING: potential deadlock if two threads call
// a.transfer(b, 100) and b.transfer(a, 100) simultaneously
balance -= amount;
target.balance += amount;
}
}
}
public synchronized double getBalance() { return balance; }
}
// ─── 2. ReentrantLock - explicit, more flexible ──────────────
public class SafeCounter {
private int count = 0;
private final ReentrantLock lock = new ReentrantLock();
public void increment() {
lock.lock(); // acquire - must ALWAYS unlock in finally
try {
count++;
} finally {
lock.unlock(); // guaranteed release
}
}
// tryLock - non-blocking attempt
public boolean tryIncrement() {
if (lock.tryLock()) { // returns false immediately if locked
try { count++; return true; }
finally { lock.unlock(); }
}
return false; // didn't acquire - no waiting
}
// Timed lock - wait at most 500 ms
public boolean timedIncrement() throws InterruptedException {
if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
try { count++; return true; }
finally { lock.unlock(); }
}
return false;
}
public int get() { return count; }
}
// ─── 3. ReadWriteLock - multiple readers OR one writer ───────
public class CachedData {
private Map<String, String> cache = new HashMap<>();
private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
private final Lock readLock = rwLock.readLock();
private final Lock writeLock = rwLock.writeLock();
// Many threads can read simultaneously
public String get(String key) {
readLock.lock();
try {
return cache.get(key);
} finally {
readLock.unlock();
}
}
// Only one writer at a time - blocks all readers too
public void put(String key, String value) {
writeLock.lock();
try {
cache.put(key, value);
} finally {
writeLock.unlock();
}
}
}
// ─── 4. Condition Variables - precise thread coordination ────
public class BoundedBuffer<T> {
private final Queue<T> buffer = new LinkedList<>();
private final int maxSize;
private final Lock lock = new ReentrantLock();
private final Condition notFull = lock.newCondition();
private final Condition notEmpty = lock.newCondition();
public BoundedBuffer(int maxSize) { this.maxSize = maxSize; }
public void produce(T item) throws InterruptedException {
lock.lock();
try {
while (buffer.size() == maxSize)
notFull.await(); // release lock, sleep, re-acquire when signalled
buffer.add(item);
System.out.println("Produced: " + item + " | size=" + buffer.size());
notEmpty.signal(); // wake one waiting consumer
} finally { lock.unlock(); }
}
public T consume() throws InterruptedException {
lock.lock();
try {
while (buffer.isEmpty())
notEmpty.await();
T item = buffer.poll();
System.out.println("Consumed: " + item + " | size=" + buffer.size());
notFull.signal();
return item;
} finally { lock.unlock(); }
}
}
// ─── 5. Deadlock - how it happens and how to avoid it ────────
// DEADLOCK: Thread A holds lock1 and waits for lock2.
// Thread B holds lock2 and waits for lock1. Both wait forever.
// Prevention strategy: always acquire locks in a FIXED global order
public void safeTransfer(BankAccount from, BankAccount to, double amt) {
// Order by identity hash for a consistent global ordering
// (in production, break ties with a dedicated tie-breaker lock,
// since identity hashes can collide)
BankAccount first = System.identityHashCode(from) < System.identityHashCode(to) ? from : to;
BankAccount second = first == from ? to : from;
synchronized (first) {
synchronized (second) {
from.withdraw(amt); // assumes BankAccount also defines withdraw()
to.deposit(amt);
}
}
}

"Use the right data structure and half your synchronization work disappears."
- ConcurrentHashMap: thread-safe key-value store; multiple readers and writers; compute/merge are atomic. Much better than synchronizedMap under contention.
- BlockingQueue: producer-consumer pipelines; blocks the producer when full and the consumer when empty. Good for bounded work queues.
- CopyOnWriteArrayList: event listeners, config lists; read-heavy, rare writes. O(n) writes, lock-free reads.
- ConcurrentLinkedQueue: high-throughput non-blocking queue with no size limit. Lock-free CAS operations.
- AtomicInteger / AtomicReference: single-variable counters, flags, references; CAS-based. The fastest thread-safe primitives.
- CountDownLatch / CyclicBarrier: coordination - wait for N events, or sync N threads at a point. One-shot (latch) vs reusable (barrier).
Never use HashMap + Collections.synchronizedMap() when ConcurrentHashMap works. Synchronized wrappers lock the entire map for every operation, giving poor throughput under contention. ConcurrentHashMap locks individual bins (Java 8+) and uses CAS for most operations, giving near-linear scalability.
// ── Concurrent Data Structures in Java ───────────────────────
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
import java.util.*;
public class ConcurrentDataStructuresDemo {
public static void main(String[] args) throws Exception {
// ─── 1. ConcurrentHashMap - thread-safe HashMap ──────
// Segment-based locking (Java 7); CAS + bin-level locking (Java 8+)
// Much better throughput than Collections.synchronizedMap(map)
ConcurrentHashMap<String, Integer> wordCount = new ConcurrentHashMap<>();
// Thread-safe merge operation (compute is atomic)
wordCount.merge("hello", 1, Integer::sum);
wordCount.merge("hello", 1, Integer::sum);
wordCount.computeIfAbsent("world", k -> 0);
wordCount.compute("world", (k, v) -> v == null ? 1 : v + 1);
System.out.println(wordCount); // {hello=2, world=1}
// ─── 2. BlockingQueue - producer-consumer backbone ───
BlockingQueue<String> queue = new LinkedBlockingQueue<>(10); // bounded
// Producer thread
Thread producer = new Thread(() -> {
String[] tasks = {"task-1", "task-2", "task-3", "POISON"};
for (String t : tasks) {
try {
queue.put(t); // blocks if full
System.out.println("Produced: " + t);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
});
// Consumer thread
Thread consumer = new Thread(() -> {
try {
while (true) {
String task = queue.take(); // blocks if empty
if ("POISON".equals(task)) break;
System.out.println("Consumed: " + task);
}
} catch (InterruptedException e) { Thread.currentThread().interrupt(); }
});
producer.start(); consumer.start();
producer.join(); consumer.join();
// ─── 3. CopyOnWriteArrayList - read-heavy workloads ──
// Every write creates a NEW copy of the array.
// Reads are always lock-free (they see the unmodified copy).
// Best when reads >> writes (event listeners, config lists).
CopyOnWriteArrayList<String> listeners = new CopyOnWriteArrayList<>();
listeners.add("listener-1");
listeners.add("listener-2");
// Safe to iterate while another thread modifies
for (String l : listeners) {
System.out.println("Notifying: " + l);
}
// ─── 4. AtomicInteger / AtomicReference / AtomicLong ─
AtomicInteger counter = new AtomicInteger(0);
counter.incrementAndGet(); // ++, thread-safe
counter.addAndGet(5); // += 5, thread-safe
counter.compareAndSet(6, 10); // CAS: if val==6, set to 10
// AtomicReference - CAS on object references
AtomicReference<String> ref = new AtomicReference<>("initial");
boolean swapped = ref.compareAndSet("initial", "updated");
System.out.println(swapped + " " + ref.get()); // true updated
// ─── 5. ConcurrentLinkedQueue - non-blocking queue ───
// Uses lock-free CAS operations - threads never block, even under contention
ConcurrentLinkedQueue<Integer> clq = new ConcurrentLinkedQueue<>();
clq.offer(1); clq.offer(2); clq.offer(3);
System.out.println(clq.poll()); // 1
// ─── 6. CountDownLatch - wait for N events ───────────
CountDownLatch latch = new CountDownLatch(3);
for (int i = 0; i < 3; i++) {
int id = i;
new Thread(() -> {
System.out.println("Service-" + id + " started");
latch.countDown(); // decrement counter
}).start();
}
latch.await(); // main blocks until count reaches 0
System.out.println("All services ready - starting app");
// ─── 7. CyclicBarrier - all threads reach a point ────
CyclicBarrier barrier = new CyclicBarrier(3, () ->
System.out.println("All workers reached barrier - proceeding"));
for (int i = 0; i < 3; i++) {
int id = i;
new Thread(() -> {
System.out.println("Worker-" + id + " at barrier");
try { barrier.await(); } catch (Exception e) {}
System.out.println("Worker-" + id + " proceeding");
}).start();
}
}
}

- Producer-Consumer: producers put items into a bounded buffer; consumers take them. Decouples production rate from consumption rate. Implemented via BlockingQueue / asyncio.Queue.
- Thread Pool: reuse a fixed set of threads instead of creating/destroying one per task. Bounds memory use and creation overhead. ExecutorService, ThreadPoolExecutor.
- Fork/Join: recursively split large tasks into smaller ones (fork), compute in parallel, combine the results (join). Best for divide-and-conquer algorithms.
- Active Object: method calls are turned into message objects queued to a dedicated worker thread. The caller gets a Future. Decouples call from execution.
- Circuit Breaker: track failures of an external service. After N failures, open the circuit (fail fast). After a timeout, try again (half-open). Prevents cascading failures.
- Pipeline: chain stages connected by queues. Each stage is a worker thread or coroutine. Data flows through: parse, validate, enrich, persist.
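Of these patterns, Circuit Breaker is the only one with no code in this guide, so here is a minimal, illustrative sketch - the class name, thresholds, and state handling are assumptions, not a production implementation.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

public class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;     // N failures trip the breaker
    private final long retryAfterMillis;    // how long to stay OPEN
    private final AtomicInteger failures = new AtomicInteger();
    private final AtomicLong openedAt = new AtomicLong();
    private volatile State state = State.CLOSED;

    public CircuitBreaker(int failureThreshold, long retryAfterMillis) {
        this.failureThreshold = failureThreshold;
        this.retryAfterMillis = retryAfterMillis;
    }

    public synchronized <T> T call(Supplier<T> action, T fallback) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt.get() < retryAfterMillis)
                return fallback;              // fail fast - don't touch the broken service
            state = State.HALF_OPEN;          // timeout elapsed - allow one probe call
        }
        try {
            T result = action.get();
            failures.set(0);                  // success closes the circuit
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            if (failures.incrementAndGet() >= failureThreshold || state == State.HALF_OPEN) {
                state = State.OPEN;           // trip: subsequent calls fail fast
                openedAt.set(System.currentTimeMillis());
            }
            return fallback;
        }
    }

    public State state() { return state; }
}
```

Usage: wrap each remote call in `breaker.call(() -> client.fetch(), cachedFallback)`; once the service recovers, the first successful half-open probe closes the circuit again.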
// ── Concurrency Patterns in Java ─────────────────────────────
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;
// ─── 1. Producer-Consumer ────────────────────────────────────
// Already shown with BoundedBuffer. Here: using BlockingQueue directly.
public class OrderProcessingSystem {
private static final BlockingQueue<Order> orders = new LinkedBlockingQueue<>(50);
private static final AtomicBoolean running = new AtomicBoolean(true);
// Multiple producers - web handlers push orders
static class OrderReceiver implements Runnable {
public void run() {
while (running.get()) {
Order order = receiveFromNetwork(); // blocks on I/O
try { orders.put(order); }
catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}
}
Order receiveFromNetwork() { return new Order(); }
}
// Multiple consumers - worker threads process orders
static class OrderProcessor implements Runnable {
public void run() {
while (running.get() || !orders.isEmpty()) {
try {
Order order = orders.poll(100, TimeUnit.MILLISECONDS);
if (order != null) process(order);
} catch (InterruptedException e) { Thread.currentThread().interrupt(); }
}
}
void process(Order o) { System.out.println("Processing " + o); }
}
}
class Order {} // minimal stand-in so the snippet compiles
// ─── 2. Thread Pool ──────────────────────────────────────────
// Creating a thread per request is expensive. Pools reuse threads.
public class WebServer {
// Executors.newFixedThreadPool(n): n threads, unbounded queue
// Executors.newCachedThreadPool(): grows/shrinks as needed
// Executors.newScheduledThreadPool(n): for scheduled tasks
private final ExecutorService pool = new ThreadPoolExecutor(
4, // corePoolSize
16, // maximumPoolSize
60L, // keepAliveTime
TimeUnit.SECONDS,
new LinkedBlockingQueue<>(100), // work queue, bounded!
new ThreadPoolExecutor.CallerRunsPolicy() // rejection policy
);
public void handleRequest(Runnable request) {
pool.submit(request);
}
public void shutdown() {
pool.shutdown();
try { pool.awaitTermination(30, TimeUnit.SECONDS); }
catch (InterruptedException e) { pool.shutdownNow(); }
}
}
// ─── 3. Fork/Join - divide-and-conquer parallelism ───────────
// (RecursiveTask comes from java.util.concurrent, imported above)
public class ParallelSum extends RecursiveTask<Long> {
private final long[] array;
private final int start, end;
private static final int THRESHOLD = 10_000;
public ParallelSum(long[] array, int start, int end) {
this.array = array; this.start = start; this.end = end;
}
@Override
protected Long compute() {
if (end - start <= THRESHOLD) {
// Small enough - compute directly
long sum = 0;
for (int i = start; i < end; i++) sum += array[i];
return sum;
}
int mid = (start + end) / 2;
ParallelSum left = new ParallelSum(array, start, mid);
ParallelSum right = new ParallelSum(array, mid, end);
left.fork(); // submit left to pool
long rightResult = right.compute(); // compute right directly
long leftResult = left.join(); // wait for left
return leftResult + rightResult;
}
}
// Usage (snippet - assumes import java.util.Arrays):
ForkJoinPool fjPool = ForkJoinPool.commonPool();
long[] data = new long[10_000_000];
Arrays.fill(data, 1L);
long total = fjPool.invoke(new ParallelSum(data, 0, data.length));
System.out.println("Sum: " + total); // 10,000,000
// ─── 4. CompletableFuture - async pipelines ──────────────────
CompletableFuture<String> pipeline =
CompletableFuture
.supplyAsync(() -> fetchUser(1)) // async
.thenApplyAsync(user -> fetchOrders(user)) // chain
.thenApplyAsync(orders -> formatReport(orders))
.exceptionally(e -> "Error: " + e.getMessage());
String result = pipeline.get(5, TimeUnit.SECONDS); // may throw TimeoutException

"Non-blocking I/O lets one thread manage thousands of concurrent connections by never waiting - it starts an I/O operation and processes other work until the OS signals completion."
| Model | Thread while waiting? | Good for | Example |
|---|---|---|---|
| Blocking I/O | Blocked (idle) | Simple code, few connections | socket.read(), file.read() |
| Non-blocking I/O | Returns immediately | Polling, high conn count | O_NONBLOCK, NIO selector |
| Async I/O (callback) | Free โ callback fires on complete | High concurrency, I/O bound | Node.js fs.readFile |
| Async I/O (await) | Coroutine suspended, not blocked | Readable async code | asyncio, CompletableFuture |
// ── Async & Non-Blocking I/O in Java ─────────────────────────
import java.util.concurrent.*;
import java.util.stream.IntStream;
import java.net.http.*;
import java.net.URI;
// ─── CompletableFuture - Java's async pipeline ───────────────
public class AsyncDemo {
public static void main(String[] args) throws Exception {
// ── Basic async task ─────────────────────────────────
CompletableFuture<String> future = CompletableFuture
.supplyAsync(() -> {
simulateDelay(200);
return "user-data";
}); // runs on ForkJoinPool.commonPool()
// Non-blocking - do other work while waiting
System.out.println("Main thread doing other work...");
String result = future.get(); // blocks only when we need the result
System.out.println("Got: " + result);
// ── Chaining: thenApply, thenCompose, thenAccept ──────
CompletableFuture<String> pipeline = CompletableFuture
.supplyAsync(() -> fetchUserId()) // returns int
.thenApplyAsync(id -> fetchUser(id)) // int -> "user-N" (async)
.thenApplyAsync(user -> fetchOrders(user)) // user -> orders string
.thenApplyAsync(orders -> formatSummary(orders)) // -> summary string
.exceptionally(ex -> "Error: " + ex.getMessage());
// ── Parallel composition ──────────────────────────────
CompletableFuture<String> userFuture = CompletableFuture.supplyAsync(() -> fetchUser(1));
CompletableFuture<String> configFuture = CompletableFuture.supplyAsync(() -> fetchConfig());
// Wait for BOTH
CompletableFuture<String> combined = userFuture
.thenCombine(configFuture, (user, cfg) -> user + " + " + cfg);
// Wait for EITHER (race)
CompletableFuture<String> either = userFuture.applyToEither(configFuture, s -> s);
// Wait for ALL N futures
CompletableFuture.allOf(userFuture, configFuture).join();
// ── Non-blocking HTTP client (Java 11+) ───────────────
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create("https://api.example.com/users/1"))
.header("Accept", "application/json")
.GET()
.build();
// Completely non-blocking - no thread blocked waiting
CompletableFuture<String> httpFuture = client
.sendAsync(request, HttpResponse.BodyHandlers.ofString())
.thenApply(HttpResponse::body);
System.out.println(httpFuture.get(5, TimeUnit.SECONDS));
// ── Virtual Threads (Java 21 - Project Loom) ──────────
// Lightweight threads scheduled by the JVM, not the OS.
// You can create MILLIONS of them - each needs only a small
// heap footprint, not a full OS thread stack.
try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
IntStream.range(0, 1_000_000).forEach(i ->
vt.submit(() -> {
Thread.sleep(100); // blocks the virtual thread, not an OS thread
return i * 2;
})
);
} // auto-shutdown - close() waits for submitted tasks
// One virtual thread per request - simple synchronous-style code
// at the throughput of non-blocking I/O!
}
static void simulateDelay(long ms) {
try { Thread.sleep(ms); } catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
static int fetchUserId() { simulateDelay(100); return 1; }
static String fetchUser(int i) { simulateDelay(100); return "user-" + i; }
static String fetchConfig() { simulateDelay(50); return "config"; }
static String fetchOrders(String u) { return "orders-of-" + u; }
static String formatSummary(String o) { return "summary: " + o; }
}

- Java Memory Model (JMM): defines when writes by one thread are guaranteed visible to another. Without synchronization, the JIT compiler and CPU can reorder operations for performance. volatile and locks establish happens-before ordering.
- Cache coherence: modern CPUs have L1/L2/L3 caches per core. A write on core 1 may not be immediately visible on core 2 without a memory barrier. volatile forces the write to be published before subsequent reads.
- CAS (Compare-And-Swap): read the value, compute a new value, atomically swap only if the current value still matches the expected one. If another thread changed it, retry. The foundation of all Java Atomic classes.
- False sharing: CPU cache lines are typically 64 bytes. If two threads modify independent variables that share a cache line, they invalidate each other's cache unnecessarily. The @Contended annotation pads fields apart.
- Virtual threads: lightweight JVM-managed threads. Create millions of them - the OS can't handle that many native threads. Synchronous-style code with non-blocking throughput; a major shift for I/O-bound servers.
- ContextVar (Python): Python 3.7+ ContextVar provides async-safe per-context storage. Unlike threading.local(), it works correctly with async code, where multiple coroutines interleave on one thread.
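The CAS retry loop described above can be sketched directly with AtomicLong - a lock-free "track the maximum" update (the class name LockFreeMax is illustrative): read, compute, attempt the swap, and retry if another thread won the race.

```java
import java.util.concurrent.atomic.AtomicLong;

public class LockFreeMax {
    private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

    public void update(long sample) {
        while (true) {
            long current = max.get();          // 1. read the current value
            if (sample <= current) return;     // nothing to do
            // 2. attempt the atomic swap; fails if another thread
            //    changed 'max' between our read and this call
            if (max.compareAndSet(current, sample)) return;
            // 3. lost the race - loop and retry against the new value
        }
    }

    public long get() { return max.get(); }

    public static void main(String[] args) throws InterruptedException {
        LockFreeMax m = new LockFreeMax();
        Runnable writer = () -> { for (long i = 0; i < 100_000; i++) m.update(i); };
        Thread t1 = new Thread(writer), t2 = new Thread(writer);
        t1.start(); t2.start(); t1.join(); t2.join();
        // no locks taken, yet no update is ever lost
        System.out.println("max = " + m.get());
    }
}
```

This read-compute-swap-retry shape is exactly what incrementAndGet, merge, and the other Atomic operations do internally.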
// ── Advanced - Java Memory Model & happens-before ────────────
import java.util.concurrent.atomic.*;
import java.util.concurrent.locks.*;
// ─── 1. Java Memory Model (JMM) ──────────────────────────────
// Each thread has its own working memory (CPU cache/registers).
// The JMM defines WHEN a write by one thread becomes VISIBLE to another.
// ─── 2. volatile - guarantees visibility + ordering ──────────
class StopFlag {
// WITHOUT volatile: JIT may cache 'running' in a register.
// The worker thread may NEVER see running=false.
private volatile boolean running = true; // volatile fixes this
public void stop() { running = false; }
public boolean isRunning() { return running; }
}
// volatile DOES NOT guarantee atomicity for compound operations!
class BadVolatileCounter {
private volatile int counter = 0;
// counter++ reads, increments, writes - NOT atomic even with volatile!
public void increment() { counter++; } // STILL a race condition
}
// Use AtomicInteger for atomic compound operations
class GoodAtomicCounter {
private final AtomicInteger counter = new AtomicInteger(0);
public void increment() { counter.incrementAndGet(); } // atomic
public int get() { return counter.get(); }
}
// ─── 3. happens-before relationship ──────────────────────────
// A happens-before B means: all writes in A are visible in B.
//
// Rules that establish happens-before:
// - Thread start: everything before start() is visible inside run()
// - Thread join: everything in run() is visible after join() returns
// - synchronized: unlock of monitor X happens-before a later lock of X
// - volatile: a volatile write happens-before a subsequent volatile read of the same variable
// - interruption: Thread.interrupt() happens-before the interrupted thread detects the interrupt
// ─── 4. Double-Checked Locking - common pattern, subtle bug ──
// WITHOUT volatile - broken in Java!
class SingletonBroken {
private static SingletonBroken instance;
public static SingletonBroken getInstance() {
if (instance == null) {
synchronized (SingletonBroken.class) {
if (instance == null)
instance = new SingletonBroken();
// PROBLEM: object creation is 3 steps:
// 1. Allocate memory
// 2. Call constructor
// 3. Assign to instance
// JVM can reorder steps 2 and 3!
// Another thread sees instance != null but uninitialized object
}
}
return instance;
}
}
// CORRECT double-checked locking - volatile prevents reordering
class SingletonCorrect {
private static volatile SingletonCorrect instance; // volatile!
public static SingletonCorrect getInstance() {
if (instance == null) {
synchronized (SingletonCorrect.class) {
if (instance == null)
instance = new SingletonCorrect(); // safe with volatile
}
}
return instance;
}
}
// BEST: Initialisation-on-Demand Holder - lazy, thread-safe, no sync needed
class SingletonBest {
private SingletonBest() {}
private static class Holder {
// Class loading is atomic - the JVM guarantees this is initialised once
static final SingletonBest INSTANCE = new SingletonBest();
}
public static SingletonBest getInstance() { return Holder.INSTANCE; }
}
// ─── 5. StampedLock - optimistic reads ───────────────────────
// Java 8+ - optimistic read mode: no blocking, no lock state at all
// (snippet - assume a surrounding class with a 'double balance' field)
StampedLock sl = new StampedLock();
// Optimistic read: no lock taken - just a stamp
long stamp = sl.tryOptimisticRead();
double balance = this.balance; // read WITHOUT a lock
if (!sl.validate(stamp)) { // a write happened since our read...
stamp = sl.readLock(); // fall back to a real read lock
try { balance = this.balance; }
finally { sl.unlockRead(stamp); }
}

"13 hand-picked questions across Beginner, Intermediate, and Advanced levels - with detailed, production-grade answers."
| Problem | Root Cause |
|---|---|
| Race Condition | Shared mutable state, unprotected compound ops |
| Deadlock | Circular lock acquisition order |
| Livelock | Threads repeatedly yield to each other |
| Starvation | Some threads never get scheduled |
| Visibility bug | CPU cache / compiler reordering |
| Missed notification | Signal sent before thread waits |
| Thread leak | Threads not properly terminated |
Concurrency patterns are closely tied to design patterns. These patterns directly address concurrent system design: