Fearless Concurrency

Welcome to the Fearless Concurrency series! Rust's approach to concurrent programming is distinctive among systems languages: it guarantees freedom from data races at compile time, catching them before your code ever runs.

What is Fearless Concurrency?

Fearless concurrency means you can write concurrent code without worrying about:

  • Data Races - Two threads accessing the same memory without synchronization, at least one of them writing
  • Race Conditions - Program behavior that depends on thread timing (Rust rules out data races; higher-level race conditions and deadlocks remain your responsibility)
  • Memory Safety Issues - Dangling pointers and invalid memory access in concurrent code

Rust achieves this through:

  • Ownership System - Enforces safe memory access
  • Type System - Send and Sync traits
  • Borrow Checker - Validates concurrent access at compile time

Learning Path

```mermaid
graph LR
    A[Concurrency Basics] --> B[Concurrency Patterns]
    B --> C[Async Runtime]

    A --> A1[Threads]
    A --> A2[Message Passing]
    A --> A3[Shared State]

    B --> B1[Thread Pools]
    B --> B2[Channel Patterns]
    B --> B3[Actor Model]

    C --> C1[Tokio]
    C --> C2[Async-std]
    C --> C3[Futures]
```

Core Concepts

Send Trait

A type is Send if ownership of a value of that type can be safely transferred to another thread.

```rust
use std::thread;

fn main() {
    let v = vec![1, 2, 3];

    // `move` transfers ownership of `v` into the spawned thread.
    let handle = thread::spawn(move || {
        println!("Here's a vector: {:?}", v);
    });

    handle.join().unwrap();
}
```

Sync Trait

A type is Sync if references to it can be safely shared between threads (formally, T is Sync exactly when &T is Send).

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![1, 2, 3]);

    let handles: Vec<_> = (0..3)
        .map(|i| {
            let data = Arc::clone(&data);
            thread::spawn(move || {
                println!("Thread {} sees: {:?}", i, data);
            })
        })
        .collect();

    // Without joining, main could exit before the threads get to run.
    for handle in handles {
        handle.join().unwrap();
    }
}
```

Concurrency Models

1. Message Passing

"Communicating Sequential Processes" - threads communicate by sending messages.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let val = String::from("hi");
        tx.send(val).unwrap();
    });

    let received = rx.recv().unwrap();
    println!("Got: {}", received);
}
```

2. Shared State

Multiple threads share access to synchronized data.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}
```

3. Async/Await

Zero-cost asynchronous programming with Futures.

```rust
// #[tokio::main] requires the tokio crate in Cargo.toml,
// e.g. tokio = { version = "1", features = ["macros", "rt"] }.

async fn hello() {
    println!("Hello");
    async_world().await;
}

async fn async_world() {
    println!("World");
}

#[tokio::main]
async fn main() {
    hello().await;
}
```

Common Patterns

Thread Pool

```rust
use std::thread;

// Fragment: a worker owns the handle of the thread that runs its jobs.
// A full pool also needs a shared job channel (mpsc + Arc<Mutex<...>>).
struct Worker {
    id: usize,
    thread: thread::JoinHandle<()>,
}
```

Channel Patterns

  • Multiple Producer, Single Consumer (mpsc)
  • Rendezvous Channels - Both sides wait
  • Unbounded Channels - No buffer limit

Actor Model

Each actor has its own state and processes messages sequentially.

Performance Considerations

  • Thread spawning has overhead (use thread pools)
  • Channels have synchronization overhead
  • Mutex contention can hurt performance
  • Lock-free algorithms for high-contention scenarios

Best Practices

  1. Prefer message passing over shared state when possible
  2. Minimize lock scope - hold locks for the shortest time
  3. Avoid deadlocks - always acquire locks in the same order
  4. Use Arc for shared ownership - reference counting for multiple threads
  5. Choose appropriate primitives - Mutex vs RwLock vs Atomic types
