Testing Patterns

Problem

Effective testing requires understanding Rust’s test framework, organizing tests properly, and using appropriate testing strategies for different scenarios.

Solution

1. Unit Tests

pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_add() {
        assert_eq!(add(2, 3), 5);
    }

    #[test]
    fn test_add_negative() {
        assert_eq!(add(-1, 1), 0);
    }

    #[test]
    #[should_panic(expected = "overflow")]
    fn test_overflow() {
        panic!("overflow");
    }

    #[test]
    fn test_result() -> Result<(), String> {
        if add(2, 2) == 4 {
            Ok(())
        } else {
            Err(String::from("Addition failed"))
        }
    }

    #[test]
    #[ignore]  // Run with: cargo test -- --ignored
    fn expensive_test() {
        // Long-running test
    }
}

// cargo test
// cargo test test_add  // Run specific test
// cargo test -- --nocapture  // Show println! output

2. Integration Tests

// tests/integration_test.rs
use mylib::User;

#[test]
fn test_user_creation() {
    let user = User::new("alice", "alice@example.com");
    assert_eq!(user.username(), "alice");
}

#[test]
fn test_user_validation() {
    let result = User::validate_email("invalid-email");
    assert!(result.is_err());
}

// cargo test --test integration_test

3. Property-Based Testing with proptest

// Cargo.toml
// [dev-dependencies]
// proptest = "1.5"

use proptest::prelude::*;

proptest! {
    #[test]
    fn test_add_commutative(a in -10_000..10_000i32, b in -10_000..10_000i32) {
        // Bounded ranges keep `a + b` from overflowing, which would
        // panic in debug builds and fail the property spuriously.
        prop_assert_eq!(add(a, b), add(b, a));
    }

    #[test]
    fn test_reverse_reverse(s in ".*") {
        let reversed_twice: String = s.chars()
            .rev()
            .collect::<String>()
            .chars()
            .rev()
            .collect();
        prop_assert_eq!(s, reversed_twice);
    }

    #[test]
    fn test_vec_len(v in prop::collection::vec(any::<i32>(), 0..100)) {
        let len = v.len();
        let cloned = v.clone();
        prop_assert_eq!(len, cloned.len());
    }
}

4. Benchmarking with Criterion

// Cargo.toml
// [dev-dependencies]
// criterion = "0.5"

// benches/my_benchmark.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn criterion_benchmark(c: &mut Criterion) {
    c.bench_function("fib 20", |b| {
        b.iter(|| fibonacci(black_box(20)))
    });
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);

// cargo bench

5. Mocking and Test Doubles

trait Database {
    fn get_user(&self, id: u64) -> Option<User>;
}

struct MockDatabase {
    users: std::collections::HashMap<u64, User>,
}

impl Database for MockDatabase {
    fn get_user(&self, id: u64) -> Option<User> {
        self.users.get(&id).cloned()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_service_with_mock() {
        let mut mock = MockDatabase {
            users: std::collections::HashMap::new(),
        };
        mock.users.insert(1, User::new("alice", "alice@example.com"));

        let service = UserService::new(Box::new(mock));
        let user = service.find_user(1).unwrap();
        assert_eq!(user.username(), "alice");
    }
}
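
The test above assumes User and UserService types that this recipe does not define. A minimal sketch of what they might look like (names and fields are illustrative):

// Hypothetical supporting types assumed by the test above.
#[derive(Clone)]
pub struct User {
    username: String,
    email: String,
}

impl User {
    pub fn new(username: &str, email: &str) -> Self {
        Self { username: username.to_string(), email: email.to_string() }
    }

    pub fn username(&self) -> &str {
        &self.username
    }
}

struct UserService {
    db: Box<dyn Database>,
}

impl UserService {
    fn new(db: Box<dyn Database>) -> Self {
        Self { db }
    }

    // Delegates to whatever Database implementation was injected,
    // which is what lets the test swap in MockDatabase.
    fn find_user(&self, id: u64) -> Option<User> {
        self.db.get_user(id)
    }
}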

How It Works

Rust Test Framework Architecture

Rust’s built-in test framework keeps test code out of release builds entirely while automating discovery and execution:

  1. Compilation: Tests compile only when running cargo test (controlled by #[cfg(test)])
  2. Discovery: Test runner automatically discovers functions marked with #[test]
  3. Isolation: Each test runs in its own thread by default
  4. Parallelization: Tests execute concurrently (control with --test-threads; see the commands after this list)
  5. Reporting: Framework captures output and reports pass/fail with timing
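
For example, tests that mutate shared state (a global, a fixed port, an environment variable) can be forced to run serially; these are standard libtest flags:

// cargo test -- --test-threads=1   // Run tests serially
// cargo test -- --show-output      // Show stdout of passing tests
// cargo test -- --ignored          // Run only #[ignore] tests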

Test Organization

Tests fall into three categories:

Unit Tests (#[cfg(test)] mod tests):

  • Live alongside source code in the same file
  • Test private functions and internal implementation (see the sketch after this list)
  • Compiled only during testing
  • Access to module internals
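
Because the tests module is a child of the module under test, use super::* brings private items into scope. A small illustration with a hypothetical private helper:

// Private helper: not part of the public API.
fn is_valid_port(port: u16) -> bool {
    port >= 1024
}

#[cfg(test)]
mod tests {
    use super::*;  // Brings private items into scope

    #[test]
    fn test_private_helper() {
        assert!(is_valid_port(8080));
        assert!(!is_valid_port(80));
    }
}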

Integration Tests (tests/ directory):

  • Separate crate that imports your library
  • Test public API only
  • Each file in tests/ compiles as a separate crate (shared helpers go in a subdirectory module; see the sketch after this list)
  • Cannot directly test binary crates (only libraries); the usual workaround is a thin binary over a library crate
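
Since each top-level file in tests/ is its own crate, shared helpers are conventionally placed in a subdirectory module, which the test runner does not compile as a test crate of its own. A sketch, with a hypothetical setup helper:

// tests/common/mod.rs
pub fn setup() {
    // Shared initialization for integration tests...
}

// tests/another_test.rs
mod common;

#[test]
fn test_with_shared_setup() {
    common::setup();
    // Test body...
}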

Documentation Tests (doc comments):

  • Code examples in /// comments
  • Ensure documentation stays up-to-date
  • Compile and run during cargo test --doc (example below)
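
A minimal documentation test for the add function shown earlier; rustdoc compiles and runs the fenced example during cargo test --doc:

/// Adds two numbers.
///
/// # Examples
///
/// ```
/// assert_eq!(mylib::add(2, 3), 5);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}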

Assertion Macros

The standard library provides assertion macros with automatic failure messages:

  • assert!(condition) - Boolean assertion
  • assert_eq!(left, right) - Equality (operands must implement PartialEq + Debug)
  • assert_ne!(left, right) - Inequality
  • debug_assert!() - Only in debug builds (zero cost in release)

Custom messages: assert!(x > 0, "x must be positive, got {}", x)
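
For instance:

#[test]
fn test_assertions() {
    let x = 5;
    assert!(x > 0, "x must be positive, got {}", x);
    assert_eq!(x + 1, 6);    // On failure, prints both values via Debug
    assert_ne!(x, 0);
    debug_assert!(x < 100);  // Checked in debug builds only
}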

Property-Based Testing Mechanism

proptest generates random inputs to test properties:

  1. Strategy Definition: Define input generation strategy (e.g., any::<i32>(), 0..100)
  2. Random Input: Generate hundreds of random test cases
  3. Property Verification: Run test with each input
  4. Shrinking: If a test fails, minimize the failing input to the smallest case (demonstrated below)
  5. Regression: Failed cases saved to file to prevent regressions

Shrinking Example:

Test fails with input: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Shrink to: [1, 2, 3, 4, 5]
Shrink to: [1, 2, 3]
Shrink to: [1]
Minimal failing case: [1]
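
To watch shrinking happen, write a property that is deliberately false (no_forbidden_value is a hypothetical buggy function):

use proptest::prelude::*;

// Deliberately buggy: claims no vector ever contains 42.
fn no_forbidden_value(v: &[i32]) -> bool {
    !v.contains(&42)
}

proptest! {
    #[test]
    fn test_shrinking_demo(v in prop::collection::vec(any::<i32>(), 0..100)) {
        // proptest finds a counterexample, shrinks it toward [42],
        // and records it under proptest-regressions/ for future runs.
        prop_assert!(no_forbidden_value(&v));
    }
}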

Benchmarking Statistics

Criterion provides statistical analysis of benchmarks:

  1. Warmup: Run benchmark multiple times to warm caches
  2. Measurement: Collect many samples of execution time
  3. Statistical Analysis: Calculate mean, standard deviation, outliers
  4. Comparison: Compare against a previous baseline (see the commands after this list)
  5. Reporting: Generate HTML reports with graphs
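
The baseline comparison in step 4 can be driven from the command line; these flags are Criterion's, passed through after the -- separator:

// cargo bench -- --save-baseline before   // Record a named baseline
// cargo bench -- --baseline before        // Compare runs against it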

black_box() prevents the compiler from optimizing away the benchmarked computation, which would otherwise make the measured timings meaningless.

Variations

1. Test Fixtures and Setup

Use helper functions for common test setup:

#[cfg(test)]
mod tests {
    use super::*;

    fn setup_test_database() -> TestDatabase {
        let db = TestDatabase::new();
        db.seed_data();
        db
    }

    #[test]
    fn test_query() {
        let db = setup_test_database();
        let result = db.query("SELECT * FROM users");
        assert!(!result.is_empty());
    }

    #[test]
    fn test_insert() {
        let db = setup_test_database();
        db.insert_user("alice");
        assert_eq!(db.count_users(), 1);
    }
}

Trade-offs: Reduces duplication but may hide test dependencies.

2. Snapshot Testing with insta

Test complex outputs against saved snapshots:

// Cargo.toml
// [dev-dependencies]
// insta = "1.40"

use insta::assert_snapshot;

#[test]
fn test_render_output() {
    let output = render_template("Hello {{ name }}!", "Alice");
    assert_snapshot!(output);
}

// First run creates snapshot file
// Subsequent runs compare against snapshot
// Update with: cargo insta review

Trade-offs: Great for complex outputs but snapshots can become stale.

3. Async Testing with tokio::test

Test async code with async test runtime:

// Cargo.toml
// [dev-dependencies]
// tokio = { version = "1.41", features = ["macros", "rt-multi-thread", "test-util"] }

#[tokio::test]
async fn test_async_function() {
    let result = fetch_data().await;
    assert!(result.is_ok());
}

#[tokio::test]
async fn test_timeout() {
    use std::time::Duration;

    let result = tokio::time::timeout(
        Duration::from_secs(1),
        slow_operation()
    ).await;
    assert!(result.is_err());  // Should timeout
}

Trade-offs: Necessary for async code but adds runtime overhead.

4. Parameterized Tests with rstest

Run same test with different inputs:

// Cargo.toml
// [dev-dependencies]
// rstest = "0.23"

use rstest::rstest;

#[rstest]
#[case(2, 3, 5)]
#[case(0, 0, 0)]
#[case(-1, 1, 0)]
#[case(100, 200, 300)]
fn test_add(#[case] a: i32, #[case] b: i32, #[case] expected: i32) {
    assert_eq!(add(a, b), expected);
}

Trade-offs: More concise than separate test functions, though a failing case's meaning can be harder to read from its generated name (case_1, case_2, ...).

5. Mutation Testing with cargo-mutants

Verify test quality by introducing bugs:

// Install: cargo install cargo-mutants

// Run mutation testing
// cargo mutants

// Mutants: Code changes that should be caught by tests
// Example mutations:
// - Change + to -
// - Change > to >=
// - Remove return statements
// - Change true to false

// Good tests catch all mutations
// Surviving mutants indicate missing test coverage
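
As an illustration, a weak test that lets mutants survive next to a stronger one that kills them (abs_diff is a hypothetical example function):

// Function under test.
fn abs_diff(a: i32, b: i32) -> i32 {
    if a > b { a - b } else { b - a }
}

#[test]
fn test_abs_diff_weak() {
    // Only checks a == b: mutants that flip `>` to `<` or swap the
    // branches still return 0 here, so they survive undetected.
    assert_eq!(abs_diff(3, 3), 0);
}

#[test]
fn test_abs_diff_strong() {
    // Asymmetric inputs kill mutants that flip the comparison
    // or alter either subtraction.
    assert_eq!(abs_diff(7, 2), 5);
    assert_eq!(abs_diff(2, 7), 5);
}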

Trade-offs: Finds gaps in test coverage but slow to run.

6. Code Coverage with cargo-tarpaulin

Measure test coverage:

// Install: cargo install cargo-tarpaulin

// Run coverage
// cargo tarpaulin --out Html --output-dir coverage

// Generates HTML report showing:
// - Line coverage percentage
// - Uncovered lines highlighted
// - Function coverage
// - Branch coverage

Trade-offs: Identifies untested code but high coverage doesn’t guarantee quality.

Common Pitfalls

1. Testing Implementation Instead of Behavior

Problem: Tests coupled to implementation details:

// Bad: Tests internal implementation
#[test]
fn test_internal_cache() {
    let cache = MyStruct::new();
    // Accessing private field or implementation detail
    assert!(cache.internal_cache.is_empty());
}

Solution: Test observable behavior through public API:

// Good: Tests public behavior
#[test]
fn test_cache_behavior() {
    let mut cache = MyStruct::new();
    cache.set("key", "value");
    assert_eq!(cache.get("key"), Some(&"value"));
    assert_eq!(cache.get("missing"), None);
}

2. Not Using Result in Tests

Problem: Using unwrap or panic in tests:

// Bad: Unclear error messages
#[test]
fn test_parse() {
    let result = parse_config("config.toml").unwrap();  // Panic with no context!
    assert_eq!(result.port, 8080);
}

Solution: Return Result for better error messages:

// Good: Descriptive errors with ?
#[test]
fn test_parse() -> Result<(), Box<dyn std::error::Error>> {
    let result = parse_config("config.toml")?;  // Error includes context
    assert_eq!(result.port, 8080);
    Ok(())
}

3. Flaky Tests with Time Dependencies

Problem: Tests depend on system time or execution timing:

// Bad: Race condition
#[test]
fn test_async_operation() {
    spawn_async_task();
    std::thread::sleep(Duration::from_millis(100));  // Hope it finishes!
    assert!(task_completed());  // Flaky!
}

Solution: Use synchronization or deterministic time:

// Good: Wait for completion signal
#[tokio::test]
async fn test_async_operation() {
    use tokio::sync::oneshot;

    let (tx, rx) = oneshot::channel();

    spawn_async_task(move || {
        // Task work...
        tx.send(()).unwrap();
    });

    rx.await.unwrap();  // Wait for actual completion
    assert!(task_completed());
}

// Or use tokio's time mocking
#[tokio::test(start_paused = true)]
async fn test_with_time_control() {
    let task = tokio::spawn(async {
        tokio::time::sleep(Duration::from_secs(60)).await;
        "done"
    });

    tokio::time::advance(Duration::from_secs(60)).await;  // Fast-forward
    let result = task.await.unwrap();
    assert_eq!(result, "done");
}

4. Not Cleaning Up Test Resources

Problem: Tests leave behind files or connections:

// Bad: Leaves test files
#[test]
fn test_file_write() {
    write_file("test.txt", "data");
    let content = read_file("test.txt");
    assert_eq!(content, "data");
    // test.txt left behind!
}

Solution: Clean up in test or use temp directories:

// Good: Use temp directory (dev-dependency: tempfile = "3")
#[test]
fn test_file_write() -> std::io::Result<()> {
    let temp_dir = tempfile::tempdir()?;
    let file_path = temp_dir.path().join("test.txt");

    write_file(&file_path, "data");
    let content = read_file(&file_path);
    assert_eq!(content, "data");

    // temp_dir automatically cleaned up on drop
    Ok(())
}

5. Ignoring Test Performance

Problem: Slow tests that block development:

// Bad: Unnecessarily slow test
#[test]
fn test_complex_scenario() {
    // Sets up entire database
    let db = setup_full_production_database();
    // Runs expensive migrations
    db.run_all_migrations();
    // Tests one simple query
    let result = db.query_simple();
    assert!(result.is_ok());
}

Solution: Use minimal setup or move to integration tests:

// Good: Minimal setup for unit test
#[test]
fn test_simple_query() {
    let db = MockDatabase::with_test_data();
    let result = db.query_simple();
    assert!(result.is_ok());
}

// Move expensive tests to integration tests that run less frequently
#[test]
#[ignore]  // Run separately with: cargo test -- --ignored
fn test_full_migration_integration() {
    let db = setup_full_production_database();
    db.run_all_migrations();
    // Comprehensive testing...
}

6. Over-Mocking

Problem: Mocking everything reduces test value:

// Bad: Too much mocking
#[test]
fn test_user_service() {
    let mock_db = MockDatabase::new();
    let mock_cache = MockCache::new();
    let mock_logger = MockLogger::new();
    let mock_metrics = MockMetrics::new();

    // Test just wires the mocks together and never exercises real logic
    // (assumes handle-style mocks that can be queried after a clone is moved)
    let service = UserService::new(mock_db.clone(), mock_cache, mock_logger, mock_metrics);
    service.do_something();
    assert!(mock_db.was_called());
}

Solution: Mock only external dependencies, test real logic:

// Good: Mock external I/O, test business logic
#[test]
fn test_user_service() {
    let mock_db = MockDatabase::with_users(vec![
        User::new("alice", "alice@example.com"),
    ]);

    let service = UserService::new(mock_db);

    // Test actual business logic
    let result = service.find_user_by_email("alice@example.com");
    assert!(result.is_some());
    assert_eq!(result.unwrap().username(), "alice");
}

Related Patterns

See Write Effective Tests for test writing best practices, Debug and Logging for debugging test failures, Code Documentation for documentation tests, and Async Programming Patterns for async testing.

Tutorial: See Beginner Tutorial for testing basics.

Cookbook: See Cookbook Recipe 15 - Writing Unit Tests, Recipe 16 - Integration Testing for complete examples.
