[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-deep-evm-24-context-propagation-async-rust":3},{"article":4,"author":51},{"id":5,"category_id":6,"title":7,"slug":8,"excerpt":9,"content_md":10,"content_html":11,"locale":12,"author_id":13,"published":14,"published_at":15,"meta_title":16,"meta_description":17,"focus_keyword":18,"og_image":19,"canonical_url":19,"robots_meta":20,"created_at":15,"updated_at":15,"tags":21,"category_name":31,"related_articles":32},"d0000000-0000-0000-0000-000000000124","a0000000-0000-0000-0000-000000000006","Deep EVM #24: Context Propagation in Async Rust — Deadlines, Cancellation, and Tracing","deep-evm-24-context-propagation-async-rust","Implement Go-style context propagation in async Rust with deadlines, cancellation tokens, and tracing span propagation across async boundaries.","## The Missing Context\n\nGo has `context.Context` — a request-scoped value that carries deadlines, cancellation signals, and key-value pairs across API boundaries and goroutines. Every well-written Go function takes a `ctx context.Context` as its first parameter.\n\nRust has no built-in equivalent. Tokio provides cancellation via `CancellationToken` and timeouts via `tokio::time::timeout`, but there is no unified context type that propagates deadlines, cancellation, and metadata through an async call chain. 
Let us build one.\n\n## Building a Context Type\n\nOur context needs three capabilities: deadlines, cancellation, and key-value metadata:\n\n```rust\nuse std::sync::Arc;\nuse std::time::{Duration, Instant};\nuse tokio_util::sync::CancellationToken;\n\n#[derive(Clone)]\npub struct Context {\n    deadline: Option\u003CInstant>,\n    cancel_token: CancellationToken,\n    metadata: Arc\u003CContextMetadata>,\n}\n\nstruct ContextMetadata {\n    request_id: String,\n    trace_id: Option\u003CString>,\n    parent: Option\u003CArc\u003CContextMetadata>>,\n    values: std::collections::HashMap\u003CString, String>,\n}\n\nimpl Context {\n    \u002F\u002F\u002F Create a background context with no deadline or cancellation.\n    pub fn background() -> Self {\n        Self {\n            deadline: None,\n            cancel_token: CancellationToken::new(),\n            metadata: Arc::new(ContextMetadata {\n                request_id: uuid::Uuid::new_v4().to_string(),\n                trace_id: None,\n                parent: None,\n                values: std::collections::HashMap::new(),\n            }),\n        }\n    }\n\n    \u002F\u002F\u002F Create a child context with a deadline.\n    pub fn with_deadline(&self, deadline: Instant) -> Self {\n        let effective_deadline = match self.deadline {\n            Some(existing) => existing.min(deadline),\n            None => deadline,\n        };\n        Self {\n            deadline: Some(effective_deadline),\n            cancel_token: self.cancel_token.child_token(),\n            metadata: self.metadata.clone(),\n        }\n    }\n\n    \u002F\u002F\u002F Create a child context with a timeout relative to now.\n    pub fn with_timeout(&self, duration: Duration) -> Self {\n        self.with_deadline(Instant::now() + duration)\n    }\n\n    \u002F\u002F\u002F Cancel this context and all children.\n    pub fn cancel(&self) {\n        self.cancel_token.cancel();\n    }\n\n    \u002F\u002F\u002F Check if 
the context is still valid.\n    pub fn is_done(&self) -> bool {\n        if self.cancel_token.is_cancelled() {\n            return true;\n        }\n        if let Some(deadline) = self.deadline {\n            return Instant::now() >= deadline;\n        }\n        false\n    }\n\n    \u002F\u002F\u002F Time remaining until deadline, or None if no deadline.\n    pub fn remaining(&self) -> Option\u003CDuration> {\n        self.deadline.map(|d| d.saturating_duration_since(Instant::now()))\n    }\n\n    \u002F\u002F\u002F Get the request ID for tracing.\n    pub fn request_id(&self) -> &str {\n        &self.metadata.request_id\n    }\n}\n```\n\n## Deadline-Aware Operations\n\nWrap async operations with context-aware timeouts:\n\n```rust\nimpl Context {\n    \u002F\u002F\u002F Run a future with this context's deadline.\n    \u002F\u002F\u002F Returns Err if the context expires before the future completes.\n    pub async fn run\u003CF, T>(&self, future: F) -> Result\u003CT, ContextError>\n    where\n        F: std::future::Future\u003COutput = T>,\n    {\n        if self.is_done() {\n            return Err(ContextError::DeadlineExceeded);\n        }\n\n        let cancel = self.cancel_token.clone();\n\n        match self.deadline {\n            Some(deadline) => {\n                let timeout = deadline.saturating_duration_since(Instant::now());\n                tokio::select! {\n                    result = future => Ok(result),\n                    _ = tokio::time::sleep(timeout) => {\n                        Err(ContextError::DeadlineExceeded)\n                    }\n                    _ = cancel.cancelled() => {\n                        Err(ContextError::Cancelled)\n                    }\n                }\n            }\n            None => {\n                tokio::select! 
{\n                    result = future => Ok(result),\n                    _ = cancel.cancelled() => {\n                        Err(ContextError::Cancelled)\n                    }\n                }\n            }\n        }\n    }\n}\n\n#[derive(Debug, thiserror::Error)]\npub enum ContextError {\n    #[error(\"context deadline exceeded\")]\n    DeadlineExceeded,\n    #[error(\"context cancelled\")]\n    Cancelled,\n}\n```\n\nUsage in a service:\n\n```rust\nasync fn fetch_user(ctx: &Context, db: &PgPool, id: i64) -> Result\u003CUser> {\n    ctx.run(async {\n        sqlx::query_as::\u003C_, User>(\"SELECT * FROM users WHERE id = $1\")\n            .bind(id)\n            .fetch_one(db)\n            .await\n    }).await\n    .map_err(|e| match e {\n        ContextError::DeadlineExceeded => {\n            anyhow::anyhow!(\"Database query timed out\")\n        }\n        ContextError::Cancelled => {\n            anyhow::anyhow!(\"Request was cancelled\")\n        }\n    })?\n    .map_err(Into::into)\n}\n```\n\n## Cancellation with CancellationToken\n\nTokio's `CancellationToken` supports hierarchical cancellation — cancelling a parent token automatically cancels all children:\n\n```rust\nuse tokio_util::sync::CancellationToken;\n\nasync fn handle_request(parent_token: CancellationToken) {\n    let child_token = parent_token.child_token();\n\n    \u002F\u002F Spawn a subtask with the child token\n    let handle = tokio::spawn(async move {\n        tokio::select! 
{\n            result = do_expensive_work() => {\n                tracing::info!(\"Work completed: {:?}\", result);\n            }\n            _ = child_token.cancelled() => {\n                tracing::info!(\"Work cancelled\");\n            }\n        }\n    });\n\n    \u002F\u002F If parent is cancelled, child is automatically cancelled\n    tokio::time::sleep(Duration::from_secs(5)).await;\n    parent_token.cancel(); \u002F\u002F Cancels child_token too\n\n    handle.await.unwrap();\n}\n```\n\n### Graceful Shutdown with Cancellation\n\nUse a root cancellation token for coordinated shutdown across all services:\n\n```rust\n#[tokio::main]\nasync fn main() -> anyhow::Result\u003C()> {\n    let shutdown = CancellationToken::new();\n\n    \u002F\u002F Spawn services with child tokens\n    let http_server = tokio::spawn(\n        run_http_server(shutdown.child_token())\n    );\n    let block_watcher = tokio::spawn(\n        run_block_watcher(shutdown.child_token())\n    );\n    let metrics_server = tokio::spawn(\n        run_metrics_server(shutdown.child_token())\n    );\n\n    \u002F\u002F Wait for shutdown signal\n    tokio::signal::ctrl_c().await?;\n    tracing::info!(\"Shutdown signal received\");\n\n    \u002F\u002F Cancel all services\n    shutdown.cancel();\n\n    \u002F\u002F Wait for graceful shutdown with a timeout\n    let timeout = Duration::from_secs(30);\n    tokio::time::timeout(timeout, async {\n        let _ = tokio::join!(http_server, block_watcher, metrics_server);\n    }).await.ok();\n\n    tracing::info!(\"Shutdown complete\");\n    Ok(())\n}\n```\n\n## tokio::select! 
for Cancellation Patterns\n\n`tokio::select!` is the primary mechanism for responding to cancellation:\n\n```rust\nasync fn process_with_cancellation(\n    ctx: &Context,\n    items: Vec\u003CItem>,\n) -> Result\u003CVec\u003CProcessedItem>> {\n    \u002F\u002F Capture the count before the loop moves `items`\n    let total = items.len();\n    let mut results = Vec::with_capacity(total);\n\n    for item in items {\n        \u002F\u002F Check context before each item\n        if ctx.is_done() {\n            tracing::warn!(\n                processed = results.len(),\n                remaining = total - results.len(),\n                \"Context expired, returning partial results\"\n            );\n            break;\n        }\n\n        let result = ctx.run(process_item(&item)).await?;\n        results.push(result);\n    }\n\n    Ok(results)\n}\n```\n\n### The Drop Guard Pattern\n\nEnsure cleanup happens even when a future is cancelled:\n\n```rust\nstruct CleanupGuard {\n    resource: Option\u003CTempResource>,\n}\n\nimpl Drop for CleanupGuard {\n    fn drop(&mut self) {\n        if let Some(resource) = self.resource.take() {\n            \u002F\u002F Synchronous cleanup\n            resource.release();\n            tracing::debug!(\"Resource cleaned up on drop\");\n        }\n    }\n}\n\nasync fn work_with_cleanup() -> Result\u003C()> {\n    let resource = acquire_resource().await?;\n    let mut guard = CleanupGuard {\n        resource: Some(resource.clone()),\n    };\n\n    \u002F\u002F Even if this future is cancelled (dropped),\n    \u002F\u002F the guard's Drop runs and cleans up\n    do_work(&resource).await?;\n\n    \u002F\u002F Successful completion — prevent double cleanup\n    guard.resource = None;\n    resource.commit().await?;\n    Ok(())\n}\n```\n\n## Tracing Spans Across Async Boundaries\n\nTracing spans do not automatically propagate across `tokio::spawn` boundaries. 
You must explicitly carry them:\n\n```rust\nuse tracing::{info_span, Instrument};\n\nasync fn handle_request(req: Request) -> Response {\n    let span = info_span!(\n        \"request\",\n        method = %req.method(),\n        path = %req.uri().path(),\n        request_id = %uuid::Uuid::new_v4(),\n    );\n\n    async move {\n        \u002F\u002F This span is active in the current task\n        tracing::info!(\"Processing request\");\n\n        \u002F\u002F Spawn a subtask — must explicitly attach the span\n        let current_span = tracing::Span::current();\n        let handle = tokio::spawn(\n            async move {\n                \u002F\u002F This subtask inherits the parent span\n                tracing::info!(\"Subtask running\");\n                do_background_work().await\n            }\n            .instrument(info_span!(parent: &current_span, \"subtask\"))\n        );\n\n        let result = handle.await.expect(\"subtask panicked\");\n        tracing::info!(\"Request complete\");\n        result\n    }\n    .instrument(span)\n    .await\n}\n```\n\n### Structured Concurrency\n\nCombine context propagation with structured concurrency to ensure all spawned work is tracked and cancellable:\n\n```rust\nstruct TaskGroup {\n    handles: Vec\u003Ctokio::task::JoinHandle\u003Canyhow::Result\u003C()>>>,\n    cancel: CancellationToken,\n}\n\nimpl TaskGroup {\n    fn new(cancel: CancellationToken) -> Self {\n        Self {\n            handles: Vec::new(),\n            cancel,\n        }\n    }\n\n    fn spawn\u003CF>(&mut self, name: &str, future: F)\n    where\n        F: std::future::Future\u003COutput = anyhow::Result\u003C()>> + Send + 'static,\n    {\n        let cancel = self.cancel.child_token();\n        let span = info_span!(\"task\", name = name);\n\n        let handle = tokio::spawn(\n            async move {\n                tokio::select! 
{\n                    result = future => result,\n                    _ = cancel.cancelled() => {\n                        tracing::info!(\"Task cancelled\");\n                        Ok(())\n                    }\n                }\n            }\n            .instrument(span)\n        );\n\n        self.handles.push(handle);\n    }\n\n    async fn join_all(self) -> anyhow::Result\u003C()> {\n        let results = futures::future::join_all(self.handles).await;\n        for result in results {\n            result??;\n        }\n        Ok(())\n    }\n}\n```\n\nUsage:\n\n```rust\nlet cancel = CancellationToken::new();\nlet mut group = TaskGroup::new(cancel.clone());\n\ngroup.spawn(\"block_processor\", process_blocks(db.clone()));\ngroup.spawn(\"price_oracle\", update_prices(cache.clone()));\ngroup.spawn(\"metrics\", publish_metrics(registry.clone()));\n\n\u002F\u002F Cancel all tasks on shutdown\ntokio::signal::ctrl_c().await?;\ncancel.cancel();\ngroup.join_all().await?;\n```\n\n## Putting It All Together\n\nHere is a complete example combining Context, CancellationToken, and tracing spans in an Axum handler:\n\n```rust\nasync fn api_handler(\n    State(state): State\u003CAppState>,\n    req: Request,\n) -> Result\u003CJson\u003CApiResponse>, AppError> {\n    \u002F\u002F Create context with 5-second deadline\n    let ctx = Context::background()\n        .with_timeout(Duration::from_secs(5));\n\n    let span = info_span!(\n        \"api\",\n        request_id = %ctx.request_id(),\n        remaining_ms = ctx.remaining()\n            .map(|d| d.as_millis() as u64)\n            .unwrap_or(0),\n    );\n\n    async move {\n        let user = fetch_user(&ctx, &state.db, user_id).await?;\n        let orders = fetch_orders(&ctx, &state.db, user_id).await?;\n\n        \u002F\u002F Check remaining time before expensive operation\n        if ctx.remaining().map(|d| d \u003C Duration::from_secs(1)).unwrap_or(false) {\n            tracing::warn!(\"Less than 1s remaining, 
skipping enrichment\");\n            return Ok(Json(ApiResponse::partial(user, orders)));\n        }\n\n        let enriched = enrich_orders(&ctx, &state.cache, orders).await?;\n        Ok(Json(ApiResponse::full(user, enriched)))\n    }\n    .instrument(span)\n    .await\n}\n```\n\n## Conclusion\n\nContext propagation in async Rust requires explicit effort but delivers enormous benefits: deadline-aware operations that fail fast instead of hanging, hierarchical cancellation that cleanly shuts down complex systems, and tracing spans that survive async boundaries. Build a Context type, use CancellationToken for lifecycle management, instrument with tracing spans, and wrap spawned work in TaskGroups for structured concurrency. These patterns transform scattered async tasks into a coherent, observable, and controllable system.","\u003Ch2 id=\"the-missing-context\">The Missing Context\u003C\u002Fh2>\n\u003Cp>Go has \u003Ccode>context.Context\u003C\u002Fcode> — a request-scoped value that carries deadlines, cancellation signals, and key-value pairs across API boundaries and goroutines. Every well-written Go function takes a \u003Ccode>ctx context.Context\u003C\u002Fcode> as its first parameter.\u003C\u002Fp>\n\u003Cp>Rust has no built-in equivalent. Tokio provides cancellation via \u003Ccode>CancellationToken\u003C\u002Fcode> and timeouts via \u003Ccode>tokio::time::timeout\u003C\u002Fcode>, but there is no unified context type that propagates deadlines, cancellation, and metadata through an async call chain. 
Let us build one.\u003C\u002Fp>\n\u003Ch2 id=\"building-a-context-type\">Building a Context Type\u003C\u002Fh2>\n\u003Cp>Our context needs three capabilities: deadlines, cancellation, and key-value metadata:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">use std::sync::Arc;\nuse std::time::{Duration, Instant};\nuse tokio_util::sync::CancellationToken;\n\n#[derive(Clone)]\npub struct Context {\n    deadline: Option&lt;Instant&gt;,\n    cancel_token: CancellationToken,\n    metadata: Arc&lt;ContextMetadata&gt;,\n}\n\nstruct ContextMetadata {\n    request_id: String,\n    trace_id: Option&lt;String&gt;,\n    parent: Option&lt;Arc&lt;ContextMetadata&gt;&gt;,\n    values: std::collections::HashMap&lt;String, String&gt;,\n}\n\nimpl Context {\n    \u002F\u002F\u002F Create a background context with no deadline or cancellation.\n    pub fn background() -&gt; Self {\n        Self {\n            deadline: None,\n            cancel_token: CancellationToken::new(),\n            metadata: Arc::new(ContextMetadata {\n                request_id: uuid::Uuid::new_v4().to_string(),\n                trace_id: None,\n                parent: None,\n                values: std::collections::HashMap::new(),\n            }),\n        }\n    }\n\n    \u002F\u002F\u002F Create a child context with a deadline.\n    pub fn with_deadline(&amp;self, deadline: Instant) -&gt; Self {\n        let effective_deadline = match self.deadline {\n            Some(existing) =&gt; existing.min(deadline),\n            None =&gt; deadline,\n        };\n        Self {\n            deadline: Some(effective_deadline),\n            cancel_token: self.cancel_token.child_token(),\n            metadata: self.metadata.clone(),\n        }\n    }\n\n    \u002F\u002F\u002F Create a child context with a timeout relative to now.\n    pub fn with_timeout(&amp;self, duration: Duration) -&gt; Self {\n        self.with_deadline(Instant::now() + duration)\n    }\n\n    \u002F\u002F\u002F 
Cancel this context and all children.\n    pub fn cancel(&amp;self) {\n        self.cancel_token.cancel();\n    }\n\n    \u002F\u002F\u002F Check if the context is still valid.\n    pub fn is_done(&amp;self) -&gt; bool {\n        if self.cancel_token.is_cancelled() {\n            return true;\n        }\n        if let Some(deadline) = self.deadline {\n            return Instant::now() &gt;= deadline;\n        }\n        false\n    }\n\n    \u002F\u002F\u002F Time remaining until deadline, or None if no deadline.\n    pub fn remaining(&amp;self) -&gt; Option&lt;Duration&gt; {\n        self.deadline.map(|d| d.saturating_duration_since(Instant::now()))\n    }\n\n    \u002F\u002F\u002F Get the request ID for tracing.\n    pub fn request_id(&amp;self) -&gt; &amp;str {\n        &amp;self.metadata.request_id\n    }\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"deadline-aware-operations\">Deadline-Aware Operations\u003C\u002Fh2>\n\u003Cp>Wrap async operations with context-aware timeouts:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">impl Context {\n    \u002F\u002F\u002F Run a future with this context's deadline.\n    \u002F\u002F\u002F Returns Err if the context expires before the future completes.\n    pub async fn run&lt;F, T&gt;(&amp;self, future: F) -&gt; Result&lt;T, ContextError&gt;\n    where\n        F: std::future::Future&lt;Output = T&gt;,\n    {\n        if self.is_done() {\n            return Err(ContextError::DeadlineExceeded);\n        }\n\n        let cancel = self.cancel_token.clone();\n\n        match self.deadline {\n            Some(deadline) =&gt; {\n                let timeout = deadline.saturating_duration_since(Instant::now());\n                tokio::select! 
{\n                    result = future =&gt; Ok(result),\n                    _ = tokio::time::sleep(timeout) =&gt; {\n                        Err(ContextError::DeadlineExceeded)\n                    }\n                    _ = cancel.cancelled() =&gt; {\n                        Err(ContextError::Cancelled)\n                    }\n                }\n            }\n            None =&gt; {\n                tokio::select! {\n                    result = future =&gt; Ok(result),\n                    _ = cancel.cancelled() =&gt; {\n                        Err(ContextError::Cancelled)\n                    }\n                }\n            }\n        }\n    }\n}\n\n#[derive(Debug, thiserror::Error)]\npub enum ContextError {\n    #[error(\"context deadline exceeded\")]\n    DeadlineExceeded,\n    #[error(\"context cancelled\")]\n    Cancelled,\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Usage in a service:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">async fn fetch_user(ctx: &amp;Context, db: &amp;PgPool, id: i64) -&gt; Result&lt;User&gt; {\n    ctx.run(async {\n        sqlx::query_as::&lt;_, User&gt;(\"SELECT * FROM users WHERE id = $1\")\n            .bind(id)\n            .fetch_one(db)\n            .await\n    }).await\n    .map_err(|e| match e {\n        ContextError::DeadlineExceeded =&gt; {\n            anyhow::anyhow!(\"Database query timed out\")\n        }\n        ContextError::Cancelled =&gt; {\n            anyhow::anyhow!(\"Request was cancelled\")\n        }\n    })?\n    .map_err(Into::into)\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"cancellation-with-cancellationtoken\">Cancellation with CancellationToken\u003C\u002Fh2>\n\u003Cp>Tokio’s \u003Ccode>CancellationToken\u003C\u002Fcode> supports hierarchical cancellation — cancelling a parent token automatically cancels all children:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">use tokio_util::sync::CancellationToken;\n\nasync fn handle_request(parent_token: 
CancellationToken) {\n    let child_token = parent_token.child_token();\n\n    \u002F\u002F Spawn a subtask with the child token\n    let handle = tokio::spawn(async move {\n        tokio::select! {\n            result = do_expensive_work() =&gt; {\n                tracing::info!(\"Work completed: {:?}\", result);\n            }\n            _ = child_token.cancelled() =&gt; {\n                tracing::info!(\"Work cancelled\");\n            }\n        }\n    });\n\n    \u002F\u002F If parent is cancelled, child is automatically cancelled\n    tokio::time::sleep(Duration::from_secs(5)).await;\n    parent_token.cancel(); \u002F\u002F Cancels child_token too\n\n    handle.await.unwrap();\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>Graceful Shutdown with Cancellation\u003C\u002Fh3>\n\u003Cp>Use a root cancellation token for coordinated shutdown across all services:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">#[tokio::main]\nasync fn main() -&gt; anyhow::Result&lt;()&gt; {\n    let shutdown = CancellationToken::new();\n\n    \u002F\u002F Spawn services with child tokens\n    let http_server = tokio::spawn(\n        run_http_server(shutdown.child_token())\n    );\n    let block_watcher = tokio::spawn(\n        run_block_watcher(shutdown.child_token())\n    );\n    let metrics_server = tokio::spawn(\n        run_metrics_server(shutdown.child_token())\n    );\n\n    \u002F\u002F Wait for shutdown signal\n    tokio::signal::ctrl_c().await?;\n    tracing::info!(\"Shutdown signal received\");\n\n    \u002F\u002F Cancel all services\n    shutdown.cancel();\n\n    \u002F\u002F Wait for graceful shutdown with a timeout\n    let timeout = Duration::from_secs(30);\n    tokio::time::timeout(timeout, async {\n        let _ = tokio::join!(http_server, block_watcher, metrics_server);\n    }).await.ok();\n\n    tracing::info!(\"Shutdown complete\");\n    Ok(())\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 
id=\"tokio-select-for-cancellation-patterns\">tokio::select! for Cancellation Patterns\u003C\u002Fh2>\n\u003Cp>\u003Ccode>tokio::select!\u003C\u002Fcode> is the primary mechanism for responding to cancellation:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">async fn process_with_cancellation(\n    ctx: &amp;Context,\n    items: Vec&lt;Item&gt;,\n) -&gt; Result&lt;Vec&lt;ProcessedItem&gt;&gt; {\n    let mut results = Vec::with_capacity(items.len());\n\n    for item in items {\n        \u002F\u002F Check context before each item\n        if ctx.is_done() {\n            tracing::warn!(\n                processed = results.len(),\n                remaining = items.len() - results.len(),\n                \"Context expired, returning partial results\"\n            );\n            break;\n        }\n\n        let result = ctx.run(process_item(&amp;item)).await?;\n        results.push(result);\n    }\n\n    Ok(results)\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>The Drop Guard Pattern\u003C\u002Fh3>\n\u003Cp>Ensure cleanup happens even when a future is cancelled:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">struct CleanupGuard {\n    resource: Option&lt;TempResource&gt;,\n}\n\nimpl Drop for CleanupGuard {\n    fn drop(&amp;mut self) {\n        if let Some(resource) = self.resource.take() {\n            \u002F\u002F Synchronous cleanup\n            resource.release();\n            tracing::debug!(\"Resource cleaned up on drop\");\n        }\n    }\n}\n\nasync fn work_with_cleanup() -&gt; Result&lt;()&gt; {\n    let resource = acquire_resource().await?;\n    let _guard = CleanupGuard {\n        resource: Some(resource.clone()),\n    };\n\n    \u002F\u002F Even if this future is cancelled (dropped),\n    \u002F\u002F the guard's Drop runs and cleans up\n    do_work(&amp;resource).await?;\n\n    \u002F\u002F Successful completion — prevent double cleanup\n    _guard.resource = None;\n    resource.commit().await?;\n    
Ok(())\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"tracing-spans-across-async-boundaries\">Tracing Spans Across Async Boundaries\u003C\u002Fh2>\n\u003Cp>Tracing spans do not automatically propagate across \u003Ccode>tokio::spawn\u003C\u002Fcode> boundaries. You must explicitly carry them:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">use tracing::{info_span, Instrument};\n\nasync fn handle_request(req: Request) -&gt; Response {\n    let span = info_span!(\n        \"request\",\n        method = %req.method(),\n        path = %req.uri().path(),\n        request_id = %uuid::Uuid::new_v4(),\n    );\n\n    async move {\n        \u002F\u002F This span is active in the current task\n        tracing::info!(\"Processing request\");\n\n        \u002F\u002F Spawn a subtask — must explicitly attach the span\n        let current_span = tracing::Span::current();\n        let handle = tokio::spawn(\n            async move {\n                \u002F\u002F This subtask inherits the parent span\n                tracing::info!(\"Subtask running\");\n                do_background_work().await\n            }\n            .instrument(info_span!(parent: &amp;current_span, \"subtask\"))\n        );\n\n        let result = handle.await.expect(\"subtask panicked\");\n        tracing::info!(\"Request complete\");\n        result\n    }\n    .instrument(span)\n    .await\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>Structured Concurrency\u003C\u002Fh3>\n\u003Cp>Combine context propagation with structured concurrency to ensure all spawned work is tracked and cancellable:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">struct TaskGroup {\n    handles: Vec&lt;tokio::task::JoinHandle&lt;anyhow::Result&lt;()&gt;&gt;&gt;,\n    cancel: CancellationToken,\n}\n\nimpl TaskGroup {\n    fn new(cancel: CancellationToken) -&gt; Self {\n        Self {\n            handles: Vec::new(),\n            cancel,\n        }\n    }\n\n    fn spawn&lt;F&gt;(&amp;mut self, name: &amp;str, future: F)\n    
where\n        F: std::future::Future&lt;Output = anyhow::Result&lt;()&gt;&gt; + Send + 'static,\n    {\n        let cancel = self.cancel.child_token();\n        let span = info_span!(\"task\", name = name);\n\n        let handle = tokio::spawn(\n            async move {\n                tokio::select! {\n                    result = future =&gt; result,\n                    _ = cancel.cancelled() =&gt; {\n                        tracing::info!(\"Task cancelled\");\n                        Ok(())\n                    }\n                }\n            }\n            .instrument(span)\n        );\n\n        self.handles.push(handle);\n    }\n\n    async fn join_all(self) -&gt; anyhow::Result&lt;()&gt; {\n        let results = futures::future::join_all(self.handles).await;\n        for result in results {\n            result??;\n        }\n        Ok(())\n    }\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Usage:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">let cancel = CancellationToken::new();\nlet mut group = TaskGroup::new(cancel.clone());\n\ngroup.spawn(\"block_processor\", process_blocks(db.clone()));\ngroup.spawn(\"price_oracle\", update_prices(cache.clone()));\ngroup.spawn(\"metrics\", publish_metrics(registry.clone()));\n\n\u002F\u002F Cancel all tasks on shutdown\ntokio::signal::ctrl_c().await?;\ncancel.cancel();\ngroup.join_all().await?;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"putting-it-all-together\">Putting It All Together\u003C\u002Fh2>\n\u003Cp>Here is a complete example combining Context, CancellationToken, and tracing spans in an Axum handler:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">async fn api_handler(\n    State(state): State&lt;AppState&gt;,\n    req: Request,\n) -&gt; Result&lt;Json&lt;ApiResponse&gt;, AppError&gt; {\n    \u002F\u002F Create context with 5-second deadline\n    let ctx = Context::background()\n        .with_timeout(Duration::from_secs(5));\n\n    let span = info_span!(\n        
\"api\",\n        request_id = %ctx.request_id(),\n        remaining_ms = ctx.remaining()\n            .map(|d| d.as_millis() as u64)\n            .unwrap_or(0),\n    );\n\n    async move {\n        let user = fetch_user(&amp;ctx, &amp;state.db, user_id).await?;\n        let orders = fetch_orders(&amp;ctx, &amp;state.db, user_id).await?;\n\n        \u002F\u002F Check remaining time before expensive operation\n        if ctx.remaining().map(|d| d &lt; Duration::from_secs(1)).unwrap_or(false) {\n            tracing::warn!(\"Less than 1s remaining, skipping enrichment\");\n            return Ok(Json(ApiResponse::partial(user, orders)));\n        }\n\n        let enriched = enrich_orders(&amp;ctx, &amp;state.cache, orders).await?;\n        Ok(Json(ApiResponse::full(user, enriched)))\n    }\n    .instrument(span)\n    .await\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"conclusion\">Conclusion\u003C\u002Fh2>\n\u003Cp>Context propagation in async Rust requires explicit effort but delivers enormous benefits: deadline-aware operations that fail fast instead of hanging, hierarchical cancellation that cleanly shuts down complex systems, and tracing spans that survive async boundaries. Build a Context type, use CancellationToken for lifecycle management, instrument with tracing spans, and wrap spawned work in TaskGroups for structured concurrency. These patterns transform scattered async tasks into a coherent, observable, and controllable system.\u003C\u002Fp>\n","en","b0000000-0000-0000-0000-000000000001",true,"2026-03-28T10:44:23.163638Z","Context Propagation in Async Rust — Deadlines, Cancellation, and Tracing","Implement Go-style context propagation in async Rust with deadlines, CancellationToken, tokio::select! 
cancellation, and tracing span propagation.","async rust context propagation",null,"index, follow",[22,27],{"id":23,"name":24,"slug":25,"created_at":26},"c0000000-0000-0000-0000-000000000022","Performance","performance","2026-03-28T10:44:21.513630Z",{"id":28,"name":29,"slug":30,"created_at":26},"c0000000-0000-0000-0000-000000000001","Rust","rust","Engineering",[33,39,45],{"id":34,"title":35,"slug":36,"excerpt":37,"locale":12,"category_name":31,"published_at":38},"d0200000-0000-0000-0000-000000000003","Why Bali Is Becoming Southeast Asia's Impact-Tech Hub in 2026","why-bali-becoming-southeast-asia-impact-tech-hub-2026","Bali ranks #16 among Southeast Asian startup ecosystems. With a growing concentration of Web3 builders, AI sustainability startups, and eco-travel tech companies, the island is carving a niche as the region's impact-tech capital.","2026-03-28T10:44:37.748283Z",{"id":40,"title":41,"slug":42,"excerpt":43,"locale":12,"category_name":31,"published_at":44},"d0200000-0000-0000-0000-000000000002","ASEAN Data Protection Patchwork: A Developer's Compliance Checklist","asean-data-protection-patchwork-developer-compliance-checklist","Seven ASEAN countries now have comprehensive data protection laws, each with different consent models, localization requirements, and penalty structures. Here is a practical compliance checklist for developers building multi-country applications.","2026-03-28T10:44:37.374741Z",{"id":46,"title":47,"slug":48,"excerpt":49,"locale":12,"category_name":31,"published_at":50},"d0200000-0000-0000-0000-000000000001","Indonesia's $29 Billion Digital Transformation: Opportunities for Software Companies","indonesia-29-billion-digital-transformation-opportunities-software-companies","Indonesia's IT services market is projected to reach $29.03 billion in 2026, up from $24.37 billion in 2025. 
Cloud infrastructure, AI, e-commerce, and data centers are driving the fastest growth in Southeast Asia.","2026-03-28T10:44:37.349311Z",{"id":13,"name":52,"slug":53,"bio":54,"photo_url":19,"linkedin":19,"role":55,"created_at":56,"updated_at":56},"Open Soft Team","open-soft-team","The engineering team at Open Soft, building premium software solutions from Bali, Indonesia.","Engineering Team","2026-03-28T08:31:22.226811Z"]