[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-performance-debugging-pembacaan-database-membunuh-latensi":3},{"article":4,"author":57},{"id":5,"category_id":6,"title":7,"slug":8,"excerpt":9,"content_md":10,"content_html":11,"locale":12,"author_id":13,"published":14,"published_at":15,"meta_title":16,"meta_description":17,"focus_keyword":18,"og_image":19,"canonical_url":19,"robots_meta":20,"created_at":15,"updated_at":15,"tags":21,"category_name":35,"related_articles":36},"d2000000-0000-0000-0000-000000000123","a0000000-0000-0000-0000-000000000026","Performance Debugging — Ketika Pembacaan Database Membunuh Latensi Anda","performance-debugging-pembacaan-database-membunuh-latensi","Mendiagnosis dan memperbaiki masalah latensi database: N+1 query, missing index, connection pool saturation, dan teknik monitoring dengan tracing.","## Gejala: API Lambat, CPU Rendah\n\nAnda memiliki endpoint API yang seharusnya merespons dalam 50ms tetapi memakan 2 detik. CPU server rendah. Memory normal. 
Masalahnya hampir pasti di database.\n\nArtikel ini membahas diagnosis dan perbaikan masalah performa database paling umum di aplikasi Rust\u002FAxum dengan PostgreSQL.\n\n## Masalah 1: N+1 Query\n\nPattern N+1 terjadi ketika Anda memuat daftar, lalu memuat relasi untuk setiap item secara individual:\n\n```rust\n\u002F\u002F BURUK: N+1 query\nasync fn list_orders(pool: &PgPool) -> Vec\u003COrderWithItems> {\n    let orders = sqlx::query_as::\u003C_, Order>(\"SELECT * FROM orders LIMIT 100\")\n        .fetch_all(pool).await.unwrap();\n    \n    let mut result = Vec::new();\n    for order in orders {\n        \u002F\u002F +1 query per order!\n        let items = sqlx::query_as::\u003C_, OrderItem>(\n            \"SELECT * FROM order_items WHERE order_id = $1\"\n        )\n        .bind(order.id)\n        .fetch_all(pool).await.unwrap();\n        \n        result.push(OrderWithItems { order, items });\n    }\n    result \u002F\u002F 101 query total!\n}\n```\n\n```rust\n\u002F\u002F BAIK: 2 query total\nuse std::collections::HashMap;\nuse itertools::Itertools; \u002F\u002F menyediakan into_group_map_by\n\nasync fn list_orders(pool: &PgPool) -> Vec\u003COrderWithItems> {\n    let orders = sqlx::query_as::\u003C_, Order>(\"SELECT * FROM orders LIMIT 100\")\n        .fetch_all(pool).await.unwrap();\n    \n    let order_ids: Vec\u003CUuid> = orders.iter().map(|o| o.id).collect();\n    \n    let items = sqlx::query_as::\u003C_, OrderItem>(\n        \"SELECT * FROM order_items WHERE order_id = ANY($1)\"\n    )\n    .bind(&order_ids)\n    .fetch_all(pool).await.unwrap();\n    \n    \u002F\u002F Group items by order_id\n    let items_map: HashMap\u003CUuid, Vec\u003COrderItem>> = items\n        .into_iter()\n        .into_group_map_by(|i| i.order_id);\n    \n    orders.into_iter().map(|order| {\n        let items = items_map.get(&order.id)\n            .cloned().unwrap_or_default();\n        OrderWithItems { order, items }\n    }).collect()\n}\n```\n\n## Masalah 2: Missing Index\n\n```sql\n-- Query lambat:\nSELECT * FROM articles WHERE locale = 'id' AND published = true ORDER BY 
published_at DESC;\n\n-- EXPLAIN ANALYZE menunjukkan Seq Scan:\nSeq Scan on articles  (cost=0.00..1234.56 rows=100 width=567)\n  Filter: ((locale = 'id') AND published)\n  Rows Removed by Filter: 9900\n```\n\nPerbaikan:\n```sql\nCREATE INDEX idx_articles_locale_published \n    ON articles (locale, published, published_at DESC)\n    WHERE published = true;\n```\n\nSetelah index:\n```\nIndex Scan using idx_articles_locale_published on articles\n  (cost=0.28..12.34 rows=100 width=567)\n```\n\n## Masalah 3: Connection Pool Saturation\n\nKetika semua koneksi di pool sedang digunakan, query baru harus menunggu:\n\n```rust\n\u002F\u002F Konfigurasi pool\nlet pool = PgPoolOptions::new()\n    .max_connections(10)  \u002F\u002F Terlalu rendah untuk load tinggi!\n    .acquire_timeout(Duration::from_secs(3))\n    .connect(&database_url)\n    .await?;\n```\n\nDiagnosis dengan tracing:\n```rust\n\u002F\u002F Tambahkan metric connection pool\nasync fn health_check(State(pool): State\u003CPgPool>) -> Json\u003CHealthStatus> {\n    let pool_status = pool.size(); \u002F\u002F Total koneksi (aktif + idle)\n    let idle = pool.num_idle();    \u002F\u002F Koneksi menganggur\n    \n    Json(HealthStatus {\n        pool_size: pool_status,\n        pool_idle: idle,\n        pool_max: 10,\n    })\n}\n```\n\nPerbaikan:\n- Tingkatkan `max_connections` (tergantung CPU PostgreSQL)\n- Gunakan PgBouncer untuk connection pooling di level terpisah\n- Optimalkan query yang berjalan lama\n\n## Masalah 4: Full Table Scan pada JOIN\n\n```sql\n-- Lambat: join tanpa index pada FK\nSELECT a.*, c.name as category_name\nFROM articles a\nJOIN categories c ON c.id = a.category_id\nWHERE a.locale = 'id'\nORDER BY a.published_at DESC\nLIMIT 20;\n```\n\nPastikan FK memiliki index:\n```sql\nCREATE INDEX idx_articles_category_id ON articles(category_id);\n```\n\n## Monitoring dengan tracing\n\n```rust\nuse std::time::Duration;\nuse tracing::instrument;\n\n#[instrument(skip(pool))]\nasync fn get_articles(\n    pool: &PgPool,\n    locale: &str,\n    page: 
i64,\n) -> Result\u003CVec\u003CArticle>, DbError> {\n    let start = std::time::Instant::now();\n    \n    let articles = sqlx::query_as::\u003C_, Article>(\n        \"SELECT * FROM articles WHERE locale = $1 AND published = true \\\n         ORDER BY published_at DESC LIMIT 20 OFFSET $2\"\n    )\n    .bind(locale)\n    .bind((page - 1) * 20)\n    .fetch_all(pool)\n    .await?;\n    \n    let elapsed = start.elapsed();\n    if elapsed > Duration::from_millis(100) {\n        tracing::warn!(\n            query = \"get_articles\",\n            locale = locale,\n            duration_ms = elapsed.as_millis(),\n            \"Query lambat terdeteksi\"\n        );\n    }\n    \n    Ok(articles)\n}\n```\n\n## Checklist Performance Database\n\n1. **Periksa EXPLAIN ANALYZE** untuk semua query yang lambat\n2. **Buat index** untuk kolom di WHERE, JOIN, dan ORDER BY\n3. **Hindari N+1** — Gunakan batch query atau JOIN\n4. **Monitor connection pool** — Track utilisasi dan wait time\n5. **Cache hasil** — Untuk data yang jarang berubah\n6. **Pagination** — Selalu gunakan LIMIT\u002FOFFSET atau keyset pagination\n7. **Gunakan RETURNING** — Hindari query SELECT setelah INSERT\u002FUPDATE\n\n## Kesimpulan\n\nMasalah performa database adalah penyebab paling umum latensi API tinggi. N+1 query, missing index, dan connection pool saturation masing-masing bisa membuat endpoint berjalan 10-100x lebih lambat. Diagnosis yang sistematik dengan EXPLAIN ANALYZE, monitoring pool, dan slow query logging membuat perbaikan menjadi mudah.","\u003Ch2 id=\"gejala-api-lambat-cpu-rendah\">Gejala: API Lambat, CPU Rendah\u003C\u002Fh2>\n\u003Cp>Anda memiliki endpoint API yang seharusnya merespons dalam 50ms tetapi memakan 2 detik. CPU server rendah. Memory normal. 
Masalahnya hampir pasti di database.\u003C\u002Fp>\n\u003Cp>Artikel ini membahas diagnosis dan perbaikan masalah performa database paling umum di aplikasi Rust\u002FAxum dengan PostgreSQL.\u003C\u002Fp>\n\u003Ch2 id=\"masalah-1-n-1-query\">Masalah 1: N+1 Query\u003C\u002Fh2>\n\u003Cp>Pattern N+1 terjadi ketika Anda memuat daftar, lalu memuat relasi untuk setiap item secara individual:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">\u002F\u002F BURUK: N+1 query\nasync fn list_orders(pool: &amp;PgPool) -&gt; Vec&lt;OrderWithItems&gt; {\n    let orders = sqlx::query_as::&lt;_, Order&gt;(\"SELECT * FROM orders LIMIT 100\")\n        .fetch_all(pool).await.unwrap();\n    \n    let mut result = Vec::new();\n    for order in orders {\n        \u002F\u002F +1 query per order!\n        let items = sqlx::query_as::&lt;_, OrderItem&gt;(\n            \"SELECT * FROM order_items WHERE order_id = $1\"\n        )\n        .bind(order.id)\n        .fetch_all(pool).await.unwrap();\n        \n        result.push(OrderWithItems { order, items });\n    }\n    result \u002F\u002F 101 query total!\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cpre>\u003Ccode class=\"language-rust\">\u002F\u002F BAIK: 2 query total\nuse std::collections::HashMap;\nuse itertools::Itertools; \u002F\u002F menyediakan into_group_map_by\n\nasync fn list_orders(pool: &amp;PgPool) -&gt; Vec&lt;OrderWithItems&gt; {\n    let orders = sqlx::query_as::&lt;_, Order&gt;(\"SELECT * FROM orders LIMIT 100\")\n        .fetch_all(pool).await.unwrap();\n    \n    let order_ids: Vec&lt;Uuid&gt; = orders.iter().map(|o| o.id).collect();\n    \n    let items = sqlx::query_as::&lt;_, OrderItem&gt;(\n        \"SELECT * FROM order_items WHERE order_id = ANY($1)\"\n    )\n    .bind(&amp;order_ids)\n    .fetch_all(pool).await.unwrap();\n    \n    \u002F\u002F Group items by order_id\n    let items_map: HashMap&lt;Uuid, Vec&lt;OrderItem&gt;&gt; = items\n        .into_iter()\n        .into_group_map_by(|i| i.order_id);\n    \n    orders.into_iter().map(|order| {\n        let items = items_map.get(&amp;order.id)\n            
.cloned().unwrap_or_default();\n        OrderWithItems { order, items }\n    }).collect()\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"masalah-2-missing-index\">Masalah 2: Missing Index\u003C\u002Fh2>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Query lambat:\nSELECT * FROM articles WHERE locale = 'id' AND published = true ORDER BY published_at DESC;\n\n-- EXPLAIN ANALYZE menunjukkan Seq Scan:\nSeq Scan on articles  (cost=0.00..1234.56 rows=100 width=567)\n  Filter: ((locale = 'id') AND published)\n  Rows Removed by Filter: 9900\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Perbaikan:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">CREATE INDEX idx_articles_locale_published \n    ON articles (locale, published, published_at DESC)\n    WHERE published = true;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Setelah index:\u003C\u002Fp>\n\u003Cpre>\u003Ccode>Index Scan using idx_articles_locale_published on articles\n  (cost=0.28..12.34 rows=100 width=567)\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"masalah-3-connection-pool-saturation\">Masalah 3: Connection Pool Saturation\u003C\u002Fh2>\n\u003Cp>Ketika semua koneksi di pool sedang digunakan, query baru harus menunggu:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">\u002F\u002F Konfigurasi pool\nlet pool = PgPoolOptions::new()\n    .max_connections(10)  \u002F\u002F Terlalu rendah untuk load tinggi!\n    .acquire_timeout(Duration::from_secs(3))\n    .connect(&amp;database_url)\n    .await?;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Diagnosis dengan tracing:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-rust\">\u002F\u002F Tambahkan metric connection pool\nasync fn health_check(State(pool): State&lt;PgPool&gt;) -&gt; Json&lt;HealthStatus&gt; {\n    let pool_status = pool.size(); \u002F\u002F Total koneksi (aktif + idle)\n    let idle = pool.num_idle();    \u002F\u002F Koneksi menganggur\n    \n    Json(HealthStatus {\n        pool_size: pool_status,\n        pool_idle: idle,\n        
pool_max: 10,\n    })\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Perbaikan:\u003C\u002Fp>\n\u003Cul>\n\u003Cli>Tingkatkan \u003Ccode>max_connections\u003C\u002Fcode> (tergantung CPU PostgreSQL)\u003C\u002Fli>\n\u003Cli>Gunakan PgBouncer untuk connection pooling di level terpisah\u003C\u002Fli>\n\u003Cli>Optimalkan query yang berjalan lama\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Ch2 id=\"masalah-4-full-table-scan-pada-join\">Masalah 4: Full Table Scan pada JOIN\u003C\u002Fh2>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Lambat: join tanpa index pada FK\nSELECT a.*, c.name as category_name\nFROM articles a\nJOIN categories c ON c.id = a.category_id\nWHERE a.locale = 'id'\nORDER BY a.published_at DESC\nLIMIT 20;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Pastikan FK memiliki index:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">CREATE INDEX idx_articles_category_id ON articles(category_id);\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"monitoring-dengan-tracing\">Monitoring dengan tracing\u003C\u002Fh2>\n\u003Cpre>\u003Ccode class=\"language-rust\">use std::time::Duration;\nuse tracing::instrument;\n\n#[instrument(skip(pool))]\nasync fn get_articles(\n    pool: &amp;PgPool,\n    locale: &amp;str,\n    page: i64,\n) -&gt; Result&lt;Vec&lt;Article&gt;, DbError&gt; {\n    let start = std::time::Instant::now();\n    \n    let articles = sqlx::query_as::&lt;_, Article&gt;(\n        \"SELECT * FROM articles WHERE locale = $1 AND published = true \\\n         ORDER BY published_at DESC LIMIT 20 OFFSET $2\"\n    )\n    .bind(locale)\n    .bind((page - 1) * 20)\n    .fetch_all(pool)\n    .await?;\n    \n    let elapsed = start.elapsed();\n    if elapsed &gt; Duration::from_millis(100) {\n        tracing::warn!(\n            query = \"get_articles\",\n            locale = locale,\n            duration_ms = elapsed.as_millis(),\n            \"Query lambat terdeteksi\"\n        );\n    }\n    \n    Ok(articles)\n}\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 
id=\"checklist-performance-database\">Checklist Performance Database\u003C\u002Fh2>\n\u003Col>\n\u003Cli>\u003Cstrong>Periksa EXPLAIN ANALYZE\u003C\u002Fstrong> untuk semua query yang lambat\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Buat index\u003C\u002Fstrong> untuk kolom di WHERE, JOIN, dan ORDER BY\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Hindari N+1\u003C\u002Fstrong> — Gunakan batch query atau JOIN\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Monitor connection pool\u003C\u002Fstrong> — Track utilisasi dan wait time\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Cache hasil\u003C\u002Fstrong> — Untuk data yang jarang berubah\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Pagination\u003C\u002Fstrong> — Selalu gunakan LIMIT\u002FOFFSET atau keyset pagination\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Gunakan RETURNING\u003C\u002Fstrong> — Hindari query SELECT setelah INSERT\u002FUPDATE\u003C\u002Fli>\n\u003C\u002Fol>\n\u003Ch2 id=\"kesimpulan\">Kesimpulan\u003C\u002Fh2>\n\u003Cp>Masalah performa database adalah penyebab paling umum latensi API tinggi. N+1 query, missing index, dan connection pool saturation masing-masing bisa membuat endpoint berjalan 10-100x lebih lambat. 
Diagnosis yang sistematik dengan EXPLAIN ANALYZE, monitoring pool, dan slow query logging membuat perbaikan menjadi mudah.\u003C\u002Fp>\n","id","b0000000-0000-0000-0000-000000000001",true,"2026-03-28T10:44:25.138897Z","Performance Debugging — Pembacaan Database Membunuh Latensi","Diagnosis performa database: N+1 query, missing index, connection pool saturation, dan monitoring dengan tracing di Rust.","performa database Rust",null,"index, follow",[22,27,31],{"id":23,"name":24,"slug":25,"created_at":26},"c0000000-0000-0000-0000-000000000022","Performance","performance","2026-03-28T10:44:21.513630Z",{"id":28,"name":29,"slug":30,"created_at":26},"c0000000-0000-0000-0000-000000000005","PostgreSQL","postgresql",{"id":32,"name":33,"slug":34,"created_at":26},"c0000000-0000-0000-0000-000000000001","Rust","rust","Rekayasa",[37,44,51],{"id":38,"title":39,"slug":40,"excerpt":41,"locale":12,"category_name":42,"published_at":43},"d0000000-0000-0000-0000-000000000642","WASI 0.3 dan Kematian Cold Start: Wasm Sisi Server di Produksi","wasi-0-3-kematian-cold-start-wasm-sisi-server-di-produksi","WASI 0.3 dirilis pada Februari 2026 dengan async I\u002FO native, tipe stream, dan dukungan socket penuh. 
WebAssembly sisi server kini menghadirkan cold start dalam hitungan mikrodetik, dan setiap penyedia cloud besar menawarkan Wasm serverless.","DevOps","2026-03-28T10:44:47.445780Z",{"id":45,"title":46,"slug":47,"excerpt":48,"locale":12,"category_name":49,"published_at":50},"d0000000-0000-0000-0000-000000000620","Stack Backend Modern 2026: Rust + PostgreSQL 18 + Wasm + eBPF","stack-backend-modern-2026-rust-postgresql-wasm-ebpf","Empat teknologi konvergen untuk mendefinisikan ulang infrastruktur backend di 2026: Rust menghilangkan overhead garbage collection dan mengurangi jumlah container hingga 3x, PostgreSQL 18 menggantikan database khusus, WASI 0.3 memberikan cold start mikrodetik untuk fungsi serverless, dan eBPF memungkinkan observabilitas tanpa instrumentasi dengan biaya yang jauh lebih rendah dari monitoring tradisional.","Engineering","2026-03-28T10:44:45.804120Z",{"id":52,"title":53,"slug":54,"excerpt":55,"locale":12,"category_name":49,"published_at":56},"d0000000-0000-0000-0000-000000000619","Neon vs Turso vs PlanetScale: Memilih Database Serverless di 2026","neon-vs-turso-vs-planetscale-perbandingan-database-serverless-2026","Perbandingan praktis dari tiga platform database serverless terkemuka di 2026. Neon mendominasi untuk beban kerja PostgreSQL dengan branching dan autoscaling, Turso unggul untuk deployment SQLite edge-native, dan PlanetScale tetap menjadi pilihan terbaik untuk scaling serverless yang kompatibel dengan MySQL.","2026-03-28T10:44:45.797681Z",{"id":13,"name":58,"slug":59,"bio":60,"photo_url":19,"linkedin":19,"role":61,"created_at":62,"updated_at":62},"Open Soft Team","open-soft-team","The engineering team at Open Soft, building premium software solutions from Bali, Indonesia.","Engineering Team","2026-03-28T08:31:22.226811Z"]