[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-deep-evm-25-postgresql-table-partitioning":3},{"article":4,"author":55},{"id":5,"category_id":6,"title":7,"slug":8,"excerpt":9,"content_md":10,"content_html":11,"locale":12,"author_id":13,"published":14,"published_at":15,"meta_title":16,"meta_description":17,"focus_keyword":18,"og_image":19,"canonical_url":19,"robots_meta":20,"created_at":15,"updated_at":15,"tags":21,"category_name":24,"related_articles":35},"d0000000-0000-0000-0000-000000000125","a0000000-0000-0000-0000-000000000005","Deep EVM #25: PostgreSQL Table Partitioning — When Your Table Hits 10M+ Rows","deep-evm-25-postgresql-table-partitioning","A practical guide to PostgreSQL table partitioning for large tables. Covers range, list, and hash partitioning with real examples, migration strategies, and query planning.","## When to Partition\n\nYou have a `transactions` table that started small and is now at 34 million rows. Queries that used to take 50ms now take 5 seconds. VACUUM runs for hours and blocks autovacuum on other tables. Index rebuilds take the table offline for minutes. Your database is not slow — your table is too big for a single heap file.\n\nTable partitioning splits a logical table into multiple physical tables (partitions). PostgreSQL's query planner automatically routes queries to the correct partitions, scanning only the data needed.\n\n### Rules of Thumb for Partitioning\n\n- **Partition when:** Table exceeds 10M rows, or queries consistently scan >20% of the table, or VACUUM cannot keep up with dead tuples\n- **Do NOT partition when:** Table is under 1M rows (overhead exceeds benefit), queries always hit an index (partition pruning adds nothing), write patterns are random (no natural partition key)\n\n## Partition Strategies\n\nPostgreSQL supports three native partitioning strategies:\n\n### Range Partitioning\n\nSplit by value ranges. 
Ideal for time-series data:\n\n```sql\nCREATE TABLE transactions (\n    id          BIGINT GENERATED ALWAYS AS IDENTITY,\n    block_number BIGINT NOT NULL,\n    tx_hash     BYTEA NOT NULL,\n    from_addr   BYTEA NOT NULL,\n    to_addr     BYTEA,\n    value_wei   NUMERIC(78, 0) NOT NULL,\n    gas_used    BIGINT NOT NULL,\n    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()\n) PARTITION BY RANGE (block_number);\n\n-- Create partitions for every 1M blocks\nCREATE TABLE transactions_0_1m\n    PARTITION OF transactions\n    FOR VALUES FROM (0) TO (1000000);\n\nCREATE TABLE transactions_1m_2m\n    PARTITION OF transactions\n    FOR VALUES FROM (1000000) TO (2000000);\n\nCREATE TABLE transactions_2m_3m\n    PARTITION OF transactions\n    FOR VALUES FROM (2000000) TO (3000000);\n\n-- Continue for each range...\n```\n\nQueries that filter on `block_number` automatically prune irrelevant partitions:\n\n```sql\n-- Only scans transactions_18m_19m partition\nSELECT * FROM transactions\nWHERE block_number BETWEEN 18000000 AND 18100000;\n\n-- EXPLAIN shows partition pruning:\n-- Append\n--   -> Index Scan on transactions_18m_19m\n--        Index Cond: (block_number >= 18000000 AND block_number \u003C= 18100000)\n```\n\n### List Partitioning\n\nSplit by discrete values. 
Ideal for categorical data:\n\n```sql\nCREATE TABLE events (\n    id          BIGINT GENERATED ALWAYS AS IDENTITY,\n    chain_id    INT NOT NULL,\n    contract    BYTEA NOT NULL,\n    event_name  TEXT NOT NULL,\n    data        JSONB NOT NULL,\n    block_number BIGINT NOT NULL,\n    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()\n) PARTITION BY LIST (chain_id);\n\nCREATE TABLE events_ethereum\n    PARTITION OF events FOR VALUES IN (1);\n\nCREATE TABLE events_polygon\n    PARTITION OF events FOR VALUES IN (137);\n\nCREATE TABLE events_arbitrum\n    PARTITION OF events FOR VALUES IN (42161);\n\nCREATE TABLE events_optimism\n    PARTITION OF events FOR VALUES IN (10);\n\nCREATE TABLE events_base\n    PARTITION OF events FOR VALUES IN (8453);\n```\n\n### Hash Partitioning\n\nSplit by hash of a column. Ensures even distribution when there is no natural range or list:\n\n```sql\nCREATE TABLE addresses (\n    address     BYTEA PRIMARY KEY,\n    first_seen  BIGINT NOT NULL,\n    tx_count    BIGINT NOT NULL DEFAULT 0,\n    balance_wei NUMERIC(78, 0) NOT NULL DEFAULT 0\n) PARTITION BY HASH (address);\n\n-- Create 16 partitions\nCREATE TABLE addresses_p0 PARTITION OF addresses\n    FOR VALUES WITH (MODULUS 16, REMAINDER 0);\nCREATE TABLE addresses_p1 PARTITION OF addresses\n    FOR VALUES WITH (MODULUS 16, REMAINDER 1);\n-- ... 
through p15\n```\n\n## Real Example: 34M Rows to 20 Partitions\n\nLet us walk through partitioning an existing `transactions` table with 34 million rows.\n\n### Step 1: Create the Partitioned Table\n\n```sql\n-- Create new partitioned table\nCREATE TABLE transactions_partitioned (\n    LIKE transactions INCLUDING ALL\n) PARTITION BY RANGE (block_number);\n\n-- Create 20 partitions of ~1.7M rows each\nDO $$\nDECLARE\n    start_block BIGINT;\nBEGIN\n    FOR i IN 0..19 LOOP\n        start_block := i * 1000000;\n        EXECUTE format(\n            'CREATE TABLE transactions_p%s PARTITION OF transactions_partitioned\n             FOR VALUES FROM (%s) TO (%s)',\n            i, start_block, start_block + 1000000\n        );\n    END LOOP;\nEND $$;\n\n-- Add a default partition for future data\nCREATE TABLE transactions_default\n    PARTITION OF transactions_partitioned DEFAULT;\n```\n\n### Step 2: Migrate Data\n\n```sql\n-- Copy data in batches to avoid locking\n-- Requires PostgreSQL 11+ (COMMIT inside DO) and must run in\n-- autocommit mode, not inside an explicit transaction block\nDO $$\nDECLARE\n    batch_size BIGINT := 100000;\n    max_id BIGINT;\n    current_id BIGINT := 0;\nBEGIN\n    SELECT MAX(id) INTO max_id FROM transactions;\n\n    WHILE current_id \u003C max_id LOOP\n        INSERT INTO transactions_partitioned\n        SELECT * FROM transactions\n        WHERE id > current_id AND id \u003C= current_id + batch_size;\n\n        current_id := current_id + batch_size;\n        RAISE NOTICE 'Migrated up to id %', current_id;\n        COMMIT;\n    END LOOP;\nEND $$;\n```\n\n### Step 3: Swap Tables\n\n```sql\n-- Atomic swap. Re-run the batch copy first to pick up rows\n-- inserted after the initial pass.\nBEGIN;\nALTER TABLE transactions RENAME TO transactions_old;\nALTER TABLE transactions_partitioned RENAME TO transactions;\nCOMMIT;\n\n-- Verify, then drop old table\n-- DROP TABLE transactions_old;\n```\n\n### Step 4: Create Indexes on Partitions\n\n```sql\n-- Creating an index on the parent table automatically creates\n-- identical indexes on all partitions. Note that CREATE INDEX\n-- CONCURRENTLY is not supported on partitioned tables; for a\n-- lock-free build, index each partition CONCURRENTLY and attach\n-- it with ALTER INDEX ... ATTACH PARTITION.\nCREATE INDEX idx_transactions_block\n    ON transactions (block_number);\n\nCREATE INDEX idx_transactions_from\n    ON transactions (from_addr, block_number);\n\nCREATE INDEX idx_transactions_to\n    ON transactions (to_addr, block_number);\n```\n\n## Partition Pruning in EXPLAIN\n\nVerify that the query planner prunes partitions:\n\n```sql\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT * FROM transactions\nWHERE block_number BETWEEN 18000000 AND 18500000;\n```\n\nGood output (only relevant partitions scanned):\n\n```\nAppend (cost=0.43..1234.56 rows=50000 width=120)\n  -> Index Scan using transactions_p18_block_number_idx\n     on transactions_p18 (actual time=0.1..12.3 rows=50000)\n     Index Cond: block_number >= 18000000 AND block_number \u003C= 18500000\n     Buffers: shared hit=423\nPlanning Time: 0.5 ms\nExecution Time: 15.2 ms\n```\n\nBad output (all partitions scanned):\n\n```\nAppend (cost=0.00..999999.99 rows=34000000 width=120)\n  -> Seq Scan on transactions_p0 ...\n  -> Seq Scan on transactions_p1 ...\n  -> Seq Scan on transactions_p2 ...\n  ... (all 20 partitions)\n```\n\nIf you see all partitions being scanned, the WHERE clause does not match the partition key. 
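A common way pruning gets defeated is wrapping the partition key in an expression; a minimal illustration (assuming the partitioned `transactions` table above):\n\n```sql\n-- Defeats pruning: the planner cannot match an expression\n-- on the partition key against partition bounds\nSELECT count(*) FROM transactions WHERE block_number + 0 = 18000000;\n\n-- Allows pruning: bare column compared to a constant\nSELECT count(*) FROM transactions WHERE block_number = 18000000;\n```\n\n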
Fix the query or add the partition key to the filter.\n\n## Automating Partition Creation\n\nFor time-series or block-number-based partitions, automate creation with a cron job or PostgreSQL function:\n\n```sql\nCREATE OR REPLACE FUNCTION create_next_partition()\nRETURNS void AS $$\nDECLARE\n    max_block BIGINT;\n    next_start BIGINT;\n    next_end BIGINT;\n    partition_name TEXT;\nBEGIN\n    SELECT MAX(block_number) INTO max_block FROM transactions;\n    next_start := (max_block \u002F 1000000 + 1) * 1000000;\n    next_end := next_start + 1000000;\n    partition_name := format('transactions_p%s', next_start \u002F 1000000);\n\n    EXECUTE format(\n        'CREATE TABLE IF NOT EXISTS %I PARTITION OF transactions\n         FOR VALUES FROM (%s) TO (%s)',\n        partition_name, next_start, next_end\n    );\n\n    RAISE NOTICE 'Created partition % for blocks % to %',\n        partition_name, next_start, next_end;\nEND;\n$$ LANGUAGE plpgsql;\n```\n\n## Performance Comparison\n\n| Query | Unpartitioned (34M) | Partitioned (20 x 1.7M) | Speedup |\n|-------|--------------------|--------------------------|---------|\n| Point lookup by block | 230ms | 12ms | 19x |\n| Range scan (500K blocks) | 4.8s | 180ms | 27x |\n| COUNT(*) full table | 45s | 45s | 1x |\n| VACUUM | 2.1 hours | 6.3 min\u002Fpartition | Parallel |\n| Index rebuild | 12 min (locks table) | 36s\u002Fpartition | No lock |\n\nPartitioning dramatically improves queries that filter on the partition key. Full-table scans see no improvement (all partitions are scanned). The biggest operational win is VACUUM and index maintenance, which can now run on individual partitions without affecting the others.\n\n## Common Pitfalls\n\n### 1. Missing Partition Key in WHERE Clause\n\nIf your query does not filter on the partition key, PostgreSQL scans all partitions. Always include the partition key in WHERE clauses.\n\n### 2. Too Many Partitions\n\nEach partition has overhead (file descriptors, planner time). 
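A quick way to see how many partitions a table currently has is the `pg_inherits` catalog (a read-only check, assuming the parent table is named `transactions`):\n\n```sql\n-- Count direct partitions of the parent table\nSELECT count(*)\nFROM pg_inherits\nWHERE inhparent = 'transactions'::regclass;\n```\n\n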
More than 100 partitions can slow down query planning. Aim for partitions of 1-10M rows each.\n\n### 3. Forgetting the Default Partition\n\nWithout a default partition, inserts with partition key values outside the defined ranges fail with an error. Create a default partition as a safety net, but keep it empty: any rows it holds must be moved out before you can attach a regular partition covering their range.\n\n### 4. Cross-Partition Foreign Keys\n\nBefore PostgreSQL 12, partitioned tables cannot be referenced by foreign keys. If you are on an older version and other tables reference your partitioned table, you need application-level referential integrity.\n\n## Conclusion\n\nPostgreSQL table partitioning is a powerful tool for managing large tables. Range partitioning is ideal for time-series and block-based data, list partitioning for categorical splits, and hash partitioning for even distribution. Start partitioning when your table exceeds 10M rows and queries consistently scan large portions. The key to success: choose a partition key that matches your most common query patterns, keep partition counts reasonable (10-50), and always verify partition pruning with EXPLAIN ANALYZE.","\u003Ch2 id=\"when-to-partition\">When to Partition\u003C\u002Fh2>\n\u003Cp>You have a \u003Ccode>transactions\u003C\u002Fcode> table that started small and is now at 34 million rows. Queries that used to take 50ms now take 5 seconds. VACUUM runs for hours and blocks autovacuum on other tables. Index rebuilds take the table offline for minutes. Your database is not slow — your table is too big for a single heap file.\u003C\u002Fp>\n\u003Cp>Table partitioning splits a logical table into multiple physical tables (partitions). 
PostgreSQL’s query planner automatically routes queries to the correct partitions, scanning only the data needed.\u003C\u002Fp>\n\u003Ch3>Rules of Thumb for Partitioning\u003C\u002Fh3>\n\u003Cul>\n\u003Cli>\u003Cstrong>Partition when:\u003C\u002Fstrong> Table exceeds 10M rows, or queries consistently scan &gt;20% of the table, or VACUUM cannot keep up with dead tuples\u003C\u002Fli>\n\u003Cli>\u003Cstrong>Do NOT partition when:\u003C\u002Fstrong> Table is under 1M rows (overhead exceeds benefit), queries always hit an index (partition pruning adds nothing), write patterns are random (no natural partition key)\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Ch2 id=\"partition-strategies\">Partition Strategies\u003C\u002Fh2>\n\u003Cp>PostgreSQL supports three native partitioning strategies:\u003C\u002Fp>\n\u003Ch3>Range Partitioning\u003C\u002Fh3>\n\u003Cp>Split by value ranges. Ideal for time-series data:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">CREATE TABLE transactions (\n    id          BIGINT GENERATED ALWAYS AS IDENTITY,\n    block_number BIGINT NOT NULL,\n    tx_hash     BYTEA NOT NULL,\n    from_addr   BYTEA NOT NULL,\n    to_addr     BYTEA,\n    value_wei   NUMERIC(78, 0) NOT NULL,\n    gas_used    BIGINT NOT NULL,\n    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()\n) PARTITION BY RANGE (block_number);\n\n-- Create partitions for every 1M blocks\nCREATE TABLE transactions_0_1m\n    PARTITION OF transactions\n    FOR VALUES FROM (0) TO (1000000);\n\nCREATE TABLE transactions_1m_2m\n    PARTITION OF transactions\n    FOR VALUES FROM (1000000) TO (2000000);\n\nCREATE TABLE transactions_2m_3m\n    PARTITION OF transactions\n    FOR VALUES FROM (2000000) TO (3000000);\n\n-- Continue for each range...\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Queries that filter on \u003Ccode>block_number\u003C\u002Fcode> automatically prune irrelevant partitions:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Only scans transactions_18m_19m 
partition\nSELECT * FROM transactions\nWHERE block_number BETWEEN 18000000 AND 18100000;\n\n-- EXPLAIN shows partition pruning:\n-- Append\n--   -&gt; Index Scan on transactions_18m_19m\n--        Index Cond: (block_number &gt;= 18000000 AND block_number &lt;= 18100000)\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>List Partitioning\u003C\u002Fh3>\n\u003Cp>Split by discrete values. Ideal for categorical data:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">CREATE TABLE events (\n    id          BIGINT GENERATED ALWAYS AS IDENTITY,\n    chain_id    INT NOT NULL,\n    contract    BYTEA NOT NULL,\n    event_name  TEXT NOT NULL,\n    data        JSONB NOT NULL,\n    block_number BIGINT NOT NULL,\n    created_at  TIMESTAMPTZ NOT NULL DEFAULT NOW()\n) PARTITION BY LIST (chain_id);\n\nCREATE TABLE events_ethereum\n    PARTITION OF events FOR VALUES IN (1);\n\nCREATE TABLE events_polygon\n    PARTITION OF events FOR VALUES IN (137);\n\nCREATE TABLE events_arbitrum\n    PARTITION OF events FOR VALUES IN (42161);\n\nCREATE TABLE events_optimism\n    PARTITION OF events FOR VALUES IN (10);\n\nCREATE TABLE events_base\n    PARTITION OF events FOR VALUES IN (8453);\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>Hash Partitioning\u003C\u002Fh3>\n\u003Cp>Split by hash of a column. Ensures even distribution when there is no natural range or list:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">CREATE TABLE addresses (\n    address     BYTEA PRIMARY KEY,\n    first_seen  BIGINT NOT NULL,\n    tx_count    BIGINT NOT NULL DEFAULT 0,\n    balance_wei NUMERIC(78, 0) NOT NULL DEFAULT 0\n) PARTITION BY HASH (address);\n\n-- Create 16 partitions\nCREATE TABLE addresses_p0 PARTITION OF addresses\n    FOR VALUES WITH (MODULUS 16, REMAINDER 0);\nCREATE TABLE addresses_p1 PARTITION OF addresses\n    FOR VALUES WITH (MODULUS 16, REMAINDER 1);\n-- ... 
through p15\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"real-example-34m-rows-to-20-partitions\">Real Example: 34M Rows to 20 Partitions\u003C\u002Fh2>\n\u003Cp>Let us walk through partitioning an existing \u003Ccode>transactions\u003C\u002Fcode> table with 34 million rows.\u003C\u002Fp>\n\u003Ch3>Step 1: Create the Partitioned Table\u003C\u002Fh3>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Create new partitioned table\nCREATE TABLE transactions_partitioned (\n    LIKE transactions INCLUDING ALL\n) PARTITION BY RANGE (block_number);\n\n-- Create 20 partitions of ~1.7M rows each\nDO $$\nDECLARE\n    start_block BIGINT;\nBEGIN\n    FOR i IN 0..19 LOOP\n        start_block := i * 1000000;\n        EXECUTE format(\n            'CREATE TABLE transactions_p%s PARTITION OF transactions_partitioned\n             FOR VALUES FROM (%s) TO (%s)',\n            i, start_block, start_block + 1000000\n        );\n    END LOOP;\nEND $$;\n\n-- Add a default partition for future data\nCREATE TABLE transactions_default\n    PARTITION OF transactions_partitioned DEFAULT;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>Step 2: Migrate Data\u003C\u002Fh3>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Copy data in batches to avoid locking\n-- Requires PostgreSQL 11+ (COMMIT inside DO) and must run in\n-- autocommit mode, not inside an explicit transaction block\nDO $$\nDECLARE\n    batch_size BIGINT := 100000;\n    max_id BIGINT;\n    current_id BIGINT := 0;\nBEGIN\n    SELECT MAX(id) INTO max_id FROM transactions;\n\n    WHILE current_id &lt; max_id LOOP\n        INSERT INTO transactions_partitioned\n        SELECT * FROM transactions\n        WHERE id &gt; current_id AND id &lt;= current_id + batch_size;\n\n        current_id := current_id + batch_size;\n        RAISE NOTICE 'Migrated up to id %', current_id;\n        COMMIT;\n    END LOOP;\nEND $$;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>Step 3: Swap Tables\u003C\u002Fh3>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Atomic swap. Re-run the batch copy first to pick up rows\n-- inserted after the initial pass.\nBEGIN;\nALTER TABLE transactions RENAME TO transactions_old;\nALTER TABLE transactions_partitioned RENAME TO 
transactions;\nCOMMIT;\n\n-- Verify, then drop old table\n-- DROP TABLE transactions_old;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>Step 4: Create Indexes on Partitions\u003C\u002Fh3>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Creating an index on the parent table automatically creates\n-- identical indexes on all partitions. Note that CREATE INDEX\n-- CONCURRENTLY is not supported on partitioned tables; for a\n-- lock-free build, index each partition CONCURRENTLY and attach\n-- it with ALTER INDEX ... ATTACH PARTITION.\nCREATE INDEX idx_transactions_block\n    ON transactions (block_number);\n\nCREATE INDEX idx_transactions_from\n    ON transactions (from_addr, block_number);\n\nCREATE INDEX idx_transactions_to\n    ON transactions (to_addr, block_number);\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"partition-pruning-in-explain\">Partition Pruning in EXPLAIN\u003C\u002Fh2>\n\u003Cp>Verify that the query planner prunes partitions:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">EXPLAIN (ANALYZE, BUFFERS)\nSELECT * FROM transactions\nWHERE block_number BETWEEN 18000000 AND 18500000;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Good output (only relevant partitions scanned):\u003C\u002Fp>\n\u003Cpre>\u003Ccode>Append (cost=0.43..1234.56 rows=50000 width=120)\n  -&gt; Index Scan using transactions_p18_block_number_idx\n     on transactions_p18 (actual time=0.1..12.3 rows=50000)\n     Index Cond: block_number &gt;= 18000000 AND block_number &lt;= 18500000\n     Buffers: shared hit=423\nPlanning Time: 0.5 ms\nExecution Time: 15.2 ms\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Bad output (all partitions scanned):\u003C\u002Fp>\n\u003Cpre>\u003Ccode>Append (cost=0.00..999999.99 rows=34000000 width=120)\n  -&gt; Seq Scan on transactions_p0 ...\n  -&gt; Seq Scan on transactions_p1 ...\n  -&gt; Seq Scan on transactions_p2 ...\n  ... (all 20 partitions)\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>If you see all partitions being scanned, the WHERE clause does not match the partition key. 
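\u003C\u002Fp>\n\u003Cp>A common way pruning gets defeated is wrapping the partition key in an expression; a minimal illustration (assuming the partitioned \u003Ccode>transactions\u003C\u002Fcode> table above):\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Defeats pruning: the planner cannot match an expression\n-- on the partition key against partition bounds\nSELECT count(*) FROM transactions WHERE block_number + 0 = 18000000;\n\n-- Allows pruning: bare column compared to a constant\nSELECT count(*) FROM transactions WHERE block_number = 18000000;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>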
Fix the query or add the partition key to the filter.\u003C\u002Fp>\n\u003Ch2 id=\"automating-partition-creation\">Automating Partition Creation\u003C\u002Fh2>\n\u003Cp>For time-series or block-number-based partitions, automate creation with a cron job or PostgreSQL function:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">CREATE OR REPLACE FUNCTION create_next_partition()\nRETURNS void AS $$\nDECLARE\n    max_block BIGINT;\n    next_start BIGINT;\n    next_end BIGINT;\n    partition_name TEXT;\nBEGIN\n    SELECT MAX(block_number) INTO max_block FROM transactions;\n    next_start := (max_block \u002F 1000000 + 1) * 1000000;\n    next_end := next_start + 1000000;\n    partition_name := format('transactions_p%s', next_start \u002F 1000000);\n\n    EXECUTE format(\n        'CREATE TABLE IF NOT EXISTS %I PARTITION OF transactions\n         FOR VALUES FROM (%s) TO (%s)',\n        partition_name, next_start, next_end\n    );\n\n    RAISE NOTICE 'Created partition % for blocks % to %',\n        partition_name, next_start, next_end;\nEND;\n$$ LANGUAGE plpgsql;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch2 id=\"performance-comparison\">Performance Comparison\u003C\u002Fh2>\n\u003Ctable>\u003Cthead>\u003Ctr>\u003Cth>Query\u003C\u002Fth>\u003Cth>Unpartitioned (34M)\u003C\u002Fth>\u003Cth>Partitioned (20 x 1.7M)\u003C\u002Fth>\u003Cth>Speedup\u003C\u002Fth>\u003C\u002Ftr>\u003C\u002Fthead>\u003Ctbody>\n\u003Ctr>\u003Ctd>Point lookup by block\u003C\u002Ftd>\u003Ctd>230ms\u003C\u002Ftd>\u003Ctd>12ms\u003C\u002Ftd>\u003Ctd>19x\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>Range scan (500K blocks)\u003C\u002Ftd>\u003Ctd>4.8s\u003C\u002Ftd>\u003Ctd>180ms\u003C\u002Ftd>\u003Ctd>27x\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>COUNT(*) full table\u003C\u002Ftd>\u003Ctd>45s\u003C\u002Ftd>\u003Ctd>45s\u003C\u002Ftd>\u003Ctd>1x\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>VACUUM\u003C\u002Ftd>\u003Ctd>2.1 hours\u003C\u002Ftd>\u003Ctd>6.3 
min\u002Fpartition\u003C\u002Ftd>\u003Ctd>Parallel\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>Index rebuild\u003C\u002Ftd>\u003Ctd>12 min (locks table)\u003C\u002Ftd>\u003Ctd>36s\u002Fpartition\u003C\u002Ftd>\u003Ctd>No lock\u003C\u002Ftd>\u003C\u002Ftr>\n\u003C\u002Ftbody>\u003C\u002Ftable>\n\u003Cp>Partitioning dramatically improves queries that filter on the partition key. Full-table scans see no improvement (all partitions are scanned). The biggest operational win is VACUUM and index maintenance, which can now run on individual partitions without affecting the others.\u003C\u002Fp>\n\u003Ch2 id=\"common-pitfalls\">Common Pitfalls\u003C\u002Fh2>\n\u003Ch3>1. Missing Partition Key in WHERE Clause\u003C\u002Fh3>\n\u003Cp>If your query does not filter on the partition key, PostgreSQL scans all partitions. Always include the partition key in WHERE clauses.\u003C\u002Fp>\n\u003Ch3>2. Too Many Partitions\u003C\u002Fh3>\n\u003Cp>Each partition has overhead (file descriptors, planner time). More than 100 partitions can slow down query planning. Aim for partitions of 1-10M rows each.\u003C\u002Fp>\n\u003Ch3>3. Forgetting the Default Partition\u003C\u002Fh3>\n\u003Cp>Without a default partition, inserts with partition key values outside the defined ranges fail with an error. Create a default partition as a safety net, but keep it empty: any rows it holds must be moved out before you can attach a regular partition covering their range.\u003C\u002Fp>\n\u003Ch3>4. Cross-Partition Foreign Keys\u003C\u002Fh3>\n\u003Cp>Before PostgreSQL 12, partitioned tables cannot be referenced by foreign keys. If you are on an older version and other tables reference your partitioned table, you need application-level referential integrity.\u003C\u002Fp>\n\u003Ch2 id=\"conclusion\">Conclusion\u003C\u002Fh2>\n\u003Cp>PostgreSQL table partitioning is a powerful tool for managing large tables. Range partitioning is ideal for time-series and block-based data, list partitioning for categorical splits, and hash partitioning for even distribution. 
Start partitioning when your table exceeds 10M rows and queries consistently scan large portions. The key to success: choose a partition key that matches your most common query patterns, keep partition counts reasonable (10-50), and always verify partition pruning with EXPLAIN ANALYZE.\u003C\u002Fp>\n","en","b0000000-0000-0000-0000-000000000001",true,"2026-03-28T10:44:23.170642Z","PostgreSQL Table Partitioning — When Your Table Hits 10M+ Rows","Practical guide to PostgreSQL table partitioning with range, list, and hash strategies. Real example migrating 34M rows to 20 partitions.","postgresql table partitioning",null,"index, follow",[22,27,31],{"id":23,"name":24,"slug":25,"created_at":26},"c0000000-0000-0000-0000-000000000012","DevOps","devops","2026-03-28T10:44:21.513630Z",{"id":28,"name":29,"slug":30,"created_at":26},"c0000000-0000-0000-0000-000000000022","Performance","performance",{"id":32,"name":33,"slug":34,"created_at":26},"c0000000-0000-0000-0000-000000000005","PostgreSQL","postgresql",[36,43,49],{"id":37,"title":38,"slug":39,"excerpt":40,"locale":12,"category_name":41,"published_at":42},"d0200000-0000-0000-0000-000000000003","Why Bali Is Becoming Southeast Asia's Impact-Tech Hub in 2026","why-bali-becoming-southeast-asia-impact-tech-hub-2026","Bali ranks #16 among Southeast Asian startup ecosystems. With a growing concentration of Web3 builders, AI sustainability startups, and eco-travel tech companies, the island is carving a niche as the region's impact-tech capital.","Engineering","2026-03-28T10:44:37.748283Z",{"id":44,"title":45,"slug":46,"excerpt":47,"locale":12,"category_name":41,"published_at":48},"d0200000-0000-0000-0000-000000000002","ASEAN Data Protection Patchwork: A Developer's Compliance Checklist","asean-data-protection-patchwork-developer-compliance-checklist","Seven ASEAN countries now have comprehensive data protection laws, each with different consent models, localization requirements, and penalty structures. 
Here is a practical compliance checklist for developers building multi-country applications.","2026-03-28T10:44:37.374741Z",{"id":50,"title":51,"slug":52,"excerpt":53,"locale":12,"category_name":41,"published_at":54},"d0200000-0000-0000-0000-000000000001","Indonesia's $29 Billion Digital Transformation: Opportunities for Software Companies","indonesia-29-billion-digital-transformation-opportunities-software-companies","Indonesia's IT services market is projected to reach $29.03 billion in 2026, up from $24.37 billion in 2025. Cloud infrastructure, AI, e-commerce, and data centers are driving the fastest growth in Southeast Asia.","2026-03-28T10:44:37.349311Z",{"id":13,"name":56,"slug":57,"bio":58,"photo_url":19,"linkedin":19,"role":59,"created_at":60,"updated_at":60},"Open Soft Team","open-soft-team","The engineering team at Open Soft, building premium software solutions from Bali, Indonesia.","Engineering Team","2026-03-28T08:31:22.226811Z"]