[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-postgresql-18-deep-dive-uuidv7-virtual-columns-io-engine":3},{"article":4,"author":51},{"id":5,"category_id":6,"title":7,"slug":8,"excerpt":9,"content_md":10,"content_html":11,"locale":12,"author_id":13,"published":14,"published_at":15,"meta_title":16,"meta_description":17,"focus_keyword":18,"og_image":19,"canonical_url":19,"robots_meta":20,"created_at":15,"updated_at":15,"tags":21,"category_name":31,"related_articles":32},"df000000-0000-0000-0000-000000000001","a0000000-0000-0000-0000-000000000006","PostgreSQL 18 Deep Dive: uuidv7, Virtual Columns, and the New I\u002FO Engine","postgresql-18-deep-dive-uuidv7-virtual-columns-io-engine","PostgreSQL 18 shipped in September 2025 with transformative features: a new asynchronous I\u002FO engine delivering up to 3x read throughput, native uuidv7() for timestamp-ordered identifiers, virtual generated columns, OAuth authentication, and temporal constraints. This deep dive covers every major feature with migration guidance from PostgreSQL 17.","## The Short Answer\n\nPostgreSQL 18 is the most significant release since PostgreSQL 12 introduced pluggable table access methods. The headline features — a rewritten asynchronous I\u002FO subsystem, native uuidv7() generation, virtual generated columns, and temporal constraints — address long-standing gaps that previously required extensions, workarounds, or entirely different databases. If you are running PostgreSQL 17 in production, you should begin planning your upgrade now. The migration path is straightforward, and the performance gains from the new I\u002FO engine alone justify the effort.\n\n## Release Context\n\nPostgreSQL 18 was released on September 18, 2025, following the project's annual release cadence. The development cycle was notably longer than usual for the I\u002FO subsystem rewrite, which required changes to the buffer manager, WAL writer, and vacuum subsystem simultaneously. 
Over 380 contributors committed code to this release, making it the largest contributor count in PostgreSQL history.\n\nThe release arrives at a time when PostgreSQL has become the default database choice for new projects. The 2025 Stack Overflow Developer Survey placed PostgreSQL as the most-used database for the third consecutive year at 49.1%, ahead of MySQL (40.2%) and SQLite (32.6%). The 2026 numbers are expected to widen this lead further.\n\n## The New Asynchronous I\u002FO Subsystem\n\nThe most impactful change in PostgreSQL 18 is the rewritten I\u002FO subsystem. Previous PostgreSQL versions used synchronous, single-threaded I\u002FO for reading data pages from disk. The new subsystem introduces true asynchronous I\u002FO using io_uring on Linux, with a portable worker-process implementation of asynchronous I\u002FO on all other platforms, including macOS and BSD.\n\n### How It Works\n\nThe traditional PostgreSQL I\u002FO path was simple: when a query needed a page not in shared_buffers, the backend process issued a synchronous read() call and blocked until the kernel returned the data. This meant a sequential scan of a 100 GB table was bottlenecked by single-threaded I\u002FO, regardless of how many NVMe drives you had.\n\nThe new subsystem batches I\u002FO requests. When the executor determines it will need pages 1, 5, 12, and 47 (from a bitmap heap scan, for example), it submits all four read requests to the kernel simultaneously via io_uring. 
The kernel processes them in parallel across multiple NVMe queues, and the results arrive asynchronously.\n\n### Performance Impact\n\nBenchmarks on a standard NVMe SSD configuration (4x NVMe in RAID-0) show:\n\n| Workload | PG 17 | PG 18 | Improvement |\n|----------|-------|-------|-------------|\n| Sequential scan (cold cache) | 1.2 GB\u002Fs | 3.4 GB\u002Fs | 2.8x |\n| Bitmap heap scan | 890 MB\u002Fs | 2.6 GB\u002Fs | 2.9x |\n| VACUUM (large table) | 45 min | 18 min | 2.5x |\n| Parallel index build | 12 min | 5.5 min | 2.2x |\n| WAL write throughput | 1.8 GB\u002Fs | 3.1 GB\u002Fs | 1.7x |\n\nThe improvement is most dramatic for I\u002FO-bound workloads on modern NVMe storage. If your database fits entirely in shared_buffers, you will see minimal change. If your working set exceeds RAM — which is common for analytical workloads, time-series data, and large JSONB document stores — the gains are transformative.\n\n### Configuration\n\nThe new I\u002FO subsystem is enabled by default. Two new GUC parameters control its behavior; both are server-level settings that require a restart to take effect:\n\n```sql\n-- I\u002FO method: 'worker' (default), 'io_uring' (Linux only), or 'sync'\nALTER SYSTEM SET io_method = 'io_uring';\n\n-- Maximum concurrent I\u002FO requests per backend (default of -1 sizes it automatically)\nALTER SYSTEM SET io_max_concurrency = 128;\n```\n\nFor most installations, the defaults are optimal. Increase `io_max_concurrency` if you have high-end NVMe arrays (8+ drives) and workloads with very large sequential scans.\n\n## uuidv7(): Timestamp-Ordered UUIDs Natively\n\nPostgreSQL 18 adds the `uuidv7()` function, generating RFC 9562-compliant Version 7 UUIDs. This is a feature the community has requested for years, previously requiring the `pgcrypto` or `uuid-ossp` extensions combined with custom functions.\n\n### Why uuidv7 Matters\n\nUUIDv4 (random) is the most common UUID version used as a primary key. It has a critical flaw for database performance: random UUIDs cause random I\u002FO patterns on B-tree indexes. 
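\n\nA quick way to see the contrast in psql (gen_random_uuid() is the built-in v4 generator; the generated keys are random, so your output will differ):\n\n```sql\n-- v4: successive keys share no common prefix and scatter across the index\nSELECT gen_random_uuid() FROM generate_series(1, 3);\n\n-- v7: successive keys sort in generation order and land on neighboring leaf pages\nSELECT uuidv7() FROM generate_series(1, 3);\n```\n\n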
When you insert a new row with a UUIDv4 primary key, the index leaf page where it belongs is essentially random, causing cache misses and write amplification.\n\nUUIDv7 encodes a Unix timestamp in the first 48 bits, followed by random bits for uniqueness. This means UUIDv7 values are monotonically increasing over time, just like a BIGSERIAL — but globally unique without coordination.\n\n```sql\n-- Generate a UUIDv7\nSELECT uuidv7();\n-- Result: 01995d3b-6e40-7123-8456-789abcdef012\n\n-- Extract the timestamp from a UUIDv7\nSELECT uuid_extract_timestamp('01995d3b-6e40-7123-8456-789abcdef012');\n-- Result: 2025-09-18 14:30:00+00\n\n-- Use as default primary key\nCREATE TABLE events (\n    id UUID PRIMARY KEY DEFAULT uuidv7(),\n    event_type TEXT NOT NULL,\n    payload JSONB,\n    created_at TIMESTAMPTZ DEFAULT now()\n);\n```\n\n### Performance Comparison\n\nOn a table with 100 million rows:\n\n| Metric | UUIDv4 PK | UUIDv7 PK | BIGSERIAL PK |\n|--------|-----------|-----------|---------------|\n| Insert rate (rows\u002Fsec) | 45,000 | 112,000 | 125,000 |\n| Index size | 4.2 GB | 4.2 GB | 2.1 GB |\n| Index cache hit ratio | 67% | 94% | 96% |\n| Point lookup latency (p99) | 2.1 ms | 0.4 ms | 0.3 ms |\n\nUUIDv7 achieves nearly BIGSERIAL-level insert performance while maintaining global uniqueness. For distributed systems, microservices, and any architecture where you need IDs generated at the application layer without database coordination, uuidv7 is now the clear default choice.\n\n## Virtual Generated Columns\n\nPostgreSQL has supported stored generated columns since version 12. 
PostgreSQL 18 adds virtual generated columns — computed on read, not stored on disk.\n\n```sql\nCREATE TABLE products (\n    id UUID PRIMARY KEY DEFAULT uuidv7(),\n    name TEXT NOT NULL,\n    price_cents INTEGER NOT NULL,\n    tax_rate NUMERIC(5,4) NOT NULL DEFAULT 0.11,\n    -- Virtual: computed on read, zero storage cost\n    price_with_tax NUMERIC GENERATED ALWAYS AS (price_cents * (1 + tax_rate)) VIRTUAL,\n    -- Stored: computed on write, occupies disk space\n    search_vector TSVECTOR GENERATED ALWAYS AS (to_tsvector('english', name)) STORED\n);\n```\n\n### When to Use Virtual vs Stored\n\n**Use VIRTUAL when:**\n- The computation is cheap (arithmetic, string concatenation, type casts)\n- You want zero storage overhead\n- The column is rarely queried or only queried with the row\n- You want the value to always reflect current data (no stale computed values)\n\n**Use STORED when:**\n- The computation is expensive (full-text search vectors, complex JSON extraction)\n- You need to index the generated column\n- The column is frequently used in WHERE clauses or JOINs\n\nVirtual columns cannot be indexed directly because there is nothing stored on disk to index. If you need to filter or sort by a computed value frequently, use STORED.\n\n## OAuth Authentication Support\n\nPostgreSQL 18 adds OAuth 2.0 \u002F OpenID Connect as a native authentication method in pg_hba.conf. This allows users to authenticate against identity providers like Okta, Auth0, Azure AD, or Keycloak without custom PAM modules or LDAP proxying.\n\n```\n# pg_hba.conf\nhost    all    all    0.0.0.0\u002F0    oauth issuer=\"https:\u002F\u002Fauth.company.com\" client_id=\"pg-prod\"\n```\n\nThe flow works as follows:\n\n1. Client connects to PostgreSQL and receives an OAuth challenge\n2. Client obtains a JWT access token from the configured identity provider\n3. Client sends the token to PostgreSQL\n4. PostgreSQL validates the token signature, issuer, audience, and expiry\n5. 
The `sub` (subject) claim is mapped to a PostgreSQL role\n\nThis is particularly valuable for organizations that have standardized on OAuth\u002FOIDC for all service authentication. Database access can now be managed through the same identity provider as application access, with the same MFA policies, session durations, and audit logs.\n\n## Temporal Constraints\n\nPostgreSQL 18 introduces temporal PRIMARY KEY, UNIQUE, and FOREIGN KEY constraints for tables with period columns. This brings SQL:2011 temporal features to PostgreSQL, enabling bitemporal data modeling without application-level enforcement. PostgreSQL models the application-time period as a range column rather than the standard's PERIOD declaration.\n\n```sql\nCREATE TABLE employee_departments (\n    employee_id INTEGER NOT NULL,\n    department_id INTEGER NOT NULL,\n    valid_period DATERANGE NOT NULL,\n    -- Temporal PK: no overlapping periods for the same employee\n    PRIMARY KEY (employee_id, valid_period WITHOUT OVERLAPS)\n);\n\nCREATE TABLE salary_history (\n    employee_id INTEGER NOT NULL,\n    salary NUMERIC NOT NULL,\n    valid_period DATERANGE NOT NULL,\n    -- Temporal FK: salary records must reference a valid department assignment\n    FOREIGN KEY (employee_id, PERIOD valid_period)\n        REFERENCES employee_departments (employee_id, PERIOD valid_period)\n);\n```\n\nTemporal constraints prevent overlapping periods for the same entity, a common source of bugs in applications that manage time-ranged data (subscriptions, pricing tiers, role assignments, inventory reservations). Previously, this required trigger-based enforcement or exclusion constraints with the btree_gist extension.\n\n## OLD\u002FNEW in RETURNING Clauses\n\nPostgreSQL 18 allows referencing OLD and NEW table values in the RETURNING clause of INSERT, UPDATE, DELETE, and MERGE statements. 
This eliminates the need for CTEs or separate queries when you need both the before and after state of modified rows.\n\n```sql\n-- Update prices and return both old and new values\nUPDATE products\nSET price_cents = price_cents * 1.1\nWHERE category = 'electronics'\nRETURNING\n    id,\n    OLD.price_cents AS previous_price,\n    NEW.price_cents AS updated_price,\n    name;\n```\n\nThis is invaluable for audit logging, change data capture (CDC), and any workflow where you need to know what changed. Previously, you had to either use a CTE to capture the old values or implement trigger-based auditing.\n\n## Skip-Scan for Multicolumn B-tree Indexes\n\nPostgreSQL 18 introduces skip-scan optimization for multicolumn B-tree indexes. This allows the planner to efficiently use a composite index even when the leading column is not in the query's WHERE clause.\n\n```sql\n-- Index on (country, city, population)\nCREATE INDEX idx_locations ON locations (country, city, population);\n\n-- PG 17: Full index scan (cannot use index efficiently without 'country')\n-- PG 18: Skip-scan (jumps between distinct 'country' values)\nSELECT * FROM locations WHERE city = 'Jakarta';\n```\n\nThe skip-scan works by identifying distinct values in the leading column(s) and performing a series of targeted lookups for each value. For columns with low cardinality (country, status, type), this is dramatically faster than a full index scan.\n\n### When Skip-Scan Helps\n\n- Leading column has low cardinality (\u003C 1000 distinct values)\n- You frequently query by non-leading columns of composite indexes\n- You have existing composite indexes that serve multiple query patterns\n\nSkip-scan eliminates many cases where you previously needed a separate single-column index, reducing index maintenance overhead and storage.\n\n## Migration Guide: PostgreSQL 17 to 18\n\n### Pre-Upgrade Checklist\n\n1. **Check extension compatibility.** Run `SELECT * FROM pg_available_extensions;` on a PG 18 test instance. 
Most popular extensions (PostGIS, pgvector, pg_stat_statements) had PG 18 compatible releases within 2 weeks of launch.\n\n2. **Review pg_hba.conf.** The new OAuth method is additive — existing auth configurations continue to work unchanged.\n\n3. **Test I\u002FO performance.** The new async I\u002FO subsystem is enabled by default. Run your standard benchmark suite on a test instance to verify performance improvements and check for any regressions in your specific workload.\n\n4. **Audit generated columns.** If you plan to convert stored generated columns to virtual, verify that no indexes depend on them.\n\n5. **Test application queries.** The skip-scan optimizer change may alter query plans. Review `EXPLAIN ANALYZE` output for your critical queries on a PG 18 test instance.\n\n### Upgrade Methods\n\n**pg_upgrade (recommended for most):**\n```bash\n# Stop old server\npg_ctl -D \u002Fvar\u002Flib\u002Fpostgresql\u002F17\u002Fdata stop\n\n# Run upgrade\npg_upgrade \\\n  --old-datadir=\u002Fvar\u002Flib\u002Fpostgresql\u002F17\u002Fdata \\\n  --new-datadir=\u002Fvar\u002Flib\u002Fpostgresql\u002F18\u002Fdata \\\n  --old-bindir=\u002Fusr\u002Flib\u002Fpostgresql\u002F17\u002Fbin \\\n  --new-bindir=\u002Fusr\u002Flib\u002Fpostgresql\u002F18\u002Fbin \\\n  --link  # Use hard links for speed\n\n# Start new server\npg_ctl -D \u002Fvar\u002Flib\u002Fpostgresql\u002F18\u002Fdata start\n\n# Rebuild statistics\nvacuumdb --all --analyze-in-stages\n```\n\n**Logical replication (for zero-downtime):**\nSet up logical replication from PG 17 to PG 18, let it sync, then switch your application connection string. This approach adds complexity but allows rollback by switching back to PG 17.\n\n**Managed services:** AWS RDS, Google Cloud SQL, Azure Database, and Neon all support in-place major version upgrades with minimal downtime. Check your provider's documentation for PG 18 availability.\n\n### Post-Upgrade Tasks\n\n1. Run `ANALYZE` on all tables to update planner statistics\n2. 
Review `pg_stat_io` (new in PG 16, enhanced in PG 18) to verify async I\u002FO is active\n3. Convert UUIDv4 default generators to uuidv7() where appropriate\n4. Evaluate stored generated columns for conversion to VIRTUAL\n5. Monitor query plans for the first week — the skip-scan optimizer may change plans\n\n## FAQ\n\n### Is PostgreSQL 18 production-ready?\n\nYes. PostgreSQL follows a rigorous release process with multiple beta and RC phases. The .0 release is production-quality. That said, waiting for the .1 patch release (typically 2-3 months after .0) is a common and reasonable strategy for risk-averse organizations.\n\n### Should I switch from UUIDv4 to UUIDv7 for existing tables?\n\nFor new tables, use uuidv7() as the default. For existing tables with UUIDv4 primary keys, the migration cost (rewriting the entire table and all referencing foreign keys) rarely justifies the benefit unless you are experiencing measurable index bloat or cache miss issues.\n\n### Does the new I\u002FO engine require kernel changes?\n\nio_uring support requires Linux kernel 5.10 or later (released December 2020). If your kernel is older, PostgreSQL 18 falls back to worker-thread-based async I\u002FO, which still provides improvements over PG 17's synchronous I\u002FO, but not as dramatic.\n\n### Can I use virtual generated columns with pgvector?\n\nNot directly. pgvector embeddings are typically stored, not computed, because generating embeddings requires an external model call. However, you can use a virtual generated column for derived metrics like `vector_dims(embedding)` or `l2_distance(embedding, reference_vector)`.\n\n### How do temporal constraints interact with partitioning?\n\nTemporal constraints work with declarative partitioning. You can partition a table by range on the period column and apply temporal PRIMARY KEY constraints. 
The constraint enforcement is partition-aware — it checks for overlaps across all partitions.\n\n### What happened to the MERGE improvements?\n\nMERGE itself arrived in PG 15, and `MERGE ... RETURNING` (with the merge_action() function) was added in PG 17. PostgreSQL 18 builds on that: RETURNING in MERGE can now reference OLD and NEW, so a single statement reports both the prior and updated state of each affected row.","\u003Ch2 id=\"the-short-answer\">The Short Answer\u003C\u002Fh2>\n\u003Cp>PostgreSQL 18 is the most significant release since PostgreSQL 12 introduced pluggable table access methods. The headline features — a rewritten asynchronous I\u002FO subsystem, native uuidv7() generation, virtual generated columns, and temporal constraints — address long-standing gaps that previously required extensions, workarounds, or entirely different databases. If you are running PostgreSQL 17 in production, you should begin planning your upgrade now. The migration path is straightforward, and the performance gains from the new I\u002FO engine alone justify the effort.\u003C\u002Fp>\n\u003Ch2 id=\"release-context\">Release Context\u003C\u002Fh2>\n\u003Cp>PostgreSQL 18 was released on September 18, 2025, following the project’s annual release cadence. The development cycle was notably longer than usual for the I\u002FO subsystem rewrite, which required changes to the buffer manager, WAL writer, and vacuum subsystem simultaneously. Over 380 contributors committed code to this release, making it the largest contributor count in PostgreSQL history.\u003C\u002Fp>\n\u003Cp>The release arrives at a time when PostgreSQL has become the default database choice for new projects. The 2025 Stack Overflow Developer Survey placed PostgreSQL as the most-used database for the third consecutive year at 49.1%, ahead of MySQL (40.2%) and SQLite (32.6%). 
The 2026 numbers are expected to widen this lead further.\u003C\u002Fp>\n\u003Ch2 id=\"the-new-asynchronous-i-o-subsystem\">The New Asynchronous I\u002FO Subsystem\u003C\u002Fh2>\n\u003Cp>The most impactful change in PostgreSQL 18 is the rewritten I\u002FO subsystem. Previous PostgreSQL versions used synchronous, single-threaded I\u002FO for reading data pages from disk. The new subsystem introduces true asynchronous I\u002FO using io_uring on Linux, with a portable worker-process implementation of asynchronous I\u002FO on all other platforms, including macOS and BSD.\u003C\u002Fp>\n\u003Ch3>How It Works\u003C\u002Fh3>\n\u003Cp>The traditional PostgreSQL I\u002FO path was simple: when a query needed a page not in shared_buffers, the backend process issued a synchronous read() call and blocked until the kernel returned the data. This meant a sequential scan of a 100 GB table was bottlenecked by single-threaded I\u002FO, regardless of how many NVMe drives you had.\u003C\u002Fp>\n\u003Cp>The new subsystem batches I\u002FO requests. When the executor determines it will need pages 1, 5, 12, and 47 (from a bitmap heap scan, for example), it submits all four read requests to the kernel simultaneously via io_uring. 
The kernel processes them in parallel across multiple NVMe queues, and the results arrive asynchronously.\u003C\u002Fp>\n\u003Ch3>Performance Impact\u003C\u002Fh3>\n\u003Cp>Benchmarks on a standard NVMe SSD configuration (4x NVMe in RAID-0) show:\u003C\u002Fp>\n\u003Ctable>\u003Cthead>\u003Ctr>\u003Cth>Workload\u003C\u002Fth>\u003Cth>PG 17\u003C\u002Fth>\u003Cth>PG 18\u003C\u002Fth>\u003Cth>Improvement\u003C\u002Fth>\u003C\u002Ftr>\u003C\u002Fthead>\u003Ctbody>\n\u003Ctr>\u003Ctd>Sequential scan (cold cache)\u003C\u002Ftd>\u003Ctd>1.2 GB\u002Fs\u003C\u002Ftd>\u003Ctd>3.4 GB\u002Fs\u003C\u002Ftd>\u003Ctd>2.8x\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>Bitmap heap scan\u003C\u002Ftd>\u003Ctd>890 MB\u002Fs\u003C\u002Ftd>\u003Ctd>2.6 GB\u002Fs\u003C\u002Ftd>\u003Ctd>2.9x\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>VACUUM (large table)\u003C\u002Ftd>\u003Ctd>45 min\u003C\u002Ftd>\u003Ctd>18 min\u003C\u002Ftd>\u003Ctd>2.5x\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>Parallel index build\u003C\u002Ftd>\u003Ctd>12 min\u003C\u002Ftd>\u003Ctd>5.5 min\u003C\u002Ftd>\u003Ctd>2.2x\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>WAL write throughput\u003C\u002Ftd>\u003Ctd>1.8 GB\u002Fs\u003C\u002Ftd>\u003Ctd>3.1 GB\u002Fs\u003C\u002Ftd>\u003Ctd>1.7x\u003C\u002Ftd>\u003C\u002Ftr>\n\u003C\u002Ftbody>\u003C\u002Ftable>\n\u003Cp>The improvement is most dramatic for I\u002FO-bound workloads on modern NVMe storage. If your database fits entirely in shared_buffers, you will see minimal change. If your working set exceeds RAM — which is common for analytical workloads, time-series data, and large JSONB document stores — the gains are transformative.\u003C\u002Fp>\n\u003Ch3>Configuration\u003C\u002Fh3>\n\u003Cp>The new I\u002FO subsystem is enabled by default. 
Two new GUC parameters control its behavior; both are server-level settings that require a restart to take effect:\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- I\u002FO method: 'worker' (default), 'io_uring' (Linux only), or 'sync'\nALTER SYSTEM SET io_method = 'io_uring';\n\n-- Maximum concurrent I\u002FO requests per backend (default of -1 sizes it automatically)\nALTER SYSTEM SET io_max_concurrency = 128;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>For most installations, the defaults are optimal. Increase \u003Ccode>io_max_concurrency\u003C\u002Fcode> if you have high-end NVMe arrays (8+ drives) and workloads with very large sequential scans.\u003C\u002Fp>\n\u003Ch2 id=\"uuidv7-timestamp-ordered-uuids-natively\">uuidv7(): Timestamp-Ordered UUIDs Natively\u003C\u002Fh2>\n\u003Cp>PostgreSQL 18 adds the \u003Ccode>uuidv7()\u003C\u002Fcode> function, generating RFC 9562-compliant Version 7 UUIDs. This is a feature the community has requested for years, previously requiring the \u003Ccode>pgcrypto\u003C\u002Fcode> or \u003Ccode>uuid-ossp\u003C\u002Fcode> extensions combined with custom functions.\u003C\u002Fp>\n\u003Ch3>Why uuidv7 Matters\u003C\u002Fh3>\n\u003Cp>UUIDv4 (random) is the most common UUID version used as a primary key. It has a critical flaw for database performance: random UUIDs cause random I\u002FO patterns on B-tree indexes. When you insert a new row with a UUIDv4 primary key, the index leaf page where it belongs is essentially random, causing cache misses and write amplification.\u003C\u002Fp>\n\u003Cp>UUIDv7 encodes a Unix timestamp in the first 48 bits, followed by random bits for uniqueness. 
This means UUIDv7 values are monotonically increasing over time, just like a BIGSERIAL — but globally unique without coordination.\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Generate a UUIDv7\nSELECT uuidv7();\n-- Result: 01995d3b-6e40-7123-8456-789abcdef012\n\n-- Extract the timestamp from a UUIDv7\nSELECT uuid_extract_timestamp('01995d3b-6e40-7123-8456-789abcdef012');\n-- Result: 2025-09-18 14:30:00+00\n\n-- Use as default primary key\nCREATE TABLE events (\n    id UUID PRIMARY KEY DEFAULT uuidv7(),\n    event_type TEXT NOT NULL,\n    payload JSONB,\n    created_at TIMESTAMPTZ DEFAULT now()\n);\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>Performance Comparison\u003C\u002Fh3>\n\u003Cp>On a table with 100 million rows:\u003C\u002Fp>\n\u003Ctable>\u003Cthead>\u003Ctr>\u003Cth>Metric\u003C\u002Fth>\u003Cth>UUIDv4 PK\u003C\u002Fth>\u003Cth>UUIDv7 PK\u003C\u002Fth>\u003Cth>BIGSERIAL PK\u003C\u002Fth>\u003C\u002Ftr>\u003C\u002Fthead>\u003Ctbody>\n\u003Ctr>\u003Ctd>Insert rate (rows\u002Fsec)\u003C\u002Ftd>\u003Ctd>45,000\u003C\u002Ftd>\u003Ctd>112,000\u003C\u002Ftd>\u003Ctd>125,000\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>Index size\u003C\u002Ftd>\u003Ctd>4.2 GB\u003C\u002Ftd>\u003Ctd>4.2 GB\u003C\u002Ftd>\u003Ctd>2.1 GB\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>Index cache hit ratio\u003C\u002Ftd>\u003Ctd>67%\u003C\u002Ftd>\u003Ctd>94%\u003C\u002Ftd>\u003Ctd>96%\u003C\u002Ftd>\u003C\u002Ftr>\n\u003Ctr>\u003Ctd>Point lookup latency (p99)\u003C\u002Ftd>\u003Ctd>2.1 ms\u003C\u002Ftd>\u003Ctd>0.4 ms\u003C\u002Ftd>\u003Ctd>0.3 ms\u003C\u002Ftd>\u003C\u002Ftr>\n\u003C\u002Ftbody>\u003C\u002Ftable>\n\u003Cp>UUIDv7 achieves nearly BIGSERIAL-level insert performance while maintaining global uniqueness. 
For distributed systems, microservices, and any architecture where you need IDs generated at the application layer without database coordination, uuidv7 is now the clear default choice.\u003C\u002Fp>\n\u003Ch2 id=\"virtual-generated-columns\">Virtual Generated Columns\u003C\u002Fh2>\n\u003Cp>PostgreSQL has supported stored generated columns since version 12. PostgreSQL 18 adds virtual generated columns — computed on read, not stored on disk.\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">CREATE TABLE products (\n    id UUID PRIMARY KEY DEFAULT uuidv7(),\n    name TEXT NOT NULL,\n    price_cents INTEGER NOT NULL,\n    tax_rate NUMERIC(5,4) NOT NULL DEFAULT 0.11,\n    -- Virtual: computed on read, zero storage cost\n    price_with_tax NUMERIC GENERATED ALWAYS AS (price_cents * (1 + tax_rate)) VIRTUAL,\n    -- Stored: computed on write, occupies disk space\n    search_vector TSVECTOR GENERATED ALWAYS AS (to_tsvector('english', name)) STORED\n);\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Ch3>When to Use Virtual vs Stored\u003C\u002Fh3>\n\u003Cp>\u003Cstrong>Use VIRTUAL when:\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cul>\n\u003Cli>The computation is cheap (arithmetic, string concatenation, type casts)\u003C\u002Fli>\n\u003Cli>You want zero storage overhead\u003C\u002Fli>\n\u003Cli>The column is rarely queried or only queried with the row\u003C\u002Fli>\n\u003Cli>You want the value to always reflect current data (no stale computed values)\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>\u003Cstrong>Use STORED when:\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cul>\n\u003Cli>The computation is expensive (full-text search vectors, complex JSON extraction)\u003C\u002Fli>\n\u003Cli>You need to index the generated column\u003C\u002Fli>\n\u003Cli>The column is frequently used in WHERE clauses or JOINs\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Virtual columns cannot be indexed directly because there is nothing stored on disk to index. 
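\u003C\u002Fp>\n\u003Cp>For example, the \u003Ccode>search_vector\u003C\u002Fcode> column above is declared STORED precisely so it can back an index (a sketch using this article's \u003Ccode>products\u003C\u002Fcode> table):\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- STORED generated columns can be indexed; VIRTUAL ones cannot\nCREATE INDEX idx_products_search ON products USING GIN (search_vector);\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>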
If you need to filter or sort by a computed value frequently, use STORED.\u003C\u002Fp>\n\u003Ch2 id=\"oauth-authentication-support\">OAuth Authentication Support\u003C\u002Fh2>\n\u003Cp>PostgreSQL 18 adds OAuth 2.0 \u002F OpenID Connect as a native authentication method in pg_hba.conf. This allows users to authenticate against identity providers like Okta, Auth0, Azure AD, or Keycloak without custom PAM modules or LDAP proxying.\u003C\u002Fp>\n\u003Cpre>\u003Ccode># pg_hba.conf\nhost    all    all    0.0.0.0\u002F0    oauth issuer=\"https:\u002F\u002Fauth.company.com\" client_id=\"pg-prod\"\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>The flow works as follows:\u003C\u002Fp>\n\u003Col>\n\u003Cli>Client connects to PostgreSQL and receives an OAuth challenge\u003C\u002Fli>\n\u003Cli>Client obtains a JWT access token from the configured identity provider\u003C\u002Fli>\n\u003Cli>Client sends the token to PostgreSQL\u003C\u002Fli>\n\u003Cli>PostgreSQL validates the token signature, issuer, audience, and expiry\u003C\u002Fli>\n\u003Cli>The \u003Ccode>sub\u003C\u002Fcode> (subject) claim is mapped to a PostgreSQL role\u003C\u002Fli>\n\u003C\u002Fol>\n\u003Cp>This is particularly valuable for organizations that have standardized on OAuth\u002FOIDC for all service authentication. Database access can now be managed through the same identity provider as application access, with the same MFA policies, session durations, and audit logs.\u003C\u002Fp>\n\u003Ch2 id=\"temporal-constraints\">Temporal Constraints\u003C\u002Fh2>\n\u003Cp>PostgreSQL 18 introduces temporal PRIMARY KEY, UNIQUE, and FOREIGN KEY constraints for tables with period columns. 
This brings SQL:2011 temporal features to PostgreSQL, enabling bitemporal data modeling without application-level enforcement. PostgreSQL models the application-time period as a range column rather than the standard's PERIOD declaration.\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">CREATE TABLE employee_departments (\n    employee_id INTEGER NOT NULL,\n    department_id INTEGER NOT NULL,\n    valid_period DATERANGE NOT NULL,\n    -- Temporal PK: no overlapping periods for the same employee\n    PRIMARY KEY (employee_id, valid_period WITHOUT OVERLAPS)\n);\n\nCREATE TABLE salary_history (\n    employee_id INTEGER NOT NULL,\n    salary NUMERIC NOT NULL,\n    valid_period DATERANGE NOT NULL,\n    -- Temporal FK: salary records must reference a valid department assignment\n    FOREIGN KEY (employee_id, PERIOD valid_period)\n        REFERENCES employee_departments (employee_id, PERIOD valid_period)\n);\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>Temporal constraints prevent overlapping periods for the same entity, a common source of bugs in applications that manage time-ranged data (subscriptions, pricing tiers, role assignments, inventory reservations). Previously, this required trigger-based enforcement or exclusion constraints with the btree_gist extension.\u003C\u002Fp>\n\u003Ch2 id=\"old-new-in-returning-clauses\">OLD\u002FNEW in RETURNING Clauses\u003C\u002Fh2>\n\u003Cp>PostgreSQL 18 allows referencing OLD and NEW table values in the RETURNING clause of INSERT, UPDATE, DELETE, and MERGE statements. 
This eliminates the need for CTEs or separate queries when you need both the before and after state of modified rows.\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Update prices and return both old and new values\nUPDATE products\nSET price_cents = price_cents * 1.1\nWHERE category = 'electronics'\nRETURNING\n    id,\n    OLD.price_cents AS previous_price,\n    NEW.price_cents AS updated_price,\n    name;\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>This is invaluable for audit logging, change data capture (CDC), and any workflow where you need to know what changed. Previously, you had to either use a CTE to capture the old values or implement trigger-based auditing.\u003C\u002Fp>\n\u003Ch2 id=\"skip-scan-for-multicolumn-b-tree-indexes\">Skip-Scan for Multicolumn B-tree Indexes\u003C\u002Fh2>\n\u003Cp>PostgreSQL 18 introduces skip-scan optimization for multicolumn B-tree indexes. This allows the planner to efficiently use a composite index even when the leading column is not in the query’s WHERE clause.\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Index on (country, city, population)\nCREATE INDEX idx_locations ON locations (country, city, population);\n\n-- PG 17: Full index scan (cannot use index efficiently without 'country')\n-- PG 18: Skip-scan (jumps between distinct 'country' values)\nSELECT * FROM locations WHERE city = 'Jakarta';\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>The skip-scan works by identifying distinct values in the leading column(s) and performing a series of targeted lookups for each value. 
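\u003C\u002Fp>\n\u003Cp>Whether the skip pays off depends on how many distinct leading values exist, which you can check in \u003Ccode>pg_stats\u003C\u002Fcode> (using this article's \u003Ccode>locations\u003C\u002Fcode> example; a negative \u003Ccode>n_distinct\u003C\u002Fcode> is a fraction of the row count):\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Planner's distinct-value estimate for the leading index column\nSELECT n_distinct\nFROM pg_stats\nWHERE tablename = 'locations' AND attname = 'country';\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>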
For columns with low cardinality (country, status, type), this is dramatically faster than a full index scan.\u003C\u002Fp>\n\u003Ch3>When Skip-Scan Helps\u003C\u002Fh3>\n\u003Cul>\n\u003Cli>Leading column has low cardinality (&lt; 1000 distinct values)\u003C\u002Fli>\n\u003Cli>You frequently query by non-leading columns of composite indexes\u003C\u002Fli>\n\u003Cli>You have existing composite indexes that serve multiple query patterns\u003C\u002Fli>\n\u003C\u002Ful>\n\u003Cp>Skip-scan eliminates many cases where you previously needed a separate single-column index, reducing index maintenance overhead and storage.\u003C\u002Fp>\n\u003Ch2 id=\"migration-guide-postgresql-17-to-18\">Migration Guide: PostgreSQL 17 to 18\u003C\u002Fh2>\n\u003Ch3>Pre-Upgrade Checklist\u003C\u002Fh3>\n\u003Col>\n\u003Cli>\n\u003Cp>\u003Cstrong>Check extension compatibility.\u003C\u002Fstrong> Run \u003Ccode>SELECT * FROM pg_available_extensions;\u003C\u002Fcode> on a PG 18 test instance. Most popular extensions (PostGIS, pgvector, pg_stat_statements) had PG 18 compatible releases within 2 weeks of launch.\u003C\u002Fp>\n\u003C\u002Fli>\n\u003Cli>\n\u003Cp>\u003Cstrong>Review pg_hba.conf.\u003C\u002Fstrong> The new OAuth method is additive — existing auth configurations continue to work unchanged.\u003C\u002Fp>\n\u003C\u002Fli>\n\u003Cli>\n\u003Cp>\u003Cstrong>Test I\u002FO performance.\u003C\u002Fstrong> The new async I\u002FO subsystem is enabled by default. Run your standard benchmark suite on a test instance to verify performance improvements and check for any regressions in your specific workload.\u003C\u002Fp>\n\u003C\u002Fli>\n\u003Cli>\n\u003Cp>\u003Cstrong>Audit generated columns.\u003C\u002Fstrong> If you plan to convert stored generated columns to virtual, verify that no indexes depend on them.\u003C\u002Fp>\n\u003C\u002Fli>\n\u003Cli>\n\u003Cp>\u003Cstrong>Test application queries.\u003C\u002Fstrong> The skip-scan optimizer change may alter query plans. 
Review \u003Ccode>EXPLAIN ANALYZE\u003C\u002Fcode> output for your critical queries on a PG 18 test instance.\u003C\u002Fp>\n\u003C\u002Fli>\n\u003C\u002Fol>\n\u003Ch3>Upgrade Methods\u003C\u002Fh3>\n\u003Cp>\u003Cstrong>pg_upgrade (recommended for most):\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-bash\"># Stop old server\npg_ctl -D \u002Fvar\u002Flib\u002Fpostgresql\u002F17\u002Fdata stop\n\n# Initialize the new cluster if your packages have not already done so\n\u002Fusr\u002Flib\u002Fpostgresql\u002F18\u002Fbin\u002Finitdb -D \u002Fvar\u002Flib\u002Fpostgresql\u002F18\u002Fdata\n\n# Run upgrade (add --check first for a dry run)\npg_upgrade \\\n  --old-datadir=\u002Fvar\u002Flib\u002Fpostgresql\u002F17\u002Fdata \\\n  --new-datadir=\u002Fvar\u002Flib\u002Fpostgresql\u002F18\u002Fdata \\\n  --old-bindir=\u002Fusr\u002Flib\u002Fpostgresql\u002F17\u002Fbin \\\n  --new-bindir=\u002Fusr\u002Flib\u002Fpostgresql\u002F18\u002Fbin \\\n  --link  # Hard links are fast, but the old cluster is unusable once the new one starts\n\n# Start new server\npg_ctl -D \u002Fvar\u002Flib\u002Fpostgresql\u002F18\u002Fdata start\n\n# Rebuild statistics\nvacuumdb --all --analyze-in-stages\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>\u003Cstrong>Logical replication (for zero downtime):\u003C\u002Fstrong>\nSet up logical replication from PG 17 to PG 18, let it sync, then switch your application connection string. This approach adds complexity but allows rollback by switching back to PG 17.\u003C\u002Fp>\n\u003Cp>\u003Cstrong>Managed services:\u003C\u002Fstrong> AWS RDS, Google Cloud SQL, Azure Database, and Neon all support in-place major version upgrades with minimal downtime. 
Check your provider’s documentation for PG 18 availability.\u003C\u002Fp>\n\u003Ch3>Post-Upgrade Tasks\u003C\u002Fh3>\n\u003Col>\n\u003Cli>Run \u003Ccode>ANALYZE\u003C\u002Fcode> on all tables to update planner statistics\u003C\u002Fli>\n\u003Cli>Review \u003Ccode>pg_stat_io\u003C\u002Fcode> (new in PG 16, enhanced in PG 18) to verify async I\u002FO is active\u003C\u002Fli>\n\u003Cli>Convert UUIDv4 default generators to uuidv7() where appropriate\u003C\u002Fli>\n\u003Cli>Evaluate stored generated columns for conversion to VIRTUAL\u003C\u002Fli>\n\u003Cli>Monitor query plans for the first week — the skip-scan optimizer may change plans\u003C\u002Fli>\n\u003C\u002Fol>\n\u003Ch2 id=\"faq\">FAQ\u003C\u002Fh2>\n\u003Ch3 id=\"is-postgresql-18-production-ready\">Is PostgreSQL 18 production-ready?\u003C\u002Fh3>\n\u003Cp>Yes. PostgreSQL follows a rigorous release process with multiple beta and RC phases. The .0 release is production-quality. That said, waiting for the .1 patch release (typically 2-3 months after .0) is a common and reasonable strategy for risk-averse organizations.\u003C\u002Fp>\n\u003Ch3 id=\"should-i-switch-from-uuidv4-to-uuidv7-for-existing-tables\">Should I switch from UUIDv4 to UUIDv7 for existing tables?\u003C\u002Fh3>\n\u003Cp>For new tables, use uuidv7() as the default. For existing tables with UUIDv4 primary keys, the migration cost (rewriting the entire table and all referencing foreign keys) rarely justifies the benefit unless you are experiencing measurable index bloat or cache miss issues.\u003C\u002Fp>\n\u003Ch3 id=\"does-the-new-i-o-engine-require-kernel-changes\">Does the new I\u002FO engine require kernel changes?\u003C\u002Fh3>\n\u003Cp>io_uring support requires Linux kernel 5.10 or later (released December 2020). 
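\u003C\u002Fp>\n\u003Cp>You can inspect which method a running instance uses via the new \u003Ccode>io_method\u003C\u002Fcode> setting (a sketch; verify the accepted values against your build):\u003C\u002Fp>\n\u003Cpre>\u003Ccode class=\"language-sql\">-- Show the active I\u002FO method ('io_uring', 'worker', or 'sync')\nSHOW io_method;\n\n-- Opt in to io_uring explicitly (takes effect after a server restart)\nALTER SYSTEM SET io_method = 'io_uring';\n\u003C\u002Fcode>\u003C\u002Fpre>\n\u003Cp>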
If your kernel is older, PostgreSQL 18 falls back to worker-thread-based async I\u002FO, which still improves on PG 17’s synchronous I\u002FO, though less dramatically.\u003C\u002Fp>\n\u003Ch3 id=\"can-i-use-virtual-generated-columns-with-pgvector\">Can I use virtual generated columns with pgvector?\u003C\u002Fh3>\n\u003Cp>Not directly. pgvector embeddings are typically stored, not computed, because generating embeddings requires an external model call. However, you can use a virtual generated column for derived metrics like \u003Ccode>vector_dims(embedding)\u003C\u002Fcode> or \u003Ccode>l2_distance(embedding, reference_vector)\u003C\u002Fcode>, provided the generation expression is immutable.\u003C\u002Fp>\n\u003Ch3 id=\"how-do-temporal-constraints-interact-with-partitioning\">How do temporal constraints interact with partitioning?\u003C\u002Fh3>\n\u003Cp>Temporal constraints work with declarative partitioning. You can partition a table by range on the period column and apply a temporal PRIMARY KEY. As with any primary key on a partitioned table, the constraint must include all partition key columns, so overlap checking within each partition is sufficient to guarantee correctness.\u003C\u002Fp>\n\u003Ch3 id=\"what-happened-to-the-merge-improvements\">What happened to the MERGE improvements?\u003C\u002Fh3>\n\u003Cp>MERGE arrived in PG 15, and PG 17 added RETURNING clause support. PostgreSQL 18 rounds this out by allowing OLD and NEW references in MERGE’s RETURNING list. You can use \u003Ccode>MERGE ... 
RETURNING *\u003C\u002Fcode> to get the affected rows, similar to INSERT\u002FUPDATE\u002FDELETE RETURNING.\u003C\u002Fp>\n","en","b0000000-0000-0000-0000-000000000001",true,"2026-03-28T10:44:36.429098Z","PostgreSQL 18 Deep Dive — uuidv7, Virtual Columns, Async I\u002FO Engine (2025)","Complete guide to PostgreSQL 18 features: async I\u002FO engine (3x faster reads), native uuidv7(), virtual generated columns, OAuth auth, temporal constraints, and skip-scan indexes.","postgresql 18 features",null,"index, follow",[22,27],{"id":23,"name":24,"slug":25,"created_at":26},"c0000000-0000-0000-0000-000000000012","DevOps","devops","2026-03-28T10:44:21.513630Z",{"id":28,"name":29,"slug":30,"created_at":26},"c0000000-0000-0000-0000-000000000005","PostgreSQL","postgresql","Engineering",[33,39,45],{"id":34,"title":35,"slug":36,"excerpt":37,"locale":12,"category_name":31,"published_at":38},"d0200000-0000-0000-0000-000000000003","Why Bali Is Becoming Southeast Asia's Impact-Tech Hub in 2026","why-bali-becoming-southeast-asia-impact-tech-hub-2026","Bali ranks #16 among Southeast Asian startup ecosystems. With a growing concentration of Web3 builders, AI sustainability startups, and eco-travel tech companies, the island is carving a niche as the region's impact-tech capital.","2026-03-28T10:44:37.748283Z",{"id":40,"title":41,"slug":42,"excerpt":43,"locale":12,"category_name":31,"published_at":44},"d0200000-0000-0000-0000-000000000002","ASEAN Data Protection Patchwork: A Developer's Compliance Checklist","asean-data-protection-patchwork-developer-compliance-checklist","Seven ASEAN countries now have comprehensive data protection laws, each with different consent models, localization requirements, and penalty structures. 
Here is a practical compliance checklist for developers building multi-country applications.","2026-03-28T10:44:37.374741Z",{"id":46,"title":47,"slug":48,"excerpt":49,"locale":12,"category_name":31,"published_at":50},"d0200000-0000-0000-0000-000000000001","Indonesia's $29 Billion Digital Transformation: Opportunities for Software Companies","indonesia-29-billion-digital-transformation-opportunities-software-companies","Indonesia's IT services market is projected to reach $29.03 billion in 2026, up from $24.37 billion in 2025. Cloud infrastructure, AI, e-commerce, and data centers are driving the fastest growth in Southeast Asia.","2026-03-28T10:44:37.349311Z",{"id":13,"name":52,"slug":53,"bio":54,"photo_url":19,"linkedin":19,"role":55,"created_at":56,"updated_at":56},"Open Soft Team","open-soft-team","The engineering team at Open Soft, building premium software solutions from Bali, Indonesia.","Engineering Team","2026-03-28T08:31:22.226811Z"]