Engineering · Mar 28, 2026

Neon vs Turso vs PlanetScale: Choosing a Serverless Database in 2026

Open Soft Team

Engineering Team

The Short Answer

If you need PostgreSQL compatibility with modern developer experience, choose Neon. If you need sub-10ms reads at the edge with SQLite compatibility, choose Turso. If you are running a MySQL workload and need horizontal sharding, choose PlanetScale. All three are production-ready in 2026, and the choice depends primarily on your SQL dialect preference and deployment topology.

The Serverless Database Landscape in 2026

The serverless database market has matured dramatically since 2023. What began as experimental managed offerings has become the default deployment model for startups and an increasingly common choice for enterprises. The global serverless database market reached $14.2 billion in 2025, growing at 28% CAGR according to Gartner.

Three platforms have emerged as clear leaders, each built on fundamentally different foundations:

  • Neon — Serverless PostgreSQL with storage-compute separation, branching, and autoscaling to zero
  • Turso — libSQL (SQLite fork) with edge replication, embedded replicas, and per-request routing
  • PlanetScale — MySQL-compatible, built on Vitess (the YouTube/Google scaling technology), with schema-safe deployments

These are not interchangeable. Each excels in different architectural contexts, and choosing the wrong one creates friction that compounds over time.

Neon: Serverless PostgreSQL Done Right

Neon is a serverless PostgreSQL platform that separates storage from compute, enabling features impossible in traditional PostgreSQL deployments: instant branching, scale-to-zero, and point-in-time restore at the storage layer.

Architecture

Neon’s architecture splits PostgreSQL into three layers:

  1. Compute: Standard PostgreSQL instances (currently PG 16 and 17, with PG 18 support announced for Q1 2026) that handle query execution
  2. Pageserver: A custom storage backend that replaces PostgreSQL’s local file system, storing pages in a tiered format optimized for cloud object storage
  3. Safekeepers: WAL durability nodes that ensure no committed transaction is lost

This separation means compute can scale independently of storage. A Neon database can scale to zero when idle (paying only for storage) and spin up a compute endpoint in ~500ms when a connection arrives.

Branching: The Killer Feature

Neon’s most distinctive capability is database branching, modeled after Git. Creating a branch is a copy-on-write operation that completes in milliseconds regardless of database size.

# Create a branch from production for testing
neonctl branches create --name feature-auth-redesign --parent main

# Get the connection string for the branch
neonctl connection-string feature-auth-redesign

# Branch is a full PostgreSQL instance with production data
psql $(neonctl connection-string feature-auth-redesign)

Use cases for branching:

  • Preview environments: Each pull request gets its own database branch with production data. Test migrations against real data, not empty schemas.
  • Safe migrations: Branch from production, run your migration on the branch, verify it works, then apply to production.
  • Analytics isolation: Create a branch for heavy analytical queries without impacting production OLTP performance.
  • Development: Every developer gets a personal database branch. No more shared dev databases with conflicting schema changes.

Autoscaling

Neon autoscales compute from 0.25 vCPU to 8 vCPU based on load. The scale-to-zero feature is genuine — if no queries arrive for 5 minutes (configurable), the compute shuts down entirely. You pay only for storage during idle periods.

For production workloads, Neon offers always-on compute endpoints that maintain a minimum allocation. The autoscaler responds to load within 2-3 seconds, handling traffic spikes without manual intervention.

Pricing (March 2026)

Plan       | Compute               | Storage | Branching   | Price
Free       | 0.25 vCPU, 100 hrs/mo | 512 MB  | 10 branches | $0
Launch     | Up to 4 vCPU          | 10 GB   | Unlimited   | $19/mo
Scale      | Up to 8 vCPU          | 50 GB   | Unlimited   | $69/mo
Enterprise | Custom                | Custom  | Unlimited   | Custom

Limitations

  • Cold start latency. Scale-to-zero endpoints take 300-700ms to resume. For always-hot applications, keep a minimum compute allocation.
  • Extension support. Most popular extensions work (PostGIS, pgvector, pg_stat_statements), but some that require file system access are not supported.
  • Region availability. Available in 12 AWS regions and 5 Azure regions as of March 2026. No GCP support yet.
  • Connection limits. The built-in connection pooler caps out at 10,000 concurrent connections on the Scale plan; workloads beyond that need additional pooling in front of the database.
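The cold-start latency above can be absorbed at the application layer. Below is a minimal sketch, assuming a generic `connect()` callback (hypothetical, not part of any Neon SDK), that retries with exponential backoff so the first query after an idle period tolerates the 300-700ms resume window:

```typescript
// Hypothetical helper: retry a connection attempt with exponential
// backoff to absorb a scale-to-zero endpoint's resume latency.
async function connectWithRetry<T>(
  connect: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 200,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await connect();
    } catch (err) {
      lastError = err;
      // Wait 200ms, 400ms, 800ms, ... before the next attempt.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

For always-hot production traffic, a minimum compute allocation (as described above) is the simpler fix; the retry wrapper matters mostly for dev and staging endpoints that do scale to zero.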

Turso: Edge-Native SQLite

Turso is built on libSQL, an open-source fork of SQLite that adds server capabilities: replication, access control, and multi-tenancy. Turso’s unique value proposition is edge-native deployment — your database runs in 30+ locations worldwide, with reads served from the nearest edge replica.

Architecture

Turso’s architecture is fundamentally different from Neon and PlanetScale:

  1. Primary instance: A single-writer libSQL database in your chosen primary region
  2. Edge replicas: Read-only replicas deployed to edge locations worldwide, synced via a custom replication protocol
  3. Embedded replicas: libSQL can embed a read replica directly in your application process, enabling zero-latency reads

The embedded replica model is Turso’s most innovative feature. Your application embeds a SQLite-compatible database file that syncs with the primary. Reads hit the local file — no network round-trip. Writes are forwarded to the primary and replicated back.

import { createClient } from '@libsql/client';

const db = createClient({
  url: 'file:local-replica.db',
  syncUrl: 'libsql://my-db-username.turso.io',
  authToken: process.env.TURSO_AUTH_TOKEN,
  syncInterval: 60, // Sync every 60 seconds
});

// This read hits the local file — sub-millisecond
const users = await db.execute('SELECT * FROM users WHERE active = 1');

// This write goes to the primary, then replicates back
await db.execute({
  sql: 'INSERT INTO events (user_id, type) VALUES (?, ?)',
  args: [userId, 'login'],
});

Multi-Tenancy

Turso supports creating thousands of databases per account, each a separate SQLite file. This enables per-tenant database isolation — each customer gets their own database, eliminating noisy-neighbor problems and simplifying data residency compliance.

# Create a database per tenant
turso db create tenant-acme --group edge-us
turso db create tenant-globex --group edge-eu

# Each tenant gets an isolated database with its own URL
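In application code, routing a request to the right tenant database is just URL construction plus a client cache. A minimal sketch, following the `tenant-<name>` convention from the commands above — the `buildTenantUrl` helper, org slug, and hostname pattern are illustrative assumptions, not a guaranteed Turso URL format:

```typescript
// Hypothetical helper: map a tenant name to its database URL,
// assuming the tenant-<name> convention used when creating databases
// and a <db-name>-<org>.turso.io hostname pattern.
function buildTenantUrl(tenant: string, orgSlug: string): string {
  return `libsql://tenant-${tenant}-${orgSlug}.turso.io`;
}

// Cache one client per tenant so connections are reused across requests.
const clients = new Map<string, { url: string }>();

function clientFor(tenant: string, orgSlug: string): { url: string } {
  let client = clients.get(tenant);
  if (!client) {
    // In a real app this would be createClient({ url, authToken })
    // from @libsql/client; a plain object stands in here.
    client = { url: buildTenantUrl(tenant, orgSlug) };
    clients.set(tenant, client);
  }
  return client;
}
```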

Pricing (March 2026)

Plan       | Databases | Storage | Rows read/mo | Rows written/mo | Price
Starter    | 500       | 9 GB    | 25 billion   | 50 million      | $0
Scaler     | 10,000    | 24 GB   | 100 billion  | 100 million     | $29/mo
Enterprise | Unlimited | Custom  | Custom       | Custom          | Custom

Best For

  • Edge applications: Apps deployed on Cloudflare Workers, Vercel Edge Functions, or Deno Deploy where every millisecond of latency matters
  • Per-tenant databases: SaaS applications that need data isolation without the cost of provisioning a full PostgreSQL instance per customer
  • Mobile/offline-first apps: Embedded replicas enable offline reads with background sync
  • Read-heavy workloads: The embedded replica model delivers sub-millisecond read latency

Limitations

  • SQLite compatibility only. No PostgreSQL or MySQL features. If you need JSONB operators, window functions beyond SQLite’s support, or stored procedures, Turso is not the right choice.
  • Single-writer. All writes go through the primary. Write throughput is limited to a single libSQL instance (though this is typically 10,000+ writes/sec).
  • No JOINs across databases. Multi-tenant isolation means you cannot query across tenants.
  • Schema changes. No online DDL — ALTER TABLE locks the database briefly. For large tables, this requires planning.

PlanetScale: MySQL at YouTube Scale

PlanetScale brings Vitess — the sharding middleware that powers YouTube, Slack, and GitHub — to developers as a managed service. It provides MySQL-compatible serverless databases with horizontal sharding, schema-safe deployments, and built-in connection pooling.

Architecture

PlanetScale’s architecture is built on three Vitess components:

  1. VTGate: A MySQL-compatible proxy that routes queries to the correct shard
  2. VTTablet: Manages individual MySQL instances (shards)
  3. VTOrc: Automated failover and topology management

Sharding is transparent to the application. You write standard MySQL queries, and Vitess routes them to the correct shard based on your sharding key configuration.
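Conceptually, that routing step amounts to hashing the sharding key and picking a shard. The sketch below is illustrative only — Vitess actually maps keys to keyspace IDs through configurable "vindexes", not a plain hash-modulo — but it shows the shape of the decision VTGate makes per query:

```typescript
// Illustrative only: pick a shard for a row by hashing its sharding key.
// Vitess uses configurable vindexes for this; here a simple FNV-1a hash
// modulo the shard count stands in.
function fnv1a(key: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

function shardFor(shardingKey: string, shardCount: number): number {
  // Same key always lands on the same shard; different keys spread out.
  return fnv1a(shardingKey) % shardCount;
}
```

The important property is determinism: every query carrying the same sharding key is routed to the same shard, which is why the choice of sharding key dominates how evenly load spreads.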

Safe Schema Changes

PlanetScale’s deploy request workflow prevents schema changes from breaking production:

# Create a branch (similar to Neon, but for schema only).
# "my-db" stands in for your database name.
pscale branch create my-db feature-add-orders

# Apply schema changes to the branch
pscale shell my-db feature-add-orders
mysql> ALTER TABLE orders ADD COLUMN status ENUM('pending', 'shipped', 'delivered');

# Create a deploy request (like a PR for your schema)
pscale deploy-request create my-db feature-add-orders

# Review the schema diff (deploy request number 1)
pscale deploy-request diff my-db 1

# Deploy to production (non-blocking, online DDL)
pscale deploy-request deploy my-db 1

Schema changes are applied using online DDL (gh-ost under the hood), meaning no table locks during ALTER TABLE operations. This is critical for large tables where a traditional ALTER TABLE could lock the table for hours.

Pricing (March 2026)

Plan       | Storage | Row reads/mo | Row writes/mo | Connections | Price
Hobby      | 5 GB    | 1 billion    | 10 million    | 1,000       | $0
Scaler     | 10 GB   | 100 billion  | 50 million    | 10,000      | $29/mo
Scaler Pro | 128 GB  | Unlimited    | 200 million   | 20,000      | $99/mo
Enterprise | Custom  | Custom       | Custom        | Custom      | Custom

Best For

  • MySQL-native teams: If your team’s expertise is MySQL, PlanetScale provides the best serverless MySQL experience without learning a new database.
  • Horizontal scaling needs: Applications that need to scale beyond a single server — PlanetScale handles sharding transparently.
  • Large-table DDL: The online DDL system is the most battle-tested in the industry (Vitess runs YouTube’s databases).
  • Connection pooling at scale: Vitess’s connection pooling handles tens of thousands of connections efficiently.

Limitations

  • Limited foreign key support. PlanetScale historically required application-level enforcement of foreign keys due to Vitess sharding constraints. Limited database-enforced FK support arrived in 2025, but it remains a constraint for complex relational models.
  • MySQL only. No PostgreSQL compatibility. If you need PostGIS, pgvector, or PostgreSQL-specific features, PlanetScale is not an option.
  • No self-hosting option. Although Vitess itself is open source, the PlanetScale platform is fully managed only — unlike Neon (which has a local emulator) and Turso (whose libSQL server can be self-hosted).
  • Regions: Available in AWS us-east-1, us-west-2, eu-west-1, ap-southeast-1, and ap-northeast-1.

Feature Comparison Table

Feature                | Neon                              | Turso                          | PlanetScale
SQL dialect            | PostgreSQL                        | SQLite (libSQL)                | MySQL
Scale to zero          | Yes (300-700ms resume)            | Yes (instant)                  | No (always-on)
Branching              | Full data branches                | Schema + data                  | Schema-only deploy requests
Edge replicas          | No (single region + read replicas)| Yes (30+ locations)            | No (single region)
Embedded replicas      | No                                | Yes (zero-latency reads)       | No
Horizontal sharding    | No                                | No                             | Yes (Vitess)
Online DDL             | Standard PG (with locks)          | Brief locks                    | gh-ost (zero locks)
Extensions/plugins     | PostgreSQL extensions             | libSQL extensions              | MySQL plugins (limited)
Vector search          | pgvector                          | Via extension                  | No native support
Connection pooling     | Built-in (pgbouncer-compatible)   | N/A (HTTP + embedded)          | Built-in (Vitess)
Multi-tenant isolation | Separate databases                | Per-tenant databases           | Separate databases
Open source            | Neon (Apache 2.0)                 | libSQL (MIT)                   | Vitess (Apache 2.0)
Free tier              | 512 MB, 100 compute-hrs           | 9 GB, 500 databases            | 5 GB, 1B reads
Latency (p50 read)     | 5-15ms (same region)              | <1ms (embedded), 5-15ms (edge) | 3-10ms (same region)

Decision Framework: When to Use Each

Choose Neon when:

  • You need PostgreSQL compatibility (extensions, JSONB, PostGIS, pgvector)
  • Database branching for preview environments is important to your workflow
  • You want scale-to-zero for development and staging environments
  • Your application is deployed in a single region or uses regional read replicas
  • You are migrating from a traditional PostgreSQL deployment

Choose Turso when:

  • You deploy on edge runtimes (Cloudflare Workers, Deno Deploy, Vercel Edge)
  • Sub-millisecond read latency is a requirement (embedded replicas)
  • You need per-tenant database isolation for a multi-tenant SaaS
  • Your workload is read-heavy (95%+ reads)
  • You are building mobile or offline-first applications
  • SQLite compatibility is sufficient for your data model

Choose PlanetScale when:

  • Your team is MySQL-native and you do not want to switch SQL dialects
  • You need horizontal sharding for tables with billions of rows
  • Zero-downtime schema migrations are critical (online DDL)
  • You need to handle tens of thousands of concurrent connections
  • You are migrating from a self-managed MySQL or Aurora deployment

Real-World Architecture Examples

SaaS with Neon

A B2B SaaS application with 200 tenants, each generating 10-50 GB of data. Use Neon with a single database, row-level security for tenant isolation, and branching for staging environments. PostgreSQL’s JSONB and pgvector support enable both structured data and AI features without additional databases.

Edge Commerce with Turso

A global e-commerce product catalog serving 50 countries. Use Turso with embedded replicas in each edge function. Product reads (99% of traffic) hit the local SQLite replica with sub-millisecond latency. Cart updates and orders write to the primary in us-east-1 with 50-150ms latency.

High-Write Analytics with PlanetScale

An analytics platform ingesting 500,000 events/second from mobile SDKs. Use PlanetScale with horizontal sharding by customer_id. Vitess distributes writes across 16 shards, each handling ~31,000 writes/sec. The online DDL system allows adding new event columns without downtime.

FAQ

Can I migrate between these platforms?

Yes, but it is not trivial. Neon-to-PlanetScale or vice versa requires a SQL dialect migration (PostgreSQL to MySQL). Neon-to-Turso requires migrating from PostgreSQL to SQLite, which may lose features (stored procedures, complex types). Plan for 2-4 weeks of migration effort for a production application.

Which is cheapest for a small project?

All three have generous free tiers. Turso’s free tier is the most generous (9 GB storage, 500 databases). Neon’s free tier is most constrained by compute hours (100/month). For hobby projects, all three are effectively free.

Do any of these replace Redis for caching?

Turso’s embedded replicas can replace Redis for read caching scenarios where data is stored in the database anyway. Instead of cache-aside pattern (read DB, write Redis, read Redis), you read the embedded SQLite replica directly. This eliminates cache invalidation complexity at the cost of slightly higher latency than Redis for hot keys.
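The two read paths can be sketched with in-memory stand-ins — a `Map` for Redis and another for the embedded replica; both are hypothetical mocks, not real clients:

```typescript
// Mock stores standing in for Redis and an embedded libSQL replica.
const redis = new Map<string, string>();
const replica = new Map<string, string>([["user:1", "Ada"]]);

// Cache-aside: check Redis first, fall back to the database, backfill
// the cache. The backfilled entry must be invalidated on every write.
function cacheAsideRead(key: string): string | undefined {
  const cached = redis.get(key);
  if (cached !== undefined) return cached;
  const fromDb = replica.get(key); // stand-in for a DB query
  if (fromDb !== undefined) redis.set(key, fromDb);
  return fromDb;
}

// Embedded-replica read: one lookup against the local replica.
// Freshness is handled by background sync, so there is nothing to invalidate.
function replicaRead(key: string): string | undefined {
  return replica.get(key);
}
```

The sketch makes the trade-off concrete: cache-aside introduces a second store whose contents can go stale independently, while the embedded replica keeps one store whose staleness is bounded by the sync interval.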

How do these compare to Supabase?

Supabase is a broader platform (auth, storage, realtime, edge functions) built on PostgreSQL. Neon is a focused serverless PostgreSQL offering. If you need the full Supabase platform, use Supabase. If you need the best serverless PostgreSQL with branching and autoscaling, use Neon. Supabase actually uses Neon’s technology for some of its managed offerings.

Which handles the most concurrent connections?

PlanetScale, due to Vitess’s connection multiplexing. It can handle 100,000+ application connections multiplexed to a much smaller number of database connections. Neon supports up to 10,000 on the Scale plan. Turso uses HTTP connections (not persistent TCP), so the concept is different — it handles millions of requests per second at the edge.