Neon vs Turso vs PlanetScale: Choosing a Serverless Database in 2026
Engineering Team
The Short Answer
If you need PostgreSQL compatibility with modern developer experience, choose Neon. If you need sub-10ms reads at the edge with SQLite compatibility, choose Turso. If you are running a MySQL workload and need horizontal sharding, choose PlanetScale. All three are production-ready in 2026, and the choice depends primarily on your SQL dialect preference and deployment topology.
The Serverless Database Landscape in 2026
The serverless database market has matured dramatically since 2023. What began as experimental managed offerings has become the default deployment model for startups and an increasingly common choice for enterprises. The global serverless database market reached $14.2 billion in 2025, growing at a 28% CAGR according to Gartner.
Three platforms have emerged as clear leaders, each built on fundamentally different foundations:
- Neon — Serverless PostgreSQL with storage-compute separation, branching, and autoscaling to zero
- Turso — libSQL (SQLite fork) with edge replication, embedded replicas, and per-request routing
- PlanetScale — MySQL-compatible, built on Vitess (the YouTube/Google scaling technology), with schema-safe deployments
These are not interchangeable. Each excels in different architectural contexts, and choosing the wrong one creates friction that compounds over time.
Neon: Serverless PostgreSQL Done Right
Neon is a serverless PostgreSQL platform that separates storage from compute, enabling features impossible in traditional PostgreSQL deployments: instant branching, scale-to-zero, and point-in-time restore at the storage layer.
Architecture
Neon’s architecture splits PostgreSQL into three layers:
- Compute: Standard PostgreSQL instances (currently PG 16 and 17, with PG 18 support announced for Q1 2026) that handle query execution
- Pageserver: A custom storage backend that replaces PostgreSQL’s local file system, storing pages in a tiered format optimized for cloud object storage
- Safekeepers: WAL durability nodes that ensure no committed transaction is lost
This separation means compute can scale independently of storage. A Neon database can scale to zero when idle (paying only for storage) and spin up a compute endpoint in ~500ms when a connection arrives.
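Application code talking to a scale-to-zero endpoint should tolerate that resume delay rather than fail the first request. A minimal sketch of a retry wrapper, assuming an injected `runQuery` function; the helper name and timings are illustrative, not part of any Neon SDK:

```typescript
// Retry a query a few times with exponential backoff, to absorb
// the ~500ms cold start when a scale-to-zero endpoint resumes.
async function withColdStartRetry<T>(
  runQuery: () => Promise<T>,
  retries = 3,
  delayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await runQuery();
    } catch (err) {
      lastError = err;
      // Wait before retrying; the delay doubles each attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Wrapping only the first query of a request is usually enough: once the compute endpoint is warm, subsequent queries connect normally.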
Branching: The Killer Feature
Neon’s most distinctive capability is database branching, modeled after Git. Creating a branch is a copy-on-write operation that completes in milliseconds regardless of database size.
```bash
# Create a branch from production for testing
neonctl branches create --name feature-auth-redesign --parent main

# Get the connection string for the branch
neonctl connection-string feature-auth-redesign

# Branch is a full PostgreSQL instance with production data
psql $(neonctl connection-string feature-auth-redesign)
```
Use cases for branching:
- Preview environments: Each pull request gets its own database branch with production data. Test migrations against real data, not empty schemas.
- Safe migrations: Branch from production, run your migration on the branch, verify it works, then apply to production.
- Analytics isolation: Create a branch for heavy analytical queries without impacting production OLTP performance.
- Development: Every developer gets a personal database branch. No more shared dev databases with conflicting schema changes.
Autoscaling
Neon autoscales compute from 0.25 vCPU to 8 vCPU based on load. The scale-to-zero feature is genuine — if no queries arrive for 5 minutes (configurable), the compute shuts down entirely. You pay only for storage during idle periods.
For production workloads, Neon offers always-on compute endpoints that maintain a minimum allocation. The autoscaler responds to load within 2-3 seconds, handling traffic spikes without manual intervention.
Pricing (March 2026)
| Plan | Compute | Storage | Branching | Price |
|---|---|---|---|---|
| Free | 0.25 vCPU, 100 hrs/mo | 512 MB | 10 branches | $0 |
| Launch | Up to 4 vCPU | 10 GB | Unlimited | $19/mo |
| Scale | Up to 8 vCPU | 50 GB | Unlimited | $69/mo |
| Enterprise | Custom | Custom | Unlimited | Custom |
Limitations
- Cold start latency. Scale-to-zero endpoints take 300-700ms to resume. For always-hot applications, keep a minimum compute allocation.
- Extension support. Most popular extensions work (PostGIS, pgvector, pg_stat_statements), but some that require file system access are not supported.
- Region availability. Available in 12 AWS regions and 5 Azure regions as of March 2026. No GCP support yet.
- Connection limit. The built-in connection pooler handles up to 10,000 concurrent connections on the Scale plan.
Turso: Edge-Native SQLite
Turso is built on libSQL, an open-source fork of SQLite that adds server capabilities: replication, access control, and multi-tenancy. Turso’s unique value proposition is edge-native deployment — your database runs in 30+ locations worldwide, with reads served from the nearest edge replica.
Architecture
Turso’s architecture is fundamentally different from Neon and PlanetScale:
- Primary instance: A single-writer libSQL database in your chosen primary region
- Edge replicas: Read-only replicas deployed to edge locations worldwide, synced via a custom replication protocol
- Embedded replicas: libSQL can embed a read replica directly in your application process, enabling zero-latency reads
The embedded replica model is Turso’s most innovative feature. Your application embeds a SQLite-compatible database file that syncs with the primary. Reads hit the local file — no network round-trip. Writes are forwarded to the primary and replicated back.
```typescript
import { createClient } from '@libsql/client';

const db = createClient({
  url: 'file:local-replica.db',
  syncUrl: 'libsql://my-db-username.turso.io',
  authToken: process.env.TURSO_AUTH_TOKEN,
  syncInterval: 60, // Sync every 60 seconds
});

// This read hits the local file — sub-millisecond
const users = await db.execute('SELECT * FROM users WHERE active = 1');

// This write goes to the primary, then replicates back
const userId = 42; // e.g., taken from the request context
await db.execute({
  sql: 'INSERT INTO events (user_id, type) VALUES (?, ?)',
  args: [userId, 'login'],
});
```
Multi-Tenancy
Turso supports creating thousands of databases per account, each a separate SQLite file. This enables per-tenant database isolation — each customer gets their own database, eliminating noisy-neighbor problems and simplifying data residency compliance.
```bash
# Create a database per tenant
turso db create tenant-acme --group edge-us
turso db create tenant-globex --group edge-eu

# Each tenant gets an isolated database with its own URL
```
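With per-tenant databases, routing a request to the right database can be a pure function of the tenant slug. A small sketch, where the `tenant-<slug>` naming scheme mirrors the CLI example above and the org name is a placeholder:

```typescript
// Map a tenant slug to its Turso database URL.
// The "tenant-<slug>" naming and the org name "my-org" are
// illustrative, mirroring the `turso db create tenant-acme` example.
const TURSO_ORG = 'my-org';

function tenantDbUrl(slug: string): string {
  // Allow only lowercase letters, digits, and hyphens so URLs
  // are never constructed from untrusted input.
  if (!/^[a-z0-9-]+$/.test(slug)) {
    throw new Error(`invalid tenant slug: ${slug}`);
  }
  return `libsql://tenant-${slug}-${TURSO_ORG}.turso.io`;
}
```

Keeping the mapping deterministic means no lookup table to maintain: the tenant identifier in the request is all you need to open the right database.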
Pricing (March 2026)
| Plan | Databases | Storage | Rows read/mo | Rows written/mo | Price |
|---|---|---|---|---|---|
| Starter | 500 | 9 GB | 25 billion | 50 million | $0 |
| Scaler | 10,000 | 24 GB | 100 billion | 100 million | $29/mo |
| Enterprise | Unlimited | Custom | Custom | Custom | Custom |
Best For
- Edge applications: Apps deployed on Cloudflare Workers, Vercel Edge Functions, or Deno Deploy where every millisecond of latency matters
- Per-tenant databases: SaaS applications that need data isolation without the cost of provisioning a full PostgreSQL instance per customer
- Mobile/offline-first apps: Embedded replicas enable offline reads with background sync
- Read-heavy workloads: The embedded replica model delivers sub-millisecond read latency
Limitations
- SQLite compatibility only. No PostgreSQL or MySQL features. If you need JSONB operators, window functions beyond SQLite’s support, or stored procedures, Turso is not the right choice.
- Single-writer. All writes go through the primary. Write throughput is limited to a single libSQL instance (though this is typically 10,000+ writes/sec).
- No JOINs across databases. Multi-tenant isolation means you cannot query across tenants.
- Schema changes. No online DDL — ALTER TABLE locks the database briefly. For large tables, this requires planning.
PlanetScale: MySQL at YouTube Scale
PlanetScale brings Vitess — the sharding middleware that powers YouTube, Slack, and GitHub — to developers as a managed service. It provides MySQL-compatible serverless databases with horizontal sharding, schema-safe deployments, and built-in connection pooling.
Architecture
PlanetScale’s architecture is built on three Vitess components:
- VTGate: A MySQL-compatible proxy that routes queries to the correct shard
- VTTablet: Manages individual MySQL instances (shards)
- VTOrc: Automated failover and topology management
Sharding is transparent to the application. You write standard MySQL queries, and Vitess routes them to the correct shard based on your sharding key configuration.
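Conceptually, key-based routing hashes the sharding key and maps it onto one of N shards. The sketch below is a simplification: Vitess actually uses vindexes that map keys to keyspace-ID ranges, and the FNV-1a hash and shard count here are illustrative, not Vitess internals:

```typescript
// Simplified picture of key-based shard routing: hash the sharding
// key and map it onto one of N shards. Real Vitess uses vindexes
// mapping keys to keyspace-ID ranges, but the idea is the same.
function fnv1a(key: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    hash ^= key.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

function shardFor(customerId: string, shardCount: number): number {
  return fnv1a(customerId) % shardCount;
}

// All rows for one customer hash to the same shard, so queries
// filtered by customer_id touch a single shard.
```

The important property is determinism: the same key always lands on the same shard, which is what lets single-key queries avoid a scatter-gather across all shards.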
Safe Schema Changes
PlanetScale’s deploy request workflow prevents schema changes from breaking production:
```bash
# Create a branch (similar to Neon, but for schema only);
# "mydb" is a placeholder database name
pscale branch create mydb feature-add-orders

# Apply schema changes to the branch
pscale shell mydb feature-add-orders
mysql> ALTER TABLE orders ADD COLUMN status ENUM('pending', 'shipped', 'delivered');

# Create a deploy request (like a PR for your schema)
pscale deploy-request create mydb feature-add-orders

# Review the schema diff for deploy request #1
pscale deploy-request diff mydb 1

# Deploy to production (non-blocking, online DDL)
pscale deploy-request deploy mydb 1
Schema changes are applied using online DDL (gh-ost under the hood), meaning no table locks during ALTER TABLE operations. This is critical for large tables where a traditional ALTER TABLE could lock the table for hours.
Pricing (March 2026)
| Plan | Storage | Row reads/mo | Row writes/mo | Connections | Price |
|---|---|---|---|---|---|
| Hobby | 5 GB | 1 billion | 10 million | 1,000 | $0 |
| Scaler | 10 GB | 100 billion | 50 million | 10,000 | $29/mo |
| Scaler Pro | 128 GB | Unlimited | 200 million | 20,000 | $99/mo |
| Enterprise | Custom | Custom | Custom | Custom | Custom |
Best For
- MySQL-native teams: If your team’s expertise is MySQL, PlanetScale provides the best serverless MySQL experience without learning a new database.
- Horizontal scaling needs: Applications that need to scale beyond a single server — PlanetScale handles sharding transparently.
- Large-table DDL: The online DDL system is the most battle-tested in the industry (Vitess runs YouTube’s databases).
- Connection pooling at scale: Vitess’s connection pooling handles tens of thousands of connections efficiently.
Limitations
- No database-enforced foreign keys. PlanetScale historically required application-level FK enforcement because of Vitess sharding constraints. Limited foreign key support arrived in 2025, but it remains a constraint for complex relational models.
- MySQL only. No PostgreSQL compatibility. If you need PostGIS, pgvector, or PostgreSQL-specific features, PlanetScale is not an option.
- No self-hosting option. Unlike Neon (which has a local emulator) and Turso (which is open-source), PlanetScale is fully managed only.
- Regions: Available in AWS us-east-1, us-west-2, eu-west-1, ap-southeast-1, and ap-northeast-1.
Feature Comparison Table
| Feature | Neon | Turso | PlanetScale |
|---|---|---|---|
| SQL dialect | PostgreSQL | SQLite (libSQL) | MySQL |
| Scale to zero | Yes (300-700ms resume) | Yes (instant) | No (always-on) |
| Branching | Full data branches | Schema + data | Schema-only deploy requests |
| Edge replicas | No (single region + read replicas) | Yes (30+ locations) | No (single region) |
| Embedded replicas | No | Yes (zero-latency reads) | No |
| Horizontal sharding | No | No | Yes (Vitess) |
| Online DDL | Standard PG (with locks) | Brief locks | gh-ost (zero locks) |
| Extensions/plugins | PostgreSQL extensions | libSQL extensions | MySQL plugins (limited) |
| Vector search | pgvector | Via extension | No native support |
| Connection pooling | Built-in (pgbouncer-compatible) | N/A (HTTP + embedded) | Built-in (Vitess) |
| Multi-tenant isolation | Separate databases | Per-tenant databases | Separate databases |
| Open source | Neon (Apache 2.0) | libSQL (MIT) | Vitess (Apache 2.0) |
| Free tier | 512 MB, 100 compute-hrs | 9 GB, 500 databases | 5 GB, 1B reads |
| Latency (p50 read) | 5-15ms (same region) | <1ms (embedded), 5-15ms (edge) | 3-10ms (same region) |
Decision Framework: When to Use Each
Choose Neon when:
- You need PostgreSQL compatibility (extensions, JSONB, PostGIS, pgvector)
- Database branching for preview environments is important to your workflow
- You want scale-to-zero for development and staging environments
- Your application is deployed in a single region or uses regional read replicas
- You are migrating from a traditional PostgreSQL deployment
Choose Turso when:
- You deploy on edge runtimes (Cloudflare Workers, Deno Deploy, Vercel Edge)
- Sub-millisecond read latency is a requirement (embedded replicas)
- You need per-tenant database isolation for a multi-tenant SaaS
- Your workload is read-heavy (95%+ reads)
- You are building mobile or offline-first applications
- SQLite compatibility is sufficient for your data model
Choose PlanetScale when:
- Your team is MySQL-native and you do not want to switch SQL dialects
- You need horizontal sharding for tables with billions of rows
- Zero-downtime schema migrations are critical (online DDL)
- You need to handle tens of thousands of concurrent connections
- You are migrating from a self-managed MySQL or Aurora deployment
Real-World Architecture Examples
SaaS with Neon
A B2B SaaS application with 200 tenants, each generating 10-50 GB of data. Use Neon with a single database, row-level security for tenant isolation, and branching for staging environments. PostgreSQL’s JSONB and pgvector support enable both structured data and AI features without additional databases.
Edge Commerce with Turso
A global e-commerce product catalog serving 50 countries. Use Turso with embedded replicas in each edge function. Product reads (99% of traffic) hit the local SQLite replica with sub-millisecond latency. Cart updates and orders write to the primary in us-east-1 with 50-150ms latency.
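The read/write split in this architecture can be sketched as a thin router that sends reads to a local executor and writes to the primary. The executor functions below are injected placeholders standing in for an embedded replica and the primary region, not libSQL APIs (the real client handles this routing for you):

```typescript
// Route reads to a local executor and writes to a remote one.
// `local` and `primary` are injected query functions, stand-ins
// for an embedded replica and the primary region, respectively.
type Exec = (sql: string) => Promise<unknown>;

function makeRouter(local: Exec, primary: Exec): Exec {
  return async (sql: string) => {
    // Crude classification: anything that is not a SELECT goes
    // to the primary. A real client tracks this per statement.
    const isRead = /^\s*select\b/i.test(sql);
    return isRead ? local(sql) : primary(sql);
  };
}
```

The design consequence is visible in the latency numbers above: the 99% read path never leaves the process, while the 1% write path pays the cross-region round-trip.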
High-Write Analytics with PlanetScale
An analytics platform ingesting 500,000 events/second from mobile SDKs. Use PlanetScale with horizontal sharding by customer_id. Vitess distributes writes across 16 shards, each handling ~31,000 writes/sec. The online DDL system allows adding new event columns without downtime.
FAQ
Can I migrate between these platforms?
Yes, but it is not trivial. Neon-to-PlanetScale or vice versa requires a SQL dialect migration (PostgreSQL to MySQL). Neon-to-Turso requires migrating from PostgreSQL to SQLite, which may lose features (stored procedures, complex types). Plan for 2-4 weeks of migration effort for a production application.
Which is cheapest for a small project?
All three have generous free tiers. Turso’s free tier is the most generous (9 GB storage, 500 databases). Neon’s free tier is the most constrained by compute hours (100/month). For hobby projects, all three are effectively free.
Do any of these replace Redis for caching?
Turso’s embedded replicas can replace Redis for read caching scenarios where data is stored in the database anyway. Instead of cache-aside pattern (read DB, write Redis, read Redis), you read the embedded SQLite replica directly. This eliminates cache invalidation complexity at the cost of slightly higher latency than Redis for hot keys.
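The difference between the two read paths can be sketched side by side. The in-memory `Map` stores below are stand-ins, not Redis or libSQL APIs; the point is which path needs invalidation logic:

```typescript
// Cache-aside: read the cache, fall back to the DB, then populate
// the cache. Every write path must also invalidate or update it.
async function cacheAsideGet(
  cache: Map<string, string>,
  db: Map<string, string>,
  key: string,
): Promise<string | undefined> {
  if (cache.has(key)) return cache.get(key);
  const value = db.get(key);
  if (value !== undefined) cache.set(key, value); // stale on DB writes until invalidated
  return value;
}

// Embedded-replica read: just query the local replica. There is no
// second store, so there is nothing to invalidate; the sync
// protocol keeps the replica fresh.
async function replicaGet(
  replica: Map<string, string>,
  key: string,
): Promise<string | undefined> {
  return replica.get(key);
}
```

The trade is explicit: cache-aside gives you the hottest possible keys at Redis speed but adds a consistency problem; the replica path is a single source of truth at slightly higher (still sub-millisecond) latency.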
How do these compare to Supabase?
Supabase is a broader platform (auth, storage, realtime, edge functions) built on PostgreSQL. Neon is a focused serverless PostgreSQL offering. If you need the full Supabase platform, use Supabase. If you need the best serverless PostgreSQL with branching and autoscaling, use Neon. Supabase actually uses Neon’s technology for some of its managed offerings.
Which handles the most concurrent connections?
PlanetScale, due to Vitess’s connection multiplexing. It can handle 100,000+ application connections multiplexed to a much smaller number of database connections. Neon supports up to 10,000 on the Scale plan. Turso uses HTTP connections (not persistent TCP), so the concept is different — it handles millions of requests per second at the edge.