Cold starts, concurrency limits, database throughput, CDN performance. Six platforms. Real infrastructure. No marketing.
Last updated April 28, 2026 · 200 test runs per platform
Most platforms fail between 100 and 200 concurrent users. We ramped traffic linearly from 10 to 1,000 users over 30 minutes. Four of six platforms hit 30%+ error rates before reaching 200 concurrent users. One platform handled all 1,000 with a 9% error rate and sub-1.5s p95 latency.
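The ramp is a straight linear interpolation from 10 to 1,000 virtual users over 30 minutes. A minimal sketch of that schedule (the function name and sample minutes are illustrative, not part of our harness):

```javascript
// Target concurrent users at minute t of a linear ramp
// from `start` users to `end` users over `durationMin` minutes.
function targetUsers(t, start = 10, end = 1000, durationMin = 30) {
  if (t <= 0) return start;
  if (t >= durationMin) return end;
  return Math.round(start + (end - start) * (t / durationMin));
}

// The 100–200 user band where most platforms failed is reached
// roughly between minutes 3 and 6 of the ramp:
console.log(targetUsers(3)); // 109
console.log(targetUsers(6)); // 208
```

In other words, most platforms survived only the first fifth of the test.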
Cold starts range from 290ms to 1.1 seconds at p50. The gap widens at the tail. At p99, the slowest platform takes 2.8 seconds for the first request — long enough for a user to close the tab. The fastest recovers in 720ms even at p99.
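The p50/p99 figures above are rank-based percentiles over each platform's cold-start samples. k6 reports these directly; this sketch just shows the idea, using the nearest-rank method and made-up sample values (not measured data):

```javascript
// Nearest-rank percentile over a list of latency samples (ms).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Illustrative cold-start samples in milliseconds — not our data.
const coldStarts = [290, 310, 350, 420, 500, 640, 720, 900, 1100, 2800];
console.log(percentile(coldStarts, 50)); // 500
console.log(percentile(coldStarts, 99)); // 2800
```

This is why the tail matters: a single slow outlier barely moves the p50 but dominates the p99 a user actually feels.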
Two platforms ship apps without a real database. SQLite in a single file. No connection pooling, no concurrent writes, no replication. Fine for a demo. Fragile for anything with users.
Time to first byte for the first request after a period of inactivity. The first impression your app makes.
What happens when 10, 100, or 1,000 users hit your app at the same time. Where each platform breaks.
Query latency under load, connection pooling, max concurrent connections. Real databases vs. SQLite.
We deploy the same application — a CRUD task manager with authentication, real-time collaboration, and Postgres-backed storage — on every platform. Same features, same complexity. Then we run standardized load tests from us-east-1 using k6, measuring cold starts, concurrent request capacity, database query latency, and error rates. 200 test runs per metric, per platform. Full methodology →
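A load test of this shape looks roughly like the following k6 script. The endpoint URL and threshold values here are placeholders to show the structure, not our actual configuration:

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  // Linear ramp: jump to 10 VUs, then climb to 1,000 over 30 minutes.
  stages: [
    { duration: '1s', target: 10 },
    { duration: '30m', target: 1000 },
  ],
  thresholds: {
    http_req_failed: ['rate<0.30'],    // flag the 30%+ error-rate failures
    http_req_duration: ['p(95)<1500'], // the sub-1.5s p95 bar
  },
};

export default function () {
  // Placeholder endpoint for the deployed task-manager app.
  const res = http.get('https://example.com/api/tasks');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```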