All metrics side by side
▼ Lower is better for all latency values — measured in milliseconds.
Latency Comparison
Sequential request latency — p50
▼ Lower is better. Each bar shows the median response time in milliseconds for that operation.
Burst Capacity
Latency under simultaneous load
▼ Lower is better. Each level fires N requests simultaneously. Shows how infrastructure handles sudden spikes.
Throughput
Requests per second at each burst level
▲ Higher is better. Successful requests completed per second at each concurrency level.
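To make the two headline metrics concrete, here is a minimal sketch of how a p50 latency and a requests-per-second figure can be computed from recorded timings. This is an illustration only, not the vibe-bench implementation, and the sample numbers are made up.

```javascript
// Minimal sketch of the two headline metrics; not the vibe-bench code.

// p50 (median): sort the recorded latencies and take the middle value.
function p50(latenciesMs) {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// Throughput: successful requests divided by wall-clock seconds.
function throughput(successCount, elapsedMs) {
  return successCount / (elapsedMs / 1000);
}

// Made-up sample: five sequential timings, then a burst where 97 of 100
// requests succeeded in 2.5 seconds.
console.log(p50([12, 15, 11, 40, 13])); // → 13
console.log(throughput(97, 2500));      // → 38.8
```

Note that the median deliberately ignores the one slow outlier (40 ms), which is why p50 is the usual headline number for sequential latency.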
Why some platforms are faster: OpenKBS and v0 both use Neon Postgres as their database layer. Neon is hosted on AWS infrastructure by default. OpenKBS runs natively on AWS Lambda in the same region, which means the network path between compute and database stays within the AWS backbone: minimal hops, sub-millisecond internal latency. Supabase-based platforms like Lovable and Bolt route through Supabase Edge Functions on Deno Deploy, adding an extra network hop between the edge runtime and the database. These architectural differences, not code quality, explain most of the latency gap.
About burst concurrency limits: Every platform handles concurrency differently. v0 runs on Vercel serverless functions, which support up to 1,000 concurrent executions per region; our 10x burst test was blocked by Vercel's load-test detection, not by an infrastructure ceiling. Lovable and Bolt deploy on Supabase Edge Functions, which scale automatically but showed errors at higher burst levels. Emergent handled moderate concurrency well but encountered errors at 500 simultaneous requests. OpenKBS, a newer platform still growing its community, runs on AWS Lambda, where the default concurrency quota is low. To reach the 500-concurrent burst test, we requested a free quota increase via openkbs cloud function concurrency. This is a standard AWS process available to any Lambda user at no extra cost.
How this was tested: All platforms implemented the same Notes API specification: CRUD, filtered queries with JOINs, aggregation with GROUP BY, and burst capacity. Tests ran from a dedicated Hetzner server in Europe using vibe-bench on Node.js 18+, with 20 iterations per sequential test and a burst ramp at 10/50/100/500 simultaneous requests. Full methodology: methodology.