Numbers are per-process throughput. SQLite is single-writer, so write throughput is capped at what one writer connection can sustain regardless of core count; reads scale across a connection pool.
The values in the tables below were measured on earlier reference hardware (a 2024 Apple M-series laptop, 16 GB RAM) and are pending re-measurement on the current reference machine (2025 Mac Studio M3 Ultra, 96 GB). Expect the M3 Ultra to land meaningfully higher — treat these as a lower bound until refreshed.
Re-run with:

```shell
cargo bench -p pylon-runtime --bench bench
cargo bench -p pylon-runtime --bench realtime_bench
```

Data plane (single-writer SQLite)

| Operation | Ops/sec | Per op |
| --- | --- | --- |
| insert (User, 3 fields) | 68,000 | 14.6µs |
| insert (Todo, 4 fields) | 77,000 | 13.0µs |
| update | 89,000 | 11.2µs |
| delete + reinsert | 40,000 | 24.7µs |
| get_by_id | 519,000 | 1.9µs |
| lookup by unique field | 484,000 | 2.1µs |
| query_filtered (equality) | 24,000 | 40.8µs |
| query_filtered ($like) | 10,000 | 96.9µs |
| list (1000 rows) | 2,700 | 363µs |
| query_graph (no filter, 1000 rows) | 1,500 | 660µs |
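A quick way to use these per-op latencies is to weight them by your workload mix. A minimal back-of-envelope sketch, assuming a hypothetical 70/20/10 read/update/insert mix (the mix is an assumption; the latencies come from the table above):

```python
# Back-of-envelope throughput estimate from the per-op latencies above.
# The 70/20/10 workload mix is a hypothetical assumption.
mix = {
    "get_by_id": (0.70, 1.9),   # (share of ops, µs per op from the table)
    "update":    (0.20, 11.2),
    "insert":    (0.10, 14.6),
}
weighted_us = sum(share * us for share, us in mix.values())
ops_per_sec = 1_000_000 / weighted_us
print(f"{weighted_us:.2f} µs/op -> ~{ops_per_sec:,.0f} ops/sec")
```

Note this treats all operations as serialized on one connection: reads can actually go wider via the read pool, while updates and inserts still share the single writer.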

Realtime path

| Operation | Ops/sec | Per op |
| --- | --- | --- |
| change_log.append | 5M | 198ns |
| change_log.pull(100) | 85,000 | 11.7µs |
| ws_hub.broadcast (enqueue) | 30,000 | 32.5µs |
The WS hub broadcast number is enqueue-side: it fans out to 16 shard worker threads that each push to connected clients. Real delivery rate depends on client count, message size, and TCP send buffers.
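The fan-out shape described above can be sketched as hash-sharded queues with one drain thread per shard. This is an illustrative toy, not Pylon's internals: `shard_of`, the queue layout, and the in-memory `delivered` list are all stand-ins for real socket writes.

```python
# Sketch of hash-sharded broadcast fan-out (illustrative only).
# Broadcast enqueues once per subscriber into that subscriber's shard
# queue; a worker thread per shard drains its queue and "delivers".
import queue
import threading

NUM_SHARDS = 16
shards = [queue.Queue() for _ in range(NUM_SHARDS)]
delivered = []
lock = threading.Lock()

def shard_of(conn_id: int) -> int:
    return conn_id % NUM_SHARDS

def broadcast(subscribers, msg):
    # Enqueue side: this is roughly what the 30k ops/sec figure measures.
    for conn_id in subscribers:
        shards[shard_of(conn_id)].put((conn_id, msg))

def worker(shard):
    while True:
        item = shard.get()
        if item is None:          # sentinel: shut the worker down
            break
        with lock:
            delivered.append(item)  # stand-in for a TCP socket write

threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
for t in threads:
    t.start()
broadcast(range(100), "tick")
for s in shards:
    s.put(None)
for t in threads:
    t.join()
print(len(delivered))  # → 100
```

The separation matters for capacity planning: the enqueue ceiling is fixed, but total delivered messages per second is broadcasts × subscribers, so the drain side dominates as client count grows.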

What these numbers mean for deploy sizing

Small (1 vCPU, 1 GB RAM, ~$5/mo VPS)

  • Up to ~20k writes/minute sustained, or bursts to 30k/minute
  • Up to ~10k concurrent WS connections (64 KB stack per reader thread)
  • Good for a few thousand active users at webapp levels of chattiness

Medium (2 vCPU, 4 GB RAM, ~$25/mo VPS)

  • Up to ~50k writes/minute sustained
  • Up to ~40k concurrent WS connections
  • Good for 50k active users; room for complex queries without eviction

Large (4+ vCPU, 8+ GB RAM)

  • Write ceiling is still single-writer SQLite (~70k inserts/sec peak). If you’re pinned on writes, move to Postgres (postgres-live feature) or shard the app across databases.
  • Reads scale with the read-connection pool. 4 pool connections × 500k reads/sec = 2M reads/sec ceiling.
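The ceilings above reduce self-hosted sizing to a two-line utilization check. A sketch with assumed workload numbers (the expected write and read rates are placeholders; the ceilings are the ones quoted on this page):

```python
# Rough self-hosted sizing check. Expected rates are placeholder
# assumptions; ceilings come from the benchmark tables above.
write_ceiling_per_sec = 68_000    # single-writer insert rate from the table
read_per_conn_per_sec = 500_000   # get_by_id-class reads, rounded as above
pool_connections = 4

expected_writes_per_min = 50_000  # placeholder workload
expected_reads_per_sec = 300_000  # placeholder workload

write_util = (expected_writes_per_min / 60) / write_ceiling_per_sec
read_ceiling = pool_connections * read_per_conn_per_sec
read_util = expected_reads_per_sec / read_ceiling
print(f"write utilization: {write_util:.1%}, read utilization: {read_util:.1%}")
```

If write utilization is anywhere near 1.0 at your projected peak, that is the signal to consider the Postgres path described below rather than a bigger VPS.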

When to switch backends

You’re on SQLite and need Postgres when

  • Sustained write rate > 50k/sec (you’re at SQLite’s single-writer limit)
  • Multiple processes need to write (replicas, HA failover)
  • You need online DDL / zero-downtime migrations at scale
  • Storage > 100 GB (not a hard limit, but WAL checkpoints get painful)

You can stay on SQLite when

  • Single-process deployment
  • Full DB fits comfortably in RAM for the read pool
  • You back up with pylon backup on a schedule
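The backup schedule in the last bullet can be as simple as a cron entry. This is illustrative: the working directory and any `pylon backup` options are assumptions about your deployment.

```shell
# Crontab entry: run pylon backup nightly at 03:15.
# The path and any backup-destination options depend on your setup.
15 3 * * * cd /srv/myapp && pylon backup
```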

On Pylon Cloud

Cloud autoscales the HTTP, WebSocket, and shard tiers separately. You don’t pre-provision — you pay for what you use. The numbers above are useful for self-hosted capacity planning; on Cloud you don’t think about them. For multiplayer apps with sticky shard connections, latency to your players matters — set the workspace region accordingly when signing up.

What’s NOT measured here

  • Multi-client read contention (connection-pool fair-share)
  • TLS handshake cost (reverse proxy terminates TLS)
  • Network RTT — production numbers will be bounded by network first
  • Shard tick budget for realtime game state — depends on SimState::tick

For a real capacity estimate under your workload, run pylon bench against a representative fixture and the manifest you’ll ship with.