sst.config.ts provisions Aurora Postgres, an S3 bucket for uploads, an ECS Fargate cluster, a load balancer with WebSocket + SSE + shard ports forwarded, and a CloudFront CDN.
The default shape is stateless: Postgres holds app data + sessions, S3 holds files. The container can scale horizontally without losing state. A working reference config ships in deploy/sst/sst.config.ts.
What you get
- Aurora Serverless v2 Postgres for app data and sessions (auto-scaling 0.5–2 ACU, ~$15/mo minimum)
- S3 bucket for file uploads (linked to the service for IAM)
- ECS Fargate running the Pylon container (0.25 vCPU / 512 MB ~ $9/mo, horizontally scalable)
- Application Load Balancer with WebSocket + SSE + shard port forwarding and sticky sessions
- AWS Secrets Manager for the admin token + OAuth credentials
- CloudFront CDN in front of the ALB
- Route 53 + ACM for custom domains and TLS
Prerequisites
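As a baseline (an assumption, not the canonical list): an AWS account with credentials configured, Node.js 18+ with the SST CLI, and Docker for the local dev stack. Quick sanity checks:

```shell
# AWS credentials SST can pick up
aws sts get-caller-identity

# Node.js 18+ and the SST CLI (runs per-project via npx)
node --version
npx sst version

# Docker, for the local Postgres/MinIO stack
docker compose version
```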
Project layout
The default config — Aurora + S3
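A sketch of what that config looks like, assuming SST v3's aws components. Resource names, ports, and env var names here are illustrative; the canonical version ships in deploy/sst/sst.config.ts.

```typescript
/// <reference path="./.sst/platform/config.d.ts" />

export default $config({
  app(input) {
    return { name: "pylon", home: "aws" };
  },
  async run() {
    const vpc = new sst.aws.Vpc("Vpc");

    // Aurora Serverless v2 Postgres for app data + sessions
    const db = new sst.aws.Aurora("Db", { engine: "postgres", vpc });

    // S3 bucket for uploads; linking grants the service IAM access
    const uploads = new sst.aws.Bucket("Uploads");

    // Stored in AWS Secrets Manager, injected per stage
    const adminToken = new sst.Secret("PylonAdminToken");

    const cluster = new sst.aws.Cluster("Cluster", { vpc });
    new sst.aws.Service("Pylon", {
      cluster,
      cpu: "0.25 vCPU",
      memory: "0.5 GB",
      scaling: { min: 1, max: 4 },
      link: [db, uploads, adminToken],
      environment: { PYLON_FILES_PROVIDER: "s3" },
      loadBalancer: {
        ports: [{ listen: "443/https", forward: "8080/http" }],
        // Long-lived WebSocket/SSE connections also need sticky sessions
        // and a long idle timeout (see Troubleshooting below).
      },
    });
  },
});
```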
Add the file_storage plugin to your pylon.manifest.json so Pylon knows to use S3:
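A sketch of the manifest entry. The file_storage plugin name and the ${env.PYLON_S3_BUCKET} reference appear elsewhere in this doc; the surrounding keys are assumptions.

```json
{
  "plugins": {
    "file_storage": {
      "provider": "s3",
      "bucket": "${env.PYLON_S3_BUCKET}"
    }
  }
}
```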
Local development
Mirror the production stack locally so SQL queries, indexes, and policies behave identically. The cheapest way is Postgres in Docker + local-disk file storage — no MinIO needed for dev unless your code exercises S3-specific behavior.
docker-compose.dev.yml:
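A minimal sketch (the image tag and credentials are illustrative):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: pylon
      POSTGRES_PASSWORD: pylon
      POSTGRES_DB: pylon
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```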
A .env file at the project root keeps this out of your shell history:
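A sketch: DATABASE_URL and PYLON_FILES_PROVIDER are referenced elsewhere in this doc, but the values shown are illustrative.

```shell
DATABASE_URL=postgres://pylon:pylon@localhost:5432/pylon
PYLON_FILES_PROVIDER=local
```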
pylon dev reads .env automatically; restart the dev server when you change it.
To also test against S3 locally, point PYLON_FILES_PROVIDER=s3 at MinIO running in the same docker-compose:
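A sketch of the extra compose service, added under services: in docker-compose.dev.yml (credentials are illustrative; register the miniodata volume alongside pgdata):

```yaml
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: pylon
      MINIO_ROOT_PASSWORD: pylon-secret
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - miniodata:/data
```

Then set PYLON_FILES_PROVIDER=s3 and PYLON_S3_BUCKET in .env, and point the plugin's S3 endpoint (assuming it is configurable) at http://localhost:9000.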
The file_storage plugin treats MinIO as a drop-in S3 replacement — same wire protocol.
Set secrets
Before the first deploy, set the secret values via the SST CLI:
Deploy
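The two steps above, sketched with SST's CLI. The secret names are illustrative and must match what your sst.config.ts declares:

```shell
# Per-stage secrets (stored in AWS Secrets Manager)
npx sst secret set PylonAdminToken "$(openssl rand -hex 32)" --stage production
npx sst secret set OAuthClientSecret "your-oauth-secret" --stage production

# Deploy the stage
npx sst deploy --stage production
```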
Custom domain
The domain block under loadBalancer provisions an ACM certificate and points Route 53 at the ALB. If your DNS lives elsewhere, swap the dns provider:
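For example, with DNS hosted on Cloudflare (a sketch; the domain name is illustrative):

```typescript
loadBalancer: {
  domain: {
    name: "app.example.com",
    // Create the ACM validation records in Cloudflare instead of Route 53
    dns: sst.cloudflare.dns(),
  },
  // ...ports as in the default config
},
```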
Multiple environments
Set the removal config to make non-prod environments tear down cleanly:
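The standard SST pattern, in the app() block of sst.config.ts:

```typescript
app(input) {
  return {
    name: "pylon",
    home: "aws",
    // Retain stateful resources in production; tear everything down elsewhere
    removal: input?.stage === "production" ? "retain" : "remove",
  };
},
```

Then npx sst deploy --stage staging and npx sst remove --stage staging behave as expected: staging is fully destroyable, production keeps its database and bucket.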
Alternative: single-replica with local state (EFS + SQLite)
If you genuinely want to run Pylon as a single replica with SQLite + local files (e.g. an internal tool, a hobby app, or a hard requirement to avoid Postgres + S3), you can mount EFS instead of using Aurora and S3:
- ✅ ~$10/mo cheaper (no Aurora minimum)
- ✅ One backing service to think about
- ❌ Cannot scale horizontally (SQLite is single-writer; mounting EFS into multiple containers corrupts the DB)
- ❌ EFS latency (~5ms) is meaningfully slower than Aurora (~1ms in-VPC)
- ❌ Backups are your responsibility (Aurora has them built-in)
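A sketch of the swap, assuming SST v3's Efs component and Service volume mounts; the mount path and env var value are illustrative:

```typescript
const efs = new sst.aws.Efs("PylonData", { vpc });

new sst.aws.Service("Pylon", {
  cluster,
  // SQLite is single-writer: pin the service to exactly one replica
  scaling: { min: 1, max: 1 },
  volumes: [{ efs, path: "/var/lib/pylon" }],
  environment: { PYLON_FILES_PROVIDER: "local" },
});
```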
CDN in front of the ALB
For static-asset caching and global edge presence, put CloudFront in front of the ALB and let it cache responses according to their Cache-Control headers. WebSocket and SSE traffic should bypass the CDN — point your sync engine's wsUrl directly at the ALB:
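A hypothetical client-side sketch: wsUrl comes from this doc, but the createPylonClient name and option shape are assumptions.

```typescript
const client = createPylonClient({
  // CloudFront distribution: static assets cached at the edge
  url: "https://app.example.com",
  // ALB directly: WebSocket/SSE traffic skips the CDN entirely
  wsUrl: "wss://lb.app.example.com/sync",
});
```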
Horizontal scale
The default config already scales horizontally (scaling: { min: 1, max: 4 }). Bump it for read-heavy apps:
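For example (cpuUtilization assumes SST's Service scaling options):

```typescript
scaling: {
  min: 2,
  max: 8,
  cpuUtilization: 60, // scale out earlier under read-heavy load
},
```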
- Postgres backend ✅ (DATABASE_URL set, no SQLite)
- External file storage ✅ (S3 via the file_storage plugin)
- Sticky WebSocket sessions ✅ (stickySessions: true on the load balancer)
For a shared cache across replicas, add the cache_client plugin pointed at ElastiCache Redis:
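A sketch of the manifest entry: cache_client is named in this doc, but the url key and the PYLON_REDIS_URL variable name are assumptions.

```json
{
  "plugins": {
    "cache_client": {
      "url": "${env.PYLON_REDIS_URL}"
    }
  }
}
```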
Observability
- Metrics: Pylon exposes /metrics in Prometheus format. Scrape with AWS Managed Prometheus or Grafana Cloud.
- Traces: Add OpenTelemetry exporter env vars and SST will inject the right IAM permissions for X-Ray.
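The OTEL_* names below are the standard OpenTelemetry environment variables; the collector endpoint value is illustrative. In the Service's environment block:

```typescript
environment: {
  OTEL_SERVICE_NAME: "pylon",
  OTEL_EXPORTER_OTLP_ENDPOINT: "http://adot-collector:4317",
  OTEL_EXPORTER_OTLP_PROTOCOL: "grpc",
},
```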
Compared to Pylon Cloud
If you’re not committed to AWS, Pylon Cloud gives you the same managed Postgres + S3 + TLS + WebSocket-aware load balancing with one CLI command (pylon deploy --target cloud) and per-use pricing. SST is the right choice when:
- You need to live in AWS (compliance, existing infra, RI commitments)
- You want to compose Pylon with other AWS services (Bedrock, SageMaker, IoT Core, Kinesis)
- You want full IaC control over networking, IAM, secrets
Troubleshooting
WebSocket connections fail with 504 — ALB idle timeout defaults to 60s. The default config bumps it to 1 hour via idleTimeout: "3600 seconds".
Cold start takes 10+ minutes the first time — Aurora Serverless v2’s first cold start provisions storage. Subsequent deploys are <2 min.
File uploads fail with 403 — confirm the service has the bucket linked (link: [uploads]) and that your manifest’s file_storage plugin reads ${env.PYLON_S3_BUCKET}. SST emits the bucket name into the env automatically; without link, the IAM permissions to write to it aren’t attached.
Local dev errors connection refused — Postgres isn’t running. docker compose -f docker-compose.dev.yml ps should show the db service Up. If not: docker compose up -d db.
Local dev sessions disappear on restart — sessions are stored in Postgres now (the same DATABASE_URL), so they persist as long as the Postgres volume does. If you docker compose down -v (note -v), the volume is destroyed and sessions clear.
Secrets aren’t visible in the container — make sure you ran sst secret set in the same stage you’re deploying to. Secrets are per-stage.
Cost optimization
| Component | Default cost | Optimization |
|---|---|---|
| Aurora Serverless v2 (0.5 ACU min) | ~$15/mo | Keep the 0.5 ACU floor and let it scale up only under load |
| Fargate (0.25 vCPU / 512 MB) | ~$9/mo | Use Fargate Spot for non-prod |
| ALB | ~$16/mo | Single ALB for all stages via shared listeners |
| S3 (bucket + traffic) | ~$0–5/mo | Lifecycle policies for old uploads; CloudFront in front to cache |
| NAT Gateway | ~$32/mo | Skip the NAT (single AZ) for non-prod; use VPC endpoints in prod |
| CloudWatch logs | ~$0.50/GB | Set retention: "7 days" for non-prod |
| Secrets Manager | $0.40/secret/mo | Few secrets; minor cost |
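Some of the table's non-prod tweaks expressed as Service options; capacity: "spot" and the logging shape assume SST's current Service API:

```typescript
new sst.aws.Service("Pylon", {
  cluster,
  capacity: "spot",                  // Fargate Spot for non-prod
  logging: { retention: "7 days" },  // trim CloudWatch log costs
  // ...rest as in the default config
});
```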
Reference config
The full working config is at deploy/sst/sst.config.ts. Clone it as a starting point:
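For example:

```shell
# Copy the reference config into your project root, then deploy a dev stage
cp deploy/sst/sst.config.ts ./sst.config.ts
npx sst install        # pulls the providers the config declares
npx sst deploy --stage dev
```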