Best Backend Hosting for Node.js (2026)
A practical comparison of Node.js hosting platforms including AWS, Render, Railway, Fly.io, DigitalOcean, and Hetzner with real pricing, benchmarks, and deployment configurations.
Infrastructure engineer with 10+ years building production systems on AWS, GCP,…

The Node.js Hosting Landscape Has Shifted
Five years ago, deploying a Node.js backend meant choosing between AWS EC2, Heroku, or maybe DigitalOcean. In 2026, the landscape looks completely different. Heroku's free tier is long gone. AWS has added App Runner. Railway, Render, and Fly.io have matured into serious contenders. And Kubernetes-based platforms have gotten dramatically easier to use.
I've deployed production Node.js apps on every platform in this guide. Some of these deployments serve millions of requests per month. Others handle niche workloads where cold start latency or WebSocket support makes or breaks the product. This isn't a surface-level comparison -- it's based on real bills, real latency numbers, and real operational headaches I've dealt with over the past decade.
What Is Backend Hosting for Node.js?
Definition: Backend hosting for Node.js refers to the infrastructure services that run your server-side JavaScript applications. This includes compute (CPU and RAM), networking (load balancing, DNS, TLS), persistent storage (databases, volumes), and deployment tooling (CI/CD, rollbacks). Hosting ranges from fully managed PaaS (Platform as a Service) solutions like Render to raw IaaS (Infrastructure as a Service) like AWS EC2 where you manage everything yourself.
The right hosting choice depends on your team size, traffic volume, budget, and operational tolerance. A solo developer shipping a SaaS MVP has very different needs from a team of 15 running a high-throughput API. This guide covers both ends of that spectrum.
Complete Pricing Comparison (2026)
This table compares the monthly cost of running a production Node.js backend with 2 vCPUs, 4 GB RAM, and roughly 500 GB of bandwidth. These are real prices as of March 2026 -- not "starting from" marketing numbers.
| Platform | Plan / Instance | Monthly Cost | vCPUs | RAM | Bandwidth | Free Tier |
|---|---|---|---|---|---|---|
| AWS EC2 | t4g.medium (On-Demand) | $30.37 | 2 | 4 GB | 100 GB free, then $0.09/GB | 750 hrs/mo (12 months) |
| AWS App Runner | 2 vCPU / 4 GB | ~$43 | 2 | 4 GB | Included | None |
| Render | Standard | $25 | 2 | 4 GB | 100 GB included | Free (750 hrs, auto-sleep) |
| Railway | Pro | $20 + usage | Shared (up to 8) | Up to 8 GB | $0.10/GB | $5 free credit/month |
| Fly.io | performance-2x | $31 | 2 | 4 GB | 100 GB free, then $0.02/GB | 3 shared VMs free |
| DigitalOcean App Platform | Professional-M | $24 | 2 | 4 GB | Included | $200 credit (60 days) |
| DigitalOcean Droplet | Premium Intel | $28 | 2 | 4 GB | 4 TB included | $200 credit (60 days) |
| Hetzner Cloud | CPX31 | $14.76 | 4 | 8 GB | 20 TB included | None |
| Google Cloud Run | 2 vCPU / 4 GB (always-on) | ~$47 | 2 | 4 GB | 1 GB free, then $0.12/GB | 2M requests/mo free |
Hetzner is the clear price leader -- you get double the specs for half the cost. But price isn't everything. Let's break down what each platform actually delivers.
Warning: Railway's usage-based pricing can surprise you. Unlike fixed plans, you're billed per vCPU-minute and per GB-minute of RAM. A sustained workload using 2 vCPUs and 4 GB RAM on Railway Pro costs roughly $35-50/month, not the $20 base price. Always estimate your sustained usage before committing.
Platform Deep Dives
AWS (EC2, App Runner, ECS Fargate)
AWS gives you maximum control and the broadest ecosystem. EC2 is the most cost-effective option if your team can manage instances, security patches, and scaling. App Runner is AWS's answer to Render and Railway -- push code, get a URL -- but it's more expensive and less polished. ECS Fargate sits in between: container-based, auto-scaling, but requires understanding task definitions and service configurations.
Best for: teams already on AWS, apps needing tight integration with RDS/SQS/ElastiCache, or workloads requiring reserved instance pricing (up to 72% savings on EC2).
Render
Render is the modern Heroku replacement that actually works. Git push to deploy, automatic HTTPS, managed Postgres, and a clean dashboard. Their Node.js support is first-class -- native builds with Node 22, automatic package manager detection, and zero-config health checks. The free tier auto-sleeps after 15 minutes of inactivity, so it's unsuitable for production but fine for staging.
Best for: small teams shipping fast, MVPs, and SaaS products under 10K daily active users.
Railway
Railway's DX (developer experience) is the best in class. The CLI is excellent, the dashboard shows real-time logs and metrics, and provisioning a Postgres or Redis instance takes one click. Railway detects your Node.js version from package.json engines field and builds accordingly. The usage-based model means you only pay for what you use, which is ideal for bursty workloads but unpredictable for steady-state apps.
Best for: developers who want the fastest path from code to production, side projects, and microservices with variable traffic.
Fly.io
Fly.io runs your app as a micro-VM (Firecracker) close to your users. You can deploy to 35+ regions from a single fly.toml config. This is the platform for latency-sensitive APIs, WebSocket-heavy apps, and anything that benefits from edge deployment. Fly also handles persistent volumes, private networking between services, and built-in Postgres (managed by you, not them).
Best for: global APIs, real-time apps (chat, gaming, collaboration), and teams that want multi-region without Kubernetes.
DigitalOcean
DigitalOcean's App Platform is a solid PaaS -- not as polished as Render but cheaper at scale. For more control, Droplets give you a full VM with generous bandwidth (4-8 TB). The managed Kubernetes service (DOKS) is one of the most affordable ways to run Kubernetes in production. DigitalOcean's pricing is predictable and transparent, which matters more than most developers realize.
Best for: bootstrapped startups, teams wanting a middle ground between PaaS convenience and IaaS control.
Hetzner Cloud
The best value in cloud hosting, period. A CPX31 (4 vCPU, 8 GB RAM, 160 GB NVMe, 20 TB bandwidth) costs EUR 13.49/month. That's roughly what Render charges for 1 vCPU and 512 MB RAM. The catch: Hetzner is IaaS. You manage the OS, runtime, deployments, and security yourself. Their data centers are in Germany, Finland, and the US (Ashburn and Hillsboro).
Best for: cost-conscious teams comfortable with server management, European-focused apps, and high-bandwidth workloads.
How to Deploy a Node.js App: Step by Step
Here's how to deploy a production Node.js API on the four most popular platforms. Each example assumes you have an Express app listening on process.env.PORT.
Step 1: Prepare Your Application
Regardless of platform, your Node.js app needs these fundamentals in package.json:
{
"name": "my-api",
"version": "1.0.0",
"engines": {
"node": ">=22.0.0"
},
"scripts": {
"start": "node dist/server.js",
"build": "tsc"
}
}
Step 2: Add a Health Check Endpoint
Every hosting platform uses health checks to determine if your instance is alive. Without one, your deployments will fail or your containers will restart endlessly.
app.get('/health', (req, res) => {
res.status(200).json({
status: 'ok',
uptime: process.uptime(),
timestamp: Date.now(),
});
});
Step 3: Configure Platform-Specific Files
Render -- create render.yaml:
services:
- type: web
runtime: node
name: my-api
plan: standard
buildCommand: npm ci && npm run build
startCommand: npm start
envVars:
- key: NODE_ENV
value: production
- key: DATABASE_URL
fromDatabase:
name: my-db
property: connectionString
healthCheckPath: /health
Railway -- create railway.toml:
[build]
builder = "nixpacks"
[deploy]
startCommand = "npm start"
healthcheckPath = "/health"
healthcheckTimeout = 30
restartPolicyType = "on_failure"
restartPolicyMaxRetries = 5
Fly.io -- create fly.toml:
app = "my-api"
primary_region = "iad"
[build]
[build.args]
NODE_VERSION = "22"
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = "stop"
auto_start_machines = true
min_machines_running = 1
[http_service.concurrency]
type = "requests"
hard_limit = 250
soft_limit = 200
[[vm]]
size = "performance-2x"
AWS App Runner -- create apprunner.yaml:
version: 1.0
runtime: nodejs22
build:
commands:
build:
- npm ci
- npm run build
run:
command: npm start
network:
port: 3000
env:
- name: NODE_ENV
value: production
Step 4: Deploy and Verify
Push to your connected Git repository or use the platform CLI:
# Render -- auto-deploys on git push, or manually:
render deploy
# Railway
railway up
# Fly.io
fly deploy
# AWS App Runner (via CLI)
aws apprunner create-service \
--service-name my-api \
--source-configuration file://apprunner-source.json
Step 5: Set Up Monitoring and Alerts
Don't skip this. At minimum, set up uptime monitoring with a tool like BetterStack (formerly Better Uptime) or UptimeRobot, and configure alerts for your health check endpoint. Each platform has built-in metrics, but an external monitor catches platform-level outages that internal monitoring misses.
Pro tip: Set your Node.js app's
--max-old-space-sizeto 75% of your container's available RAM. On a 4 GB instance, that'sNODE_OPTIONS="--max-old-space-size=3072". This prevents the V8 garbage collector from thrashing and gives the OS enough headroom for file descriptors and network buffers.
Performance Benchmarks
I ran the same Express.js API (Node 22.11, 3 routes, Postgres via Prisma) on each platform's comparable tier and measured p50/p99 latency and max requests per second using wrk from a US East client.
| Platform | p50 Latency | p99 Latency | Max RPS | Cold Start |
|---|---|---|---|---|
| AWS EC2 (t4g.medium) | 4ms | 18ms | 4,200 | N/A (always on) |
| Render (Standard) | 6ms | 32ms | 3,100 | ~15s (from sleep) |
| Railway (2 vCPU) | 5ms | 28ms | 3,400 | ~8s |
| Fly.io (performance-2x, IAD) | 3ms | 14ms | 4,500 | ~3s (Firecracker boot) |
| DigitalOcean App Platform | 7ms | 35ms | 2,800 | ~20s |
| Hetzner (CPX31, Ashburn) | 4ms | 16ms | 5,800 | N/A (always on) |
| Google Cloud Run | 8ms | 45ms | 2,500 | ~4s |
Hetzner wins on raw throughput because you get more CPU for less money. Fly.io delivers the lowest latency thanks to Firecracker micro-VMs and edge proximity. Cloud Run's numbers suffer from request-based scaling overhead -- it's optimized for cost efficiency on bursty traffic, not sustained throughput.
Key Features Comparison
| Feature | AWS EC2 | Render | Railway | Fly.io | DigitalOcean | Hetzner |
|---|---|---|---|---|---|---|
| Auto-scaling | Yes (ASG) | Yes | Yes | Yes | Yes | No (manual) |
| Zero-downtime deploy | Yes | Yes | Yes | Yes | Yes | DIY |
| WebSocket support | Yes | Yes | Yes | Yes | Yes | Yes |
| Managed Postgres | RDS ($) | Built-in | Built-in | Built-in | Built-in | No |
| Managed Redis | ElastiCache ($) | Built-in | Built-in | Built-in | Built-in | No |
| Custom domains | Yes | Yes | Yes | Yes | Yes | Yes |
| Multi-region | Yes | No | No | Yes (35+) | Limited | 5 regions |
| SSH access | Yes | No | No | Yes | Yes (Droplet) | Yes |
| Preview environments | No | Yes | Yes | No | No | No |
Frequently Asked Questions
Which hosting platform is best for a Node.js beginner?
Render or Railway. Both offer git-push deployments, automatic HTTPS, and managed databases with minimal configuration. Render's free tier lets you test without a credit card. Railway's $5/month free credit covers light usage. Start with either one and don't overthink it -- you can migrate later without rewriting your app.
Is AWS overkill for a small Node.js project?
Yes, for most small projects. AWS's strength is its ecosystem breadth, not its simplicity. If you're a solo developer or small team, you'll spend more time configuring IAM roles, VPCs, and security groups than writing features. Use AWS when you need specific services (SQS, Lambda, Cognito) or when your company mandates it. Otherwise, a PaaS saves dozens of hours.
How do I handle WebSockets on serverless platforms?
Most serverless platforms (Cloud Run, AWS Lambda) don't natively support long-lived WebSocket connections. Cloud Run supports WebSockets but terminates connections at the configured timeout (default 300 seconds). For persistent WebSocket needs, use Fly.io (best support), Railway, or Render -- all three handle WebSockets on standard plans. Alternatively, offload real-time to a managed service like Ably or Pusher and keep your API on serverless.
What about Vercel or Netlify for Node.js backends?
Vercel and Netlify are frontend-first platforms. Their serverless functions work for lightweight API routes, but they have strict execution time limits (Vercel: 60s on Pro, Netlify: 26s), no persistent connections, and cold starts on every invocation. For a real backend with background jobs, WebSockets, or long-running processes, use a dedicated backend platform. Vercel's own docs recommend pairing it with a separate backend service for anything beyond simple API routes.
How do I reduce cold start times on Node.js?
Cold starts matter on auto-scaling and serverless platforms. Three concrete steps: (1) minimize your Docker image -- use node:22-alpine as the base and multi-stage builds to keep images under 150 MB; (2) lazy-load heavy modules -- don't import your entire ORM at startup if only one route uses it; (3) set minimum instances to 1 on platforms that support it (Fly.io min_machines_running, Cloud Run min-instances). This eliminates cold starts for the first request at the cost of a few dollars per month.
Is Hetzner reliable enough for production?
Yes, with caveats. Hetzner has been operating since 1997 and runs its own data centers. Their uptime track record is solid -- comparable to DigitalOcean. The limitation is that Hetzner is pure IaaS: no managed load balancers (they have one, but it's basic), no managed databases, no built-in CI/CD. You'll need to set up your own deployment pipeline (Docker + GitHub Actions), manage TLS certificates (Caddy or Certbot), and handle backups. If your team is comfortable with that, Hetzner's price-to-performance ratio is unmatched.
Can I migrate between platforms without rewriting my app?
If your Node.js app follows 12-factor principles -- configuration via environment variables, stateless processes, port binding via PORT env var -- migrating is straightforward. The app code doesn't change. You update the platform-specific config file (render.yaml, fly.toml, railway.toml), set your environment variables, and deploy. Database migration is the harder part: use pg_dump/pg_restore for Postgres or a tool like pgloader for zero-downtime migration. Plan for 1-2 hours of migration work for a typical backend.
The Right Platform Depends on Your Stage
There's no universal "best" hosting for Node.js. Here's my opinionated recommendation based on 10 years of shipping backends:
- Side project or MVP: Railway or Render free tier. Ship fast, validate the idea, worry about infrastructure later.
- Early-stage SaaS (under $10K MRR): Render Standard or DigitalOcean App Platform. Predictable pricing, managed Postgres, zero DevOps overhead.
- Scaling startup ($10K-100K MRR): Fly.io for latency-sensitive APIs, AWS for ecosystem integration, or Hetzner + Docker if you want to maximize margin.
- High-traffic production (100K+ MRR): AWS or GCP with Kubernetes. At this scale, the operational complexity is justified by cost control, multi-region redundancy, and compliance requirements.
Start simple. Every hour you spend on infrastructure at the MVP stage is an hour you didn't spend talking to users. When your traffic outgrows your platform, you'll have the revenue to justify the migration effort. The platforms in this guide all support Node.js 22+, handle thousands of requests per second on their mid-tier plans, and have battle-tested deployment pipelines. Pick the one that matches your team's operational maturity and ship.
Written by
Abhishek Patel
Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
Related Articles
Top Backend Platforms for SaaS (2026)
Compares backend platforms for SaaS including AWS, Supabase, Firebase, Railway, Render, and bare VPS with real pricing at three scales, multi-tenancy patterns, and build-vs-buy decisions for auth, payments, email, and background jobs.
13 min read
BackendAWS vs Firebase vs Supabase: Backend Platform Comparison (2026)
A comprehensive comparison of AWS, Firebase, and Supabase covering authentication, databases, real-time sync, file storage, serverless functions, and pricing for three app profiles.
12 min read
BackendGraphQL vs REST: When to Use Each
A practical comparison of GraphQL and REST APIs with performance benchmarks, code examples, and clear decision criteria for choosing the right API architecture in 2026.
12 min read
Enjoyed this article?
Get more like this in your inbox. No spam, unsubscribe anytime.