Deployment
This guide covers everything you need to take a Remix V3 application from development to production. It builds on the tutorial deployment chapter with more detail on each topic.
Building for Production
During development, tsx compiles TypeScript on the fly. For production, compile once ahead of time:
{
"scripts": {
"dev": "tsx watch server.ts",
"build": "tsc",
"start": "node dist/server.js"
}
}
Update your tsconfig.json to emit JavaScript:
{
"compilerOptions": {
"strict": true,
"lib": ["ES2024", "DOM", "DOM.Iterable"],
"module": "ES2022",
"moduleResolution": "Bundler",
"target": "ES2022",
"outDir": "dist",
"rootDir": ".",
"declaration": true,
"sourceMap": true
},
"include": ["app/**/*", "server.ts"]
}
npm run build
npm run start
Source maps in production
The sourceMap: true option generates .map files alongside your JavaScript. These make error stack traces point to your TypeScript source instead of compiled output. Keep them in production but do not serve them to clients.
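Note that Node.js does not apply .map files to stack traces unless source map support is turned on, either with the --enable-source-maps CLI flag or at runtime. A minimal sketch of the runtime option (available since Node 16.6):

```typescript
// Enable source-mapped stack traces at runtime; equivalent to starting
// node with --enable-source-maps. Traces then point at your .ts sources,
// provided the .map files sit next to the compiled .js files in dist/.
process.setSourceMapsEnabled(true)
```

Calling this early in server.ts (before any error can be thrown) ensures every trace benefits.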
Environment Configuration
Use environment variables for all configuration that changes between environments:
// app/config.ts
export let config = {
port: Number(process.env.PORT ?? 3000),
nodeEnv: process.env.NODE_ENV ?? 'development',
sessionSecret: process.env.SESSION_SECRET!,
databaseUrl: process.env.DATABASE_URL!,
redisUrl: process.env.REDIS_URL,
// OAuth
googleClientId: process.env.GOOGLE_CLIENT_ID,
googleClientSecret: process.env.GOOGLE_CLIENT_SECRET,
// S3
s3Bucket: process.env.S3_BUCKET,
s3Region: process.env.S3_REGION,
awsAccessKeyId: process.env.AWS_ACCESS_KEY_ID,
awsSecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
}
// Validate required variables at startup
if (!config.sessionSecret) {
throw new Error('SESSION_SECRET environment variable is required')
}
if (!config.databaseUrl) {
throw new Error('DATABASE_URL environment variable is required')
}
Create a .env.example file (committed to git) that documents all variables:
# .env.example -- Copy to .env and fill in values
NODE_ENV=development
PORT=3000
SESSION_SECRET=change-me-to-a-long-random-string
DATABASE_URL=./db/app.sqlite
REDIS_URL=redis://localhost:6379
Validate at startup
Check for required environment variables when your application starts, not when they are first used. Failing fast with a clear error message is better than crashing at 3 AM when someone tries to log in.
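The per-variable checks above can be factored into a small fail-fast helper. This is a sketch; requireEnv is our name for it, not a Remix API:

```typescript
// Resolve a required environment variable or fail immediately with a
// clear message naming the missing variable.
function requireEnv(name: string): string {
  let value = process.env[name]
  if (!value) {
    throw new Error(`${name} environment variable is required`)
  }
  return value
}

// Usage at startup -- resolve everything once, so a missing variable
// crashes the boot, not a request handler hours later:
// let config = {
//   sessionSecret: requireEnv('SESSION_SECRET'),
//   databaseUrl: requireEnv('DATABASE_URL'),
// }
```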
Node.js Deployment with PM2
PM2 is a process manager that keeps your application running, restarts it on crashes, and manages logs.
Installation
npm install -g pm2
Starting Your Application
# Start with a name
pm2 start dist/server.js --name my-app
# Start with environment variables
pm2 start dist/server.js --name my-app --env production
# Start multiple instances (one per CPU core)
pm2 start dist/server.js --name my-app -i max
Ecosystem File
For repeatable deployments, create an ecosystem.config.cjs:
module.exports = {
apps: [
{
name: 'my-app',
script: 'dist/server.js',
instances: 'max',
exec_mode: 'cluster',
env: {
NODE_ENV: 'production',
PORT: 3000,
},
max_memory_restart: '500M',
log_date_format: 'YYYY-MM-DD HH:mm:ss',
error_file: './logs/error.log',
out_file: './logs/output.log',
merge_logs: true,
},
],
}
pm2 start ecosystem.config.cjs
PM2 Commands
pm2 status # Show running processes
pm2 logs my-app # View logs
pm2 restart my-app # Restart
pm2 reload my-app # Zero-downtime reload (cluster mode)
pm2 stop my-app # Stop
pm2 delete my-app # Remove from PM2
pm2 save # Save process list (auto-start on reboot)
pm2 startup # Generate startup script
Zero-downtime deploys with cluster mode
When running in cluster mode (-i max), pm2 reload restarts instances one at a time. Each new instance starts and begins accepting requests before the next old instance is stopped. This means zero downtime during deployments.
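Zero-downtime reloads also depend on the application shutting down cleanly when PM2 asks it to. By default PM2 sends SIGINT; a sketch of draining connections before exit, assuming a plain node:http server (enableGracefulShutdown is our helper name):

```typescript
import { createServer, type Server } from 'node:http'

// Stop accepting new connections on SIGINT (PM2's default stop signal)
// and exit once in-flight requests have finished.
function enableGracefulShutdown(server: Server) {
  process.on('SIGINT', () => {
    server.close(() => process.exit(0))
  })
}

let server = createServer((req, res) => res.end('ok'))
enableGracefulShutdown(server)
server.listen(Number(process.env.PORT ?? 3000))
```

Without this, PM2 eventually force-kills the process, dropping any requests still in flight.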
Docker Deployment
Multi-Stage Dockerfile
# Stage 1: Build
FROM node:24-slim AS build
WORKDIR /app
# Install dependencies first (cached layer)
COPY package.json package-lock.json ./
RUN npm ci
# Copy source and build
COPY tsconfig.json ./
COPY app/ ./app/
COPY server.ts ./
RUN npm run build
# Stage 2: Production
FROM node:24-slim
WORKDIR /app
# Install production dependencies only
COPY package.json package-lock.json ./
RUN npm ci --omit=dev
# Copy compiled application
COPY --from=build /app/dist ./dist
# Copy static assets and migrations
COPY public/ ./public/
COPY db/migrations/ ./db/migrations/
# Create directories for runtime data
RUN mkdir -p data logs uploads
# Non-root user for security
RUN addgroup --system app && adduser --system --ingroup app app
RUN chown -R app:app /app
USER app
ENV NODE_ENV=production
ENV PORT=3000
EXPOSE 3000
# Run migrations then start the server
CMD ["sh", "-c", "node dist/migrate.js && node dist/server.js"]
.dockerignore
node_modules
.env
.env.*
.git
.gitignore
*.md
db/*.sqlite
sessions/
uploads/
logs/
dist/
Building and Running
# Build the image
docker build -t my-app .
# Run with environment variables
docker run -d \
--name my-app \
-p 3000:3000 \
-e SESSION_SECRET=your-secret \
-e DATABASE_URL=postgres://user:pass@host:5432/mydb \
-e REDIS_URL=redis://host:6379 \
-v ./uploads:/app/uploads \
my-app
Docker Compose
For local development with dependencies:
# docker-compose.yml
services:
app:
build: .
ports:
- "3000:3000"
environment:
- NODE_ENV=production
- SESSION_SECRET=dev-secret
- DATABASE_URL=postgres://app:password@db:5432/myapp
- REDIS_URL=redis://redis:6379
volumes:
- uploads:/app/uploads
depends_on:
- db
- redis
db:
image: postgres:16
environment:
POSTGRES_USER: app
POSTGRES_PASSWORD: password
POSTGRES_DB: myapp
volumes:
- pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
pgdata:
uploads:
docker compose up -d
Database Setup for Production
PostgreSQL
PostgreSQL is the recommended database for production. It handles concurrent connections, has robust backup tools, and scales well.
import { Pool } from 'pg'
import { createPostgresDatabaseAdapter } from 'remix/data-table-postgres'
import { createDatabase } from 'remix/data-table'
let pool = new Pool({
connectionString: process.env.DATABASE_URL,
max: 20, // Max connections in pool
idleTimeoutMillis: 30000, // Close idle connections after 30s
connectionTimeoutMillis: 5000, // Fail if connection takes > 5s
ssl: process.env.NODE_ENV === 'production'
? { rejectUnauthorized: false } // skips certificate verification; prefer a CA bundle if your provider supplies one
: false,
})
let adapter = createPostgresDatabaseAdapter(pool)
let db = createDatabase(adapter)
Connection Pooling
For high-traffic applications, use an external connection pooler like PgBouncer:
Application (20 connections) -> PgBouncer (5 connections) -> PostgreSQL
This is important when running multiple application instances, as each instance opens its own pool. Without a pooler, 4 instances with 20 connections each would open 80 connections to PostgreSQL.
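The arithmetic behind that warning, spelled out:

```typescript
// Without an external pooler, every application instance opens its own
// pool, so the database sees instances x pool size connections in total.
function totalConnections(instances: number, poolMax: number): number {
  return instances * poolMax
}

// 4 instances with max: 20 each -> 80 connections to PostgreSQL
```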
Running Migrations
Create a standalone migration script that runs before the application starts:
// migrate.ts
import { Pool } from 'pg'
import { createPostgresDatabaseAdapter } from 'remix/data-table-postgres'
import { createMigrationRunner } from 'remix/data-table/migrations'
import { loadMigrations } from 'remix/data-table/migrations/node'
let pool = new Pool({ connectionString: process.env.DATABASE_URL })
let adapter = createPostgresDatabaseAdapter(pool)
let migrations = await loadMigrations('./db/migrations')
let runner = createMigrationRunner(adapter, migrations)
await runner.up()
console.log('Migrations applied successfully.')
await pool.end()
Run it as part of your deployment:
node dist/migrate.js && node dist/server.js
Run migrations separately from the application
Do not run migrations inside the application startup. If you run multiple instances, they might try to run migrations simultaneously. Use a separate migration step that runs once before starting any instances.
Session Storage for Production
Redis
Redis is the recommended session storage for production:
import { createRedisSessionStorage } from 'remix/session-storage-redis'
let sessionStorage = createRedisSessionStorage({
url: process.env.REDIS_URL!,
})
Redis is fast (sub-millisecond reads), supports automatic expiration (sessions clean up themselves), and is accessible from multiple application instances.
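To see what "sessions clean up themselves" buys you, here is the bookkeeping you would otherwise do by hand, illustrated with an in-memory map (for illustration only; an in-memory store defeats multi-instance deploys):

```typescript
// A toy TTL store. Redis does all of this server-side via EXPIRE:
// keys vanish on their own, with no sweep logic in your application.
let sessions = new Map<string, { data: unknown; expiresAt: number }>()

function setSession(id: string, data: unknown, ttlSeconds: number) {
  sessions.set(id, { data, expiresAt: Date.now() + ttlSeconds * 1000 })
}

function getSession(id: string): unknown | null {
  let entry = sessions.get(id)
  if (!entry) return null
  if (Date.now() > entry.expiresAt) {
    sessions.delete(id) // expired -- manual cleanup Redis makes unnecessary
    return null
  }
  return entry.data
}
```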
Memcache
An alternative to Redis:
import { createMemcacheSessionStorage } from 'remix/session-storage-memcache'
let sessionStorage = createMemcacheSessionStorage({
servers: [process.env.MEMCACHE_URL!],
})
Redis vs Memcache for sessions
Both work well for sessions. Redis is more popular, has more features (persistence, pub/sub), and is the safer choice. Memcache is simpler and slightly faster for pure key-value lookups.
Static Assets and Caching
Serving Static Files
import { staticFiles } from 'remix/static-middleware'
let router = createRouter({
middleware: [
staticFiles('./public', {
maxAge: 60 * 60 * 24 * 365, // 1 year
immutable: true,
}),
],
})
Reverse Proxy for Static Files
For best performance, let Nginx serve static files directly:
server {
listen 443 ssl http2;
server_name myapp.com;
# Serve static files directly
location /static/ {
alias /app/public/;
expires 1y;
add_header Cache-Control "public, immutable";
}
# Proxy everything else to the application
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Compression
Add response compression to reduce bandwidth:
import { compression } from 'remix/compression-middleware'
let router = createRouter({
middleware: [
compression(),
// ... other middleware
],
})This automatically compresses responses with gzip or brotli based on the client's Accept-Encoding header. Alternatively, let Nginx handle compression:
gzip on;
gzip_types text/plain text/css application/json application/javascript text/xml;
gzip_min_length 1000;
HTTPS and Reverse Proxy
Nginx Configuration
# /etc/nginx/sites-available/myapp
upstream app {
server 127.0.0.1:3000;
server 127.0.0.1:3001; # If running multiple instances
}
server {
listen 443 ssl http2;
server_name myapp.com;
ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Frame-Options "DENY" always;
add_header X-Content-Type-Options "nosniff" always;
# Static files
location /public/ {
alias /app/public/;
expires 1y;
add_header Cache-Control "public, immutable";
}
# Application
location / {
proxy_pass http://app;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# Upload size limit
client_max_body_size 50M;
}
server {
listen 80;
server_name myapp.com;
return 301 https://$host$request_uri;
}
Let's Encrypt
Get free TLS certificates:
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d myapp.com
Certbot automatically configures Nginx and sets up certificate renewal.
Health Checks
Add a health check endpoint for load balancers and monitoring:
import { get } from 'remix/fetch-router/routes'
let healthRoute = get('/health')
router.map(healthRoute, async () => {
// Optionally check dependencies
try {
await db.execute(sql`SELECT 1`)
} catch {
return new Response(JSON.stringify({ status: 'unhealthy', database: 'down' }), {
status: 503,
headers: { 'Content-Type': 'application/json' },
})
}
return new Response(JSON.stringify({
status: 'healthy',
uptime: process.uptime(),
timestamp: new Date().toISOString(),
}), {
headers: { 'Content-Type': 'application/json' },
})
})
Keep health checks fast
Health check endpoints are called frequently (every few seconds). Do not do expensive operations. A simple database ping is enough to verify connectivity.
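One way to keep the endpoint fast even when a dependency hangs is to bound the ping with a timeout. A sketch, assuming the db.execute call from the handler above; withTimeout is our helper, not a Remix API:

```typescript
// Race a promise against a timer so a slow dependency cannot make the
// health endpoint itself slow. Rejects if `ms` elapses first.
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  let timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
  })
  try {
    return await Promise.race([promise, timeout])
  } finally {
    clearTimeout(timer) // don't leave the timer holding the event loop open
  }
}

// In the health handler:
// await withTimeout(db.execute(sql`SELECT 1`), 1000)
```

A rejected race is caught by the handler's existing try/catch and reported as unhealthy.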
Logging and Monitoring
Structured Logging
Use structured (JSON) logging in production for easy parsing by log aggregation tools:
function log(level: string, message: string, data?: Record<string, unknown>) {
console.log(JSON.stringify({
level,
message,
timestamp: new Date().toISOString(),
...data,
}))
}
// In a middleware
async function requestLogger(context: any, next: () => Promise<Response>): Promise<Response> {
let start = Date.now()
let response = await next()
let duration = Date.now() - start
log('info', 'request', {
method: context.request.method,
path: new URL(context.request.url).pathname,
status: response.status,
duration,
})
return response
}
Error Tracking
Log unhandled errors with context:
router.onError((error, context) => {
log('error', 'unhandled error', {
error: error.message,
stack: error.stack,
method: context.request.method,
path: new URL(context.request.url).pathname,
})
return new Response('Internal Server Error', { status: 500 })
})
Horizontal Scaling
Multiple Instances
Run multiple application instances behind a load balancer:
# PM2 cluster mode
pm2 start dist/server.js -i max
# Docker with multiple containers
docker compose up --scale app=4
Requirements for horizontal scaling:
- Sessions in Redis -- Not filesystem storage
- File uploads in S3 -- Not local filesystem
- Stateless handlers -- No in-memory state that varies between instances
- Database with connection pooling -- PostgreSQL or MySQL
Load Balancing
Nginx distributes requests across instances:
upstream app {
least_conn; # Send to the instance with fewest connections
server 127.0.0.1:3000;
server 127.0.0.1:3001;
server 127.0.0.1:3002;
server 127.0.0.1:3003;
}
Alternative Runtimes
Remix V3 is built on web-standard APIs, so it runs on multiple JavaScript runtimes.
Bun
Bun is a drop-in replacement for Node.js with faster startup and built-in TypeScript support:
# No build step needed -- Bun runs TypeScript directly
bun run server.ts
# Or run compiled JavaScript
bun run dist/server.js
Bun is compatible with most Node.js packages. If your application uses only web-standard APIs and npm packages, switching is usually seamless.
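If a code path does need to differ between runtimes, each one exposes a distinctive global you can check. A sketch (detectRuntime is our name; the Bun and Deno globals are real):

```typescript
// Bun exposes a `Bun` global and Deno a `Deno` global; plain Node has
// neither. Useful for the rare runtime-specific branch.
function detectRuntime(): 'bun' | 'deno' | 'node' {
  if (typeof (globalThis as any).Bun !== 'undefined') return 'bun'
  if (typeof (globalThis as any).Deno !== 'undefined') return 'deno'
  return 'node'
}
```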
Deno
Deno has built-in TypeScript support and a security-first model:
deno run --allow-net --allow-read --allow-env --allow-write server.ts
Cloud Platforms
Railway
# Install Railway CLI
npm install -g @railway/cli
# Login and initialize
railway login
railway init
# Deploy
railway up
Set environment variables in the Railway dashboard. Railway automatically detects Node.js applications and runs npm start.
Fly.io
Create a fly.toml:
app = "my-remix-app"
primary_region = "iad"
[build]
dockerfile = "Dockerfile"
[env]
NODE_ENV = "production"
PORT = "3000"
[http_service]
internal_port = 3000
force_https = true
auto_stop_machines = true
auto_start_machines = true
[[vm]]
cpu_kind = "shared"
cpus = 1
memory_mb = 512
fly launch
fly secrets set SESSION_SECRET=your-secret DATABASE_URL=your-db-url
fly deploy
Render
Create a render.yaml:
services:
- type: web
name: my-remix-app
env: node
buildCommand: npm ci && npm run build
startCommand: node dist/migrate.js && node dist/server.js
envVars:
- key: NODE_ENV
value: production
- key: SESSION_SECRET
generateValue: true
- key: DATABASE_URL
fromDatabase:
name: mydb
property: connectionString
databases:
- name: mydb
plan: starter
Production Checklist
Before Deploying
- [ ] npm run build succeeds without errors
- [ ] All tests pass (npx remix test)
- [ ] Environment variables documented in .env.example
- [ ] .env is in .gitignore
- [ ] Database migrations are up to date
Infrastructure
- [ ] HTTPS enabled (Let's Encrypt or provider-managed)
- [ ] Reverse proxy configured (Nginx, Caddy, or provider-managed)
- [ ] Database is PostgreSQL or MySQL (not SQLite for multi-instance)
- [ ] Sessions stored in Redis (not filesystem)
- [ ] File uploads stored in S3 (not local filesystem)
Performance
- [ ] Response compression enabled
- [ ] Static assets have long cache headers
- [ ] Database connections use pooling
- [ ] Health check endpoint exists
Reliability
- [ ] Process manager configured (PM2 or container orchestration)
- [ ] Logging outputs structured JSON
- [ ] Error tracking configured
- [ ] Database backups scheduled
- [ ] Migrations run as a separate step before starting
Security
- [ ] NODE_ENV=production set
- [ ] Strong session secret (32+ characters)
- [ ] Cookies configured with httpOnly, secure, sameSite
- [ ] CSRF protection enabled
- [ ] Rate limiting on auth endpoints
Related
- Tutorial: Deployment -- Step-by-step first deployment
- Security Guide -- Production security hardening
- Database Guide -- Production database configuration