Deployment
Build
```bash
npm run build
node build
```

That's it. The adapter bundles everything into a standalone Node.js server - your SvelteKit app, WebSocket handler, and all $live/ modules in one process.
Docker
uWebSockets.js is a native C++ addon, so your Docker image needs glibc >= 2.38. Build inside the container to be safe.
```dockerfile
FROM node:22-trixie-slim AS build
# git is required - uWebSockets.js is installed from GitHub, not npm
RUN apt-get update && apt-get install -y --no-install-recommends git && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage - no git needed
FROM node:22-trixie-slim
WORKDIR /app
COPY --from=build /app/build build/
COPY --from=build /app/node_modules node_modules/
COPY package.json .
EXPOSE 3000
CMD ["node", "build"]
```

Important: Use Debian Trixie or Ubuntu 24.04+ based images. Bookworm-based images (node:*-slim, node:*-bookworm) ship glibc 2.36, which is too old. Don't use Alpine - uWebSockets.js requires glibc, not musl.
Environment variables
| Variable | Default | Description |
|---|---|---|
| PORT | 3000 | Server port |
| HOST | 0.0.0.0 | Bind address |
| ORIGIN | (derived) | Public URL (e.g. https://myapp.com) |
| SSL_CERT | - | Path to TLS certificate |
| SSL_KEY | - | Path to TLS private key |
| SHUTDOWN_TIMEOUT | 30 | Graceful shutdown wait in seconds |
| CLUSTER_WORKERS | - | Worker threads (auto for CPU count) |
```bash
# Behind nginx
ORIGIN=https://myapp.com PORT=8080 node build

# Native TLS
SSL_CERT=./cert.pem SSL_KEY=./key.pem PORT=443 node build

# Multi-core
CLUSTER_WORKERS=auto node build
```

TLS

svelte-adapter-uws handles TLS natively via uWebSockets.js SSLApp. No Nginx or Caddy needed:

```bash
SSL_CERT=/path/to/cert.pem SSL_KEY=/path/to/key.pem node build
```

The client store automatically uses wss:// when the page is served over HTTPS.
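The https-to-wss mapping can be sketched as follows; this is illustrative only (the bundled client store performs the equivalent derivation internally, and `wsUrl` is a hypothetical name):

```javascript
// Derive the WebSocket URL from the page URL: https -> wss, http -> ws.
// Both schemes are "special" in the WHATWG URL spec, so reassigning
// url.protocol between them is permitted.
function wsUrl(pageUrl) {
  const url = new URL(pageUrl);
  url.protocol = url.protocol === 'https:' ? 'wss:' : 'ws:';
  return url.href;
}
```

Because the scheme is derived rather than hardcoded, the same client bundle works unchanged behind native TLS, a reverse proxy, or plain HTTP in development.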
Clustering
The adapter supports multi-core scaling. Each worker handles connections independently, and platform.publish() is automatically relayed across all workers.
```bash
CLUSTER_WORKERS=auto node build
```

Two modes (auto-detected):

- reuseport (Linux) - each worker binds to the same port via SO_REUSEPORT. No single-threaded bottleneck.
- acceptor (macOS/Windows) - a primary thread distributes connections to workers.
Docker replicas vs CLUSTER_WORKERS
If you have external pub/sub (Redis, Postgres LISTEN/NOTIFY) handling cross-process messaging, you don’t need CLUSTER_WORKERS. Just run multiple replicas:
```yaml
# docker-compose.yml
services:
  app:
    build: .
    command: node build
    network_mode: host
    environment:
      - PORT=443
      - SSL_CERT=/certs/cert.pem
      - SSL_KEY=/certs/key.pem
    deploy:
      replicas: 4
```

On Linux, SO_REUSEPORT lets multiple processes bind to the same port. The kernel distributes connections across them.
| Approach | When to use |
|---|---|
| CLUSTER_WORKERS | Single machine, no Docker/k8s managing processes |
| Docker replicas | Production with infrastructure managing processes + external pub/sub |
Cross-worker safety
| Method | Cross-worker? | Safe in live()? |
|---|---|---|
| ctx.publish() | Yes (relayed) | Yes |
| ctx.platform.send() | N/A (single ws) | Yes |
| ctx.platform.sendTo() | No (local only) | Use with caution |
| ctx.platform.subscribers() | No (local only) | Use with caution |
| ctx.platform.connections | No (local only) | Use with caution |
ctx.publish() is always safe - it relays across workers and, with Redis wrapping, across instances. For targeted messaging, prefer publish() with a user-specific topic over sendTo().
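The per-user-topic pattern can be sketched as below. `topicFor` and `notifyUser` are hypothetical helpers, not adapter API, and the stub `ctx` stands in for the real live context:

```javascript
// Give each user a private topic so targeted messages stay correct
// under clustering: unlike sendTo(), publish() is relayed to every
// worker (and, with Redis wrapping, every instance).
function topicFor(userId) {
  return `user:${userId}`;
}

function notifyUser(ctx, userId, payload) {
  // Reaches the user no matter which worker holds their socket.
  ctx.publish(topicFor(userId), payload);
}

// Minimal demonstration with a stub ctx that records published messages:
const published = [];
const ctx = { publish: (topic, data) => published.push({ topic, data }) };
notifyUser(ctx, 42, { type: 'order-shipped' });
```

The client side would subscribe each user to their own topic on connect; the server never needs to know which worker owns the socket.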
OS tuning
Linux defaults are conservative. For deployments expecting more than a few hundred concurrent WebSocket connections, apply these settings.
Kernel parameters
Add to /etc/sysctl.conf and run sysctl -p:
```
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
fs.file-max = 1024000
net.netfilter.nf_conntrack_max = 262144
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_defer_accept = 5
```

- TCP Fast Open (tcp_fastopen = 3) - saves 1 RTT on reconnecting clients
- TCP Defer Accept (tcp_defer_accept = 5) - ignores port scanners and half-open probes at the kernel level
File descriptor limits
Each WebSocket connection uses one file descriptor. The default limit (1024) caps you at roughly 1000 concurrent connections regardless of CPU or memory.
Add to /etc/security/limits.conf:

```
* soft nofile 1024000
* hard nofile 1024000
root soft nofile 1024000
root hard nofile 1024000
```

The * wildcard doesn't apply to root on most distributions. If the app runs as root (common in Docker), the explicit root lines are required.
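limits.conf is applied by PAM at session start, so log out and back in before checking. A quick way to verify what child processes will inherit:

```shell
# Per-process file-descriptor limits of the current shell
ulimit -n     # soft limit - often 1024 by default
ulimit -Hn    # hard limit
```

If the soft limit still reads 1024 after re-login, the server will hit the connection ceiling described above regardless of the kernel parameters.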
Docker ulimits
```yaml
services:
  app:
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
```

Without this, each container is limited to 1024 file descriptors.
Connection management
uWebSockets.js manages connections at the C++ level:
- HTTP keepalive - idle connections close after 10 seconds (compiled into C++, not configurable)
- Slow-loris protection - connections slower than 16 KB/second are dropped before reaching your code
- WebSocket ping/pong - automatic with idleTimeout (default 120 seconds). The client store handles pong automatically.
Stress testing
Don't stress test from your local machine against a remote server - your home router's NAT table (typically 1024-4096 entries) will fill up, dropping all new connections, including your SSH session.
Symptoms: connection ceiling stuck around 1200-1900, SSH times out, other devices lose internet, server CPU barely loaded.
Run stress tests from the server itself (localhost to localhost) or from a machine on the same network.
Production checklist
- Set ORIGIN to your public URL
- Configure TLS (native or reverse proxy)
- Raise ulimits.nofile to 65536+
- Apply kernel parameters if expecting 1000+ connections
- Set CLUSTER_WORKERS=auto or use Docker replicas
- Use restart: unless-stopped in Docker Compose
- Monitor with platform.connections and process.memoryUsage()
- Handle sveltekit:shutdown for graceful cleanup
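The shutdown handler can be sketched as follows, assuming this adapter mirrors adapter-node's 'sveltekit:shutdown' process event; `closeDatabase` and `flushMetrics` are hypothetical cleanup functions:

```javascript
// Run cleanup before the process exits; the server stops accepting new
// connections first and waits up to SHUTDOWN_TIMEOUT seconds.
let cleanedUp = false;

async function closeDatabase() { /* release pooled connections */ }
async function flushMetrics()  { /* push final counters */ }

process.on('sveltekit:shutdown', async (reason) => {
  // reason is 'SIGINT', 'SIGTERM', or 'IDLE' in adapter-node's convention
  await closeDatabase();
  await flushMetrics();
  cleanedUp = true;
});
```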
For the full adapter-level reference, see Adapter Deployment.
Redis multi-instance
Use createMessage with the Redis pub/sub bus for multi-instance deployments. ctx.publish automatically goes through Redis when the platform is wrapped.
```javascript
// src/hooks.ws.js
import { createMessage } from 'svelte-realtime/server';
import { createRedis, createPubSubBus } from 'svelte-adapter-uws-extensions/redis';

const redis = createRedis();
const bus = createPubSubBus(redis);

export function open(ws, { platform }) {
  bus.activate(platform);
}

export function upgrade({ cookies }) {
  return validateSession(cookies.session_id) || false;
}

export const message = createMessage({ platform: (p) => bus.wrap(p) });
```

No changes needed in your live modules. ctx.publish delegates to whatever platform was passed in, so Redis wrapping is transparent.
Combined: Redis + rate limiting
```javascript
import { createMessage, LiveError } from 'svelte-realtime/server';
import { createRedis, createPubSubBus, createRateLimit } from 'svelte-adapter-uws-extensions/redis';

const redis = createRedis();
const bus = createPubSubBus(redis);
const limiter = createRateLimit(redis, { points: 30, interval: 10000 });

export function open(ws, { platform }) { bus.activate(platform); }
export function upgrade({ cookies }) { return validateSession(cookies.session_id) || false; }

export const message = createMessage({
  platform: (p) => bus.wrap(p),
  async beforeExecute(ws, rpcPath) {
    const { allowed, resetMs } = await limiter.consume(ws);
    if (!allowed)
      throw new LiveError('RATE_LIMITED', `Retry in ${Math.ceil(resetMs / 1000)}s`);
  }
});
```

Postgres NOTIFY
Combine live.stream with the Postgres NOTIFY bridge for zero-code reactivity. A database trigger fires pg_notify(), the bridge calls platform.publish(), and the stream auto-updates.
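On the database side this is a plain trigger. A sketch, assuming an `orders` table and the `table_changes` channel used below; table and column names are illustrative:

```sql
-- Fire a NOTIFY on every change; the payload is JSON so the bridge's
-- parse() can JSON.parse it. Note NOTIFY payloads are capped at 8000
-- bytes - send identifiers, not large rows, if you might exceed that.
CREATE OR REPLACE FUNCTION notify_table_changes() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('table_changes', json_build_object(
    'table', TG_TABLE_NAME,
    'op', TG_OP,
    'row', row_to_json(COALESCE(NEW, OLD))
  )::text);
  RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_notify
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE FUNCTION notify_table_changes();
```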
```javascript
// src/hooks.ws.js
export { message } from 'svelte-realtime/server';
import { createPgClient, createNotifyBridge } from 'svelte-adapter-uws-extensions/postgres';

const pg = createPgClient({ connectionString: process.env.DATABASE_URL });
const notify = createNotifyBridge(pg, {
  channel: 'table_changes',
  parse: (payload) => JSON.parse(payload)
});

export function open(ws, { platform }) {
  notify.activate(platform);
}
```

```javascript
// src/live/orders.js - no ctx.publish needed, the DB trigger handles it
import { live } from 'svelte-realtime/server';

export const createOrder = live(async (ctx, items) => {
  return db.orders.insert({ userId: ctx.user.id, items });
});

export const orders = live.stream('orders', async (ctx) => {
  return db.orders.forUser(ctx.user.id);
}, { merge: 'crud', key: 'id' });
```

Limits and gotchas
| Limit | Default | Notes |
|---|---|---|
| maxPayloadLength | 16 KB | RPC requests exceeding this close the connection silently. Increase in the adapter websocket config for large payloads |
| maxBackpressure | 1 MB | Messages silently dropped when the send buffer exceeds this |
| sendQueue cap | 1000 | Client-side offline queue drops the oldest entries when exceeded |
| batch() size | 50 | Client rejects before sending if exceeded. Server enforces the same limit |
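Where the table says a limit is adjustable, the adapter websocket config is the place. A sketch, assuming the adapter forwards the underlying uWebSockets.js WebSocketBehavior options under a `websocket` key (check the adapter reference for the exact option shape):

```javascript
// svelte.config.js
import adapter from 'svelte-adapter-uws';

export default {
  kit: {
    adapter: adapter({
      websocket: {
        idleTimeout: 120,             // seconds before an unresponsive socket is closed
        maxPayloadLength: 64 * 1024,  // raise the 16 KB default for larger RPC payloads
        maxBackpressure: 1024 * 1024  // buffered bytes before messages are dropped
      }
    })
  }
};
```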
ws.subscribe() vs the subscribe hook
live.stream() calls ws.subscribe(topic) server-side, bypassing the adapter’s subscribe hook entirely. Stream topics are gated by guard(), not the subscribe hook.