# Configuration

## Adapter options
```js
adapter({
  out: 'build',
  precompress: true,
  envPrefix: '',
  healthCheckPath: '/healthz',
  websocket: true // or false, or an options object
});
```

| Option | Default | Description |
|---|---|---|
| `out` | `'build'` | Output directory |
| `precompress` | `true` | Generate brotli/gzip variants of static files |
| `envPrefix` | `''` | Prefix for environment variables |
| `healthCheckPath` | `'/healthz'` | Health check endpoint (set to `false` to disable) |
| `websocket` | `false` | Enable WebSocket support (`true`, `false`, or an options object) |
## WebSocket options

Pass an object instead of `true` for fine-grained control:
```js
adapter({
  websocket: {
    path: '/ws',
    handler: './src/lib/server/websocket.js',
    maxPayloadLength: 16 * 1024,
    idleTimeout: 120,
    maxBackpressure: 1024 * 1024,
    compression: false,
    sendPingsAutomatically: true,
    upgradeTimeout: 10,
    upgradeRateLimit: 10,
    upgradeRateLimitWindow: 10,
    allowedOrigins: 'same-origin'
  }
});
```

| Option | Default | Description |
|---|---|---|
| `path` | `'/ws'` | WebSocket endpoint path |
| `handler` | auto-discover | Path to a custom handler module. Auto-discovers `src/hooks.ws.js` if omitted |
| `maxPayloadLength` | `16384` | Max message size in bytes. Connections sending larger messages are closed |
| `idleTimeout` | `120` | Seconds of inactivity before the connection is closed |
| `maxBackpressure` | `1048576` | Max bytes of backpressure per connection (1 MB). Lower this for many slow consumers |
| `compression` | `false` | Per-message deflate compression |
| `sendPingsAutomatically` | `true` | Keep-alive pings |
| `upgradeTimeout` | `10` | Seconds before an async upgrade handler is rejected with 504. Set to `0` to disable |
| `upgradeRateLimit` | `10` | Max WebSocket upgrade requests per IP per window. Prevents connection-flood attacks |
| `upgradeRateLimitWindow` | `10` | Sliding window size in seconds |
| `allowedOrigins` | `'same-origin'` | `'same-origin'`, `'*'`, or an array of allowed origins |
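The three `allowedOrigins` modes can be sketched in plain JavaScript. This is an illustration of the documented semantics, not the adapter's actual code, and `checkOrigin` is an invented name:

```js
// Sketch of the documented allowedOrigins semantics:
//   '*'            -> any origin is accepted
//   'same-origin'  -> the Origin header's host must match the request Host
//   array          -> the origin must appear in the list verbatim
function checkOrigin(allowedOrigins, originHeader, requestHost) {
  if (allowedOrigins === '*') return true;
  if (!originHeader) return false; // no Origin header: reject
  if (allowedOrigins === 'same-origin') {
    return new URL(originHeader).host === requestHost;
  }
  return allowedOrigins.includes(originHeader);
}
```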
## Environment variables

All variables are read at runtime (`node build`), not at build time. If you set `envPrefix: 'MY_APP_'`, every variable name gains that prefix (e.g. `MY_APP_PORT`).
| Variable | Default | Description |
|---|---|---|
| `HOST` | `0.0.0.0` | Bind address |
| `PORT` | `3000` | Listen port |
| `ORIGIN` | (derived) | Fixed origin (e.g. `https://example.com`) |
| `SSL_CERT` | - | Path to TLS certificate |
| `SSL_KEY` | - | Path to TLS private key |
| `PROTOCOL_HEADER` | - | Protocol detection header (e.g. `x-forwarded-proto`) |
| `HOST_HEADER` | - | Host detection header (e.g. `x-forwarded-host`) |
| `PORT_HEADER` | - | Port override header (e.g. `x-forwarded-port`) |
| `ADDRESS_HEADER` | - | Client IP header (e.g. `x-forwarded-for`) |
| `XFF_DEPTH` | `1` | Position from the right in `X-Forwarded-For` |
| `BODY_SIZE_LIMIT` | `512K` | Max request body (supports `K`, `M`, `G` suffixes) |
| `SHUTDOWN_TIMEOUT` | `30` | Graceful shutdown wait in seconds |
| `CLUSTER_WORKERS` | - | Worker threads (`auto` for CPU count) |
| `CLUSTER_MODE` | (auto) | `reuseport` (Linux) or `acceptor` (other platforms) |
| `WS_DEBUG` | - | Set to `1` for structured WebSocket debug logging |
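The `K`/`M`/`G` suffix convention for `BODY_SIZE_LIMIT` can be illustrated with a small parser. This is a sketch of the documented convention (powers of 1024 assumed), not the adapter's code, and `parseSizeLimit` is an invented name:

```js
// Parse values like '512K', '10M', '1G' into a byte count.
// A bare number is taken as bytes.
function parseSizeLimit(value) {
  const match = /^(\d+)([KMG])?$/i.exec(value);
  if (!match) throw new Error(`invalid size limit: ${value}`);
  const units = { K: 1024, M: 1024 ** 2, G: 1024 ** 3 };
  const factor = match[2] ? units[match[2].toUpperCase()] : 1;
  return Number(match[1]) * factor;
}
```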
**Important:** `PROTOCOL_HEADER`, `HOST_HEADER`, `PORT_HEADER`, and `ADDRESS_HEADER` are trusted verbatim. Only set these behind a reverse proxy that overwrites them on every request. If the server is directly internet-facing, use a fixed `ORIGIN` instead.
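`XFF_DEPTH` counts from the right of `X-Forwarded-For` because only the entries appended by your own proxies are trustworthy; anything further left was supplied by the client. A sketch of that lookup (illustrative only, `clientIp` is an invented name):

```js
// Pick the client IP from an X-Forwarded-For value, counting from the
// right. With one trusted proxy (XFF_DEPTH=1) take the last entry; with
// two chained proxies (XFF_DEPTH=2) take the second-to-last, and so on.
function clientIp(xff, depth) {
  const entries = xff.split(',').map((s) => s.trim());
  return entries[entries.length - depth];
}
```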
## Examples
```sh
# Simple HTTP
node build

# Custom port
PORT=8080 node build

# Behind nginx
ORIGIN=https://example.com node build

# Behind a proxy with forwarded headers
PROTOCOL_HEADER=x-forwarded-proto HOST_HEADER=x-forwarded-host node build

# Native TLS
SSL_CERT=./cert.pem SSL_KEY=./key.pem node build

# Everything at once
SSL_CERT=./cert.pem SSL_KEY=./key.pem PORT=443 BODY_SIZE_LIMIT=10M SHUTDOWN_TIMEOUT=60 node build
```

## TypeScript setup
Add the platform type to `src/app.d.ts`:

```ts
import type { Platform as AdapterPlatform } from 'svelte-adapter-uws';

declare global {
  namespace App {
    interface Platform extends AdapterPlatform {}
  }
}

export {};
```

Now `event.platform.publish()`, `event.platform.topic()`, etc. are fully typed.
## Vite plugin

The Vite plugin is required when using WebSockets. It does two things:

- **Dev mode** - spins up a `ws` WebSocket server alongside Vite's dev server, so `event.platform` and the client store work identically to production
- **Production builds** - runs your `hooks.ws` file through Vite's pipeline so `$lib`, `$env`, and `$app` imports resolve correctly

Without it, `event.platform` won't work in dev, and your `hooks.ws` file won't be able to import from `$lib` or use `$env` variables.
```js
// vite.config.js
import { sveltekit } from '@sveltejs/kit/vite';
import uws from 'svelte-adapter-uws/vite';

export default {
  plugins: [sveltekit(), uws()]
};
```

Changes to your `hooks.ws` file are picked up automatically - the plugin reloads the handler on save and closes existing connections so they reconnect with the new code. No dev server restart needed.

**Note:** The dev server does not enforce `allowedOrigins`. Origin checks only run in production. A warning is logged at startup as a reminder.
## Health check endpoint

The adapter exposes a health check at `/healthz` by default. Set `healthCheckPath` to a different path, or to `false` to disable it:

```js
adapter({
  healthCheckPath: '/healthz' // default
  // healthCheckPath: false   // disable
});
```

## Backpressure and connection limits
These options control how the server handles misbehaving or slow clients at the WebSocket level:
`maxPayloadLength` (default: 16 KB) - the maximum size of a single incoming WebSocket message. If a client sends a message larger than this, uWS closes the connection immediately (not just the message - the entire connection is dropped). Set this based on the largest message your application expects to receive.
`maxBackpressure` (default: 1 MB) - the per-connection outbound send buffer. When a client reads slower than the server writes, messages queue up in this buffer. Once it overflows, subsequent `send()` and `publish()` calls for that connection silently drop the message. The `drain` hook fires when the buffer empties again. Lower this if you expect many slow consumers, to avoid per-connection memory bloat.
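The drop-on-overflow behavior can be modeled in a few lines of plain JavaScript. This is a toy model of the documented semantics, not uWS code; all names here are invented:

```js
// Toy model: outbound messages accumulate up to maxBackpressure bytes,
// further sends are silently dropped, and a drain callback fires once
// the client catches up and the buffer empties.
class SlowClientBuffer {
  constructor(maxBackpressure, onDrain) {
    this.max = maxBackpressure;
    this.buffered = 0;
    this.onDrain = onDrain;
  }
  send(message) {
    if (this.buffered + message.length > this.max) return false; // dropped
    this.buffered += message.length;
    return true;
  }
  // Simulate the client reading n bytes off the wire.
  read(n) {
    const wasBuffered = this.buffered > 0;
    this.buffered = Math.max(0, this.buffered - n);
    if (wasBuffered && this.buffered === 0) this.onDrain();
  }
}
```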
`upgradeRateLimit` (default: 10 per 10 s window) - sliding-window rate limit on WebSocket upgrade requests per client IP. Clients exceeding the limit get a `429 Too Many Requests` response. The IP rate map is capped at 10,000 entries with LRU eviction by activity score, so sustained connection floods from many IPs don't cause unbounded memory growth.
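A sliding-window limiter of this shape can be sketched as follows. This is illustrative only (it omits the 10,000-entry LRU cap the adapter applies), and `createUpgradeLimiter` is an invented name:

```js
// Allow at most `limit` upgrade attempts per IP inside a sliding window.
// Timestamps older than the window are pruned on each check.
function createUpgradeLimiter(limit, windowSeconds) {
  const hits = new Map(); // ip -> array of attempt timestamps (ms)
  return function allow(ip, now = Date.now()) {
    const cutoff = now - windowSeconds * 1000;
    const recent = (hits.get(ip) ?? []).filter((t) => t > cutoff);
    if (recent.length >= limit) {
      hits.set(ip, recent);
      return false; // caller responds with 429
    }
    recent.push(now);
    hits.set(ip, recent);
    return true;
  };
}
```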
## Static file behavior

All static assets (from the `client/` and `prerendered/` output directories) are loaded once at startup and served directly from RAM. Each response automatically includes:
- `Content-Type` - detected from the file extension
- `Vary: Accept-Encoding` - required for correct CDN/proxy caching
- `Accept-Ranges: bytes` - enables partial content requests
- `X-Content-Type-Options: nosniff` - prevents MIME-type sniffing
- `ETag` - derived from modification time and size; enables `304 Not Modified`
- `Cache-Control: public, max-age=31536000, immutable` - for versioned assets under `/_app/immutable/`
- `Cache-Control: no-cache` - for all other assets (forces ETag revalidation)
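An mtime-and-size ETag of the kind listed above can be sketched in one function. The exact tag format the adapter emits may differ; `makeEtag` is an invented name:

```js
// Validator derived from modification time and size: the same build on
// disk always produces the same tag, so 304 revalidation survives
// server restarts without hashing file contents.
function makeEtag(mtimeMs, size) {
  return `"${mtimeMs.toString(16)}-${size.toString(16)}"`;
}
```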
**Range requests (HTTP 206):** Single byte ranges are supported (`bytes=0-499`, `bytes=-500`, `bytes=500-`). Multi-range requests are served as full `200` responses. Unsatisfiable ranges return `416`. When a `Range` header is present, the response is always served uncompressed so byte offsets are correct. The `If-Range` header is respected.
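Resolving the three supported single-range forms against a file size can be sketched as below. This is illustrative only (a real server also distinguishes multi-range input, served as `200`, from unsatisfiable input, answered with `416`); `parseRange` is an invented name:

```js
// Resolve 'bytes=0-499', 'bytes=-500' and 'bytes=500-' against a file
// of `size` bytes. Returns { start, end } inclusive, or null when the
// header is not a satisfiable single range.
function parseRange(header, size) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header);
  if (!m || (m[1] === '' && m[2] === '')) return null;
  let start, end;
  if (m[1] === '') {
    // suffix form: the last N bytes of the file
    start = Math.max(0, size - Number(m[2]));
    end = size - 1;
  } else {
    start = Number(m[1]);
    end = m[2] === '' ? size - 1 : Math.min(Number(m[2]), size - 1);
  }
  return start > end || start >= size ? null : { start, end };
}
```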
**Binary downloads:** Files with extensions like `.zip`, `.tar`, `.exe`, `.dmg`, `.pkg`, `.deb`, `.apk`, `.iso`, `.img`, and `.bin` automatically receive `Content-Disposition: attachment`, so browsers prompt a download dialog.
**Precompression:** If `precompress: true`, brotli (`.br`) and gzip (`.gz`) variants are loaded at startup and served when the client's `Accept-Encoding` includes `br` or `gzip`. A precompressed variant is only used when it is smaller than the original.
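Variant selection under that rule can be sketched as a small function. This is illustrative only (it ignores `Accept-Encoding` quality values); `pickVariant` is an invented name:

```js
// Choose the best precompressed variant: prefer brotli over gzip, but
// only when the client accepts it and the variant is actually smaller
// than the uncompressed original.
function pickVariant(acceptEncoding, sizes) {
  // sizes: { identity, br?, gz? } in bytes; missing variants stay unused
  const accepts = acceptEncoding.split(',').map((s) => s.trim().split(';')[0]);
  if (accepts.includes('br') && sizes.br < sizes.identity) return 'br';
  if (accepts.includes('gzip') && sizes.gz < sizes.identity) return 'gzip';
  return 'identity';
}
```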
## Graceful shutdown

On `SIGTERM` or `SIGINT`, the server:

- Stops accepting new connections
- Waits for in-flight SSR requests to complete (up to `SHUTDOWN_TIMEOUT` seconds)
- Emits a `sveltekit:shutdown` event on `process`
- Exits

```js
process.on('sveltekit:shutdown', async (reason) => {
  console.log(`Shutting down: ${reason}`);
  await db.close();
});
```

## Deployment
- **Docker:** Use `node:22-trixie-slim` (glibc >= 2.38 is required by the uWS native addon)
- **Build:** `npm run build`, then `node build`
- **Clustering:** Set `CLUSTER_WORKERS=auto` for multi-core; uses `SO_REUSEPORT` on Linux
- **OS tuning:** Increase the `nofile` ulimit to 65536+ for high connection counts
```sh
# Production with clustering and TLS
SSL_CERT=./cert.pem SSL_KEY=./key.pem CLUSTER_WORKERS=auto node build
```