What’s new in 0.5
0.5 is a substantial release. Most of the value is invisible (security defaults, capacity bounds, production assertions), but the surface-level additions cover real production needs that 0.4 left to hand-rolling.
If you are upgrading from 0.4, see Upgrade Quickstart for the 5-minute changelog. This page is the feature tour.
Streaming uploads
live.upload() is a first-class streaming primitive. The handler consumes chunks via for await over ctx.stream; the client paces sends against the WebSocket buffered amount; the server auto-discovers the adapter’s maxPayloadLength and right-sizes chunks. Includes mid-stream re-auth via reauthEvery, abortable handles with progress events, and per-upload + aggregate buffer caps.
export const avatar = live.upload(async (ctx, name, mime) => {
const writer = await storage.createWriter(`avatars/${ctx.user.id}/${name}`, { mime });
for await (const chunk of ctx.stream) {
if (ctx.signal.aborted) break;
await writer.write(chunk);
}
await writer.close();
return { path: writer.path };
}, { maxSize: 25 * 1024 * 1024, reauthEvery: 16 * 1024 * 1024 });
In 0.4 you used live.binary and chunked uploads by hand against maxPayloadLength.
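For intuition about the pacing half of this, the client-side loop can be approximated as follows. This is a hypothetical sketch, not the bundled client: pacedSend, HIGH_WATER, and the 10 ms poll interval are illustrative, and ws only needs send and the standard WebSocket bufferedAmount property.

```javascript
// Illustrative pacing loop: hold the next chunk until the socket's outgoing
// buffer drains below a high-water mark, so the sender never outruns the wire.
const HIGH_WATER = 64 * 1024; // hypothetical threshold

async function pacedSend(ws, chunks) {
  for (const chunk of chunks) {
    // Back off while the WebSocket's outgoing buffer is above the mark.
    while (ws.bufferedAmount > HIGH_WATER) {
      await new Promise((resolve) => setTimeout(resolve, 10));
    }
    ws.send(chunk);
  }
}
```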
Cluster primitives
The extensions package added 11 cluster-grade primitives so you can run multi-instance production without hand-rolling distributed concerns:
- createLeader - Redis-lease leader election. Plug into configureCron({ leader }) for one-cron-per-cluster instead of N-crons-per-N-workers.
- createDistributedLock - cluster-wide mutex with withLock(key, fn, { maxWaitMs }). Same API shape as the in-process adapter lock.
- createDistributedSession - Redis-backed session store with sliding TTL.
- createConnectionRegistry - cluster-aware registry that backs live.push / live.notify across instances.
- createTaskRunner - durable Postgres-backed task runner with caller-retry idempotency, worker-crash recovery, and external-service idempotency.
- createJobQueue - Postgres FOR UPDATE SKIP LOCKED queue.
- createIdempotencyStore - three-state acquire (acquired/pending/result) for both Redis and Postgres.
- createShardedBus - SPUBLISH/SSUBSCRIBE sharded pub/sub (Redis 7+).
- createFunctionLibrary - Redis Functions wrapper (Redis 7+).
- createAdmissionControl - per-class message-tier admission rules.
- createPublishRateAggregator - cluster-wide top-publisher view.
In 0.4 these did not exist; running multi-instance meant hand-rolling each one.
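To make the shared withLock(key, fn, { maxWaitMs }) shape concrete, here is a minimal in-process sketch of a per-key FIFO mutex. It is illustrative only, not the library's implementation; the tails map and baton-passing scheme are assumptions of this sketch.

```javascript
// Per-key FIFO mutex sketch: each key holds a promise chain; a caller waits
// for the previous holder, runs fn, then releases. maxWaitMs bounds the wait.
const tails = new Map(); // key -> promise that resolves when the holder releases

async function withLock(key, fn, { maxWaitMs = 5000 } = {}) {
  const prev = tails.get(key) ?? Promise.resolve();
  let release;
  const current = new Promise((resolve) => { release = resolve; });
  tails.set(key, current);

  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('lock wait exceeded')), maxWaitMs);
  });
  try {
    await Promise.race([prev, timeout]);
  } catch (err) {
    prev.then(release); // pass the baton so later waiters are not stranded
    throw err;
  } finally {
    clearTimeout(timer);
  }
  try {
    return await fn();
  } finally {
    release();
    if (tails.get(key) === current) tails.delete(key);
  }
}
```

The cluster-wide variant replaces the in-memory chain with a Redis lease, but the call shape stays the same.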
Lifecycle hooks
hooks.ws.js now supports init({ platform }) and shutdown({ platform }) for once-per-worker setup and teardown:
export async function init({ platform }) {
setCronPlatform(platform);
live.configurePush({ remoteRegistry: registry });
await bus.activate(platform);
}
export async function shutdown() {
await leader.stop();
await bus.deactivate();
}
start() awaits init before declaring the worker ready, which eliminates the boot-to-first-connect window where cron ticks fired into a no-op platform and live.push couldn’t reach cross-instance users.
In 0.4 you stuffed all of this into the open hook on first connect and hoped nothing depended on platform before someone opened a WebSocket.
Session resume protocol
Reconnects are no longer full restarts. The server stamps a per-connection sessionId and announces it via {type:'welcome', sessionId}. The client persists it in sessionStorage and presents the prior id plus per-topic lastSeenSeqs on reconnect; the server replies {type:'resumed'} once the optional resume hook has been awaited.
Pair with the replay extension’s resumeHook() for cluster-wide gap-free reconnection:
export const resume = replay.resumeHook();
In 0.4 reconnects refetched everything; clients saw flicker on busy boards.
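The gap-replay idea behind resume can be sketched with a bounded per-topic buffer of broadcast frames: on reconnect, replay everything newer than the client's lastSeenSeq. The buffer size, record, and replaySince names here are illustrative, not the extension's API.

```javascript
// Per-topic replay buffer sketch: record stamps a monotonic per-topic seq and
// keeps the last BUFFER_LIMIT frames; replaySince returns the gap on resume.
const BUFFER_LIMIT = 256; // hypothetical bound
const buffers = new Map(); // topic -> [{ seq, data }]
const seqs = new Map();    // topic -> last assigned seq

function record(topic, data) {
  const next = (seqs.get(topic) ?? 0) + 1;
  seqs.set(topic, next);
  const buf = buffers.get(topic) ?? [];
  buf.push({ seq: next, data });
  if (buf.length > BUFFER_LIMIT) buf.shift(); // saturate: drop oldest
  buffers.set(topic, buf);
  return next;
}

function replaySince(topic, lastSeenSeq) {
  const buf = buffers.get(topic) ?? [];
  return buf.filter((frame) => frame.seq > lastSeenSeq);
}
```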
Time-windowed aggregates
live.aggregate({ windows }) supports lifetime, tumbling, and sliding windows. Window boundaries respect IANA timezones via Intl.DateTimeFormat, so DST transitions and timezone offsets behave correctly. Built-in combiners (combineSum, combineMax, combineMin, combineCounts, combineMerge) handle the sliding-window fold.
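For intuition, the sliding-window fold reduces to filtering time buckets inside the window and folding them with a combiner. This is a conceptual sketch under assumed shapes: combineSum here is a stand-in for the built-in, and the { ts, value } bucket layout is illustrative.

```javascript
// Sliding-window fold sketch: keep only buckets inside [now - durationMs, now]
// and fold their values with a sum combiner.
const combineSum = (acc, value) => acc + value;

function slidingValue(buckets, now, durationMs) {
  return buckets
    .filter((b) => b.ts > now - durationMs)
    .reduce((acc, b) => combineSum(acc, b.value), 0);
}
```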
export const requestStats = live.aggregate('requests', reducers, {
topic: 'requests-stats',
windows: {
lifetime: { type: 'lifetime' },
today: { type: 'tumbling', period: 'daily', tz: 'America/New_York' },
last5min: { type: 'sliding', durationMs: 5 * 60_000, slideMs: 30_000 }
}
});
Six-field cron expressions
live.cron gained sub-minute granularity via an optional leading seconds field:
live.cron('*/10 * * * * *', 'tick', async () => { /* every 10 seconds */ });
Once any 6-field schedule registers, the cron tick adapts from 60s to 1 Hz. 5-field schedules still fire only at second :00.
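A seconds field like '*/10' is matched per 1 Hz tick much like any other cron field. The matcher below is a hypothetical sketch, not the library's parser; it handles only the '*', '*/n', and comma-list forms, and a 5-field schedule behaves as if the field were '0'.

```javascript
// Match a cron seconds field against the current second (0-59).
function matchesSeconds(field, second) {
  if (field === '*') return true;                                    // every second
  if (field.startsWith('*/')) return second % Number(field.slice(2)) === 0; // step
  return field.split(',').some((part) => Number(part) === second);   // list
}
```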
configureCron({ leader, bus }) (details) wires cluster-wide one-firing-per-tick via leader election and bus fan-out.
New platform surface
platform gained 12+ new methods covering server-initiated request/reply, authorized server-side subscribe, backpressure-aware primitives, and cluster-relay batching:
- platform.subscribe(ws, topic) / platform.checkSubscribe(ws, topic) - server-initiated subscribe that routes through the user’s subscribe hook for authorization.
- platform.request(ws, event, data, opts?) - server-to-client request/reply with timeout.
- platform.publishBatched(messages) - one wire frame per affected subscriber, with optional per-event coalesceKey collapsing.
- platform.sendCoalesced(ws, { key, topic, event, data }) - per-connection send with coalesce-by-key.
- platform.pressure / platform.onPressure(cb) - worker-local backpressure signal with 'MEMORY' | 'PUBLISH_RATE' | 'SUBSCRIBERS' precedence.
- platform.onPublishRate(cb) - per-topic publish-rate detection.
- platform.requestId - UUID per HTTP/WS connection for cross-layer log correlation.
- platform.maxPayloadLength / platform.bufferedAmount(ws) - backpressure-aware primitives for uploads and other paced senders.
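The coalesceKey collapsing used by the batched senders amounts to: within one batch, a later message with the same key replaces the earlier one, so each subscriber sees at most one frame per key. A minimal sketch (the coalesce helper is illustrative, not a platform method):

```javascript
// Collapse a batch by coalesceKey: later entries replace earlier ones in place,
// preserving first-seen order; messages without a key pass through untouched.
function coalesce(messages) {
  const byKey = new Map(); // coalesceKey -> index in out
  const out = [];
  for (const msg of messages) {
    if (msg.coalesceKey == null) { out.push(msg); continue; }
    if (byKey.has(msg.coalesceKey)) {
      out[byKey.get(msg.coalesceKey)] = msg; // newer payload wins
    } else {
      byKey.set(msg.coalesceKey, out.length);
      out.push(msg);
    }
  }
  return out;
}
```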
Capacity model and production assertions
Every internal Map / Set has an explicit upper bound with documented saturation behavior. Constants are importable for tests and operator overrides:
import {
MAX_PRESENCE_REF,
MAX_PUSH_REGISTRY,
MAX_AGGREGATE_BUCKETS
} from 'svelte-realtime/server';
Production assertions track invariant violations as Prometheus counters (svelte_realtime_assertion_violations_total{category} and extensions_assertion_violations_total{category}). A non-zero rate is a framework bug, not an app bug.
See Architecture - Capacity model and Production Limits.
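The capacity idea reduces to a map that refuses inserts at its limit instead of growing without bound. A sketch, assuming a reject-at-saturation policy (BoundedMap and MAX_ENTRIES are illustrative, not exports):

```javascript
// Bounded map sketch: set returns false when the map is saturated and the key
// is new; overwriting an existing key is always allowed.
const MAX_ENTRIES = 10_000; // hypothetical bound

class BoundedMap {
  constructor(limit = MAX_ENTRIES) {
    this.limit = limit;
    this.map = new Map();
  }
  set(key, value) {
    if (!this.map.has(key) && this.map.size >= this.limit) {
      return false; // saturated: caller decides how to degrade
    }
    this.map.set(key, value);
    return true;
  }
  get(key) { return this.map.get(key); }
  get size() { return this.map.size; }
}
```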
Security defaults
0.5 closes multiple latent fail-open bugs from 0.4 and tightens defaults across the board:
- Async access predicates correctly deny (was silently allowing every request).
- live.idempotent cache key is namespaced by RPC path (closes cross-RPC cache replay).
- Wire-level subscribes to __-prefixed system topics rejected by default. ctx.publish('__*') throws (closes framework-internal-frame spoofing).
- Wire-topic accept set restricted to printable ASCII (closes BiDi spoofing, zero-width chars).
- /__ws/auth POST requires x-requested-with / Sec-Fetch-Site / matching Origin (CSRF defense).
- Dynamic compression skipped for credentialed responses (BREACH defense).
- Refuse to start on same-origin policy without host pin.
- Cookie path/domain validated against same char class as values.
- SSR dedup cache key includes base_origin (closes virtual-hosting cross-tenant leak).
Bus envelope validation, replay subscribe-authorization, and Redis URL password redaction round out the extensions side.
Each default is opt-out for apps that need legacy behavior; see Adapter Configuration -> Security flags.
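For illustration, the printable-ASCII accept set for wire topics could be approximated as below. This is a sketch of the idea, not the library's exact rule set: the 0x21-0x7E range and the combined __-prefix check are assumptions of this example.

```javascript
// Accept only non-empty strings of printable ASCII (no spaces, no control or
// non-ASCII characters, so no BiDi or zero-width spoofing) and reject the
// reserved __ system prefix.
function isAcceptableWireTopic(topic) {
  if (typeof topic !== 'string' || topic.length === 0) return false;
  if (topic.startsWith('__')) return false; // reserved system prefix
  return /^[\x21-\x7e]+$/.test(topic);
}
```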
Adapter plugins
Three new bundled adapter plugins for in-process use:
- createLock - per-key FIFO mutex with bounded wait.
- createSession - in-process session store with sliding TTL.
- createDedup - in-process dedup with fixed-window TTL.
Each has a matching cluster-wide variant in extensions (createDistributedLock, createDistributedSession, createIdempotencyStore). Same API shape; swap the import to switch scopes.
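Fixed-window TTL dedup means keys are remembered for the window in which they were first seen, and the whole bucket resets at the window boundary. A minimal sketch under those assumptions (makeDedup is illustrative, not the createDedup API):

```javascript
// Fixed-window dedup sketch: one Set per window; a rollover to a new window
// discards the previous bucket wholesale.
function makeDedup(windowMs) {
  let windowStart = 0;
  let seen = new Set();
  return function isDuplicate(key, now = Date.now()) {
    const start = now - (now % windowMs); // align to the window boundary
    if (start !== windowStart) {          // window rolled over: reset the bucket
      windowStart = start;
      seen = new Set();
    }
    if (seen.has(key)) return true;
    seen.add(key);
    return false;
  };
}
```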
Wire format improvements
- Five-state connection status (connecting/open/suspended/disconnected/failed) instead of three. suspended covers backgrounded tabs; failed is terminal.
- Subscribe denial protocol. Each subscribe carries a numeric ref; the server replies {type:'subscribed', topic, ref} or {type:'subscribe-denied', topic, ref, reason}. Reasons land on the client’s denials readable.
- presence_state / presence_diff wire shape. Microtask-batched diffs replace the five-event format. Bundled clients handle this transparently.
- Microtask-batched initial subscribes. N same-tick subscribes coalesce into one subscribe-batch frame.
- Per-topic monotonic seq on every broadcast. Foundation for session resume.
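The ref bookkeeping on the client side of the denial protocol can be sketched as a pending map keyed by ref: each subscribe allocates a ref and a promise, and the matching reply settles it. subscribe, handleFrame, and the frame shapes beyond those named above are illustrative, not the bundled client's API.

```javascript
// Client-side ref matching sketch for the subscribe denial protocol.
let nextRef = 0;
const pending = new Map(); // ref -> { resolve, reject }

function subscribe(send, topic) {
  const ref = ++nextRef;
  send({ type: 'subscribe', topic, ref });
  return new Promise((resolve, reject) => pending.set(ref, { resolve, reject }));
}

function handleFrame(frame) {
  const entry = pending.get(frame.ref);
  if (!entry) return; // unknown or already-settled ref
  pending.delete(frame.ref);
  if (frame.type === 'subscribed') entry.resolve(frame.topic);
  else entry.reject(new Error(`subscribe denied: ${frame.reason}`));
}
```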
TypeScript transport for SSR errors
realtimeTransport() from svelte-realtime/hooks registers serialization for RpcError and LiveError across the SvelteKit SSR / client boundary so typed errors arrive at +error.svelte with their class and code intact:
// src/hooks.js
import { realtimeTransport } from 'svelte-realtime/hooks';
export const transport = realtimeTransport();
Opt-in: if you do not catch errors by instanceof or err.code in your error boundary, you do not need it.
Where to go next
- Quick Start - new project in 30 seconds.
- Manual Setup - add 0.5 to an existing SvelteKit project.
- Upgrade Quickstart - 0.4 -> 0.5 in 5 minutes.
- Migration 0.4 to 0.5 - full upgrade guide.
- Architecture - the technical tour, top to bottom.