Common gotchas
Things in svelte-realtime that look like they should work but don’t, or behaviors that differ from other realtime libraries enough to trip you up the first time. None of these are bugs - each one is documented in MIGRATION.md or the relevant feature page. This is the consolidated reference so you can search for the symptom and find the explanation.
If you’re upgrading from 0.4, see also the Upgrade Quickstart.
Auth and access
Async sub-predicates inside live.access.any / all are awaited (since 0.5)
```js
const access = live.access.any(
  live.access.role({ admin: true }),
  async (ctx) => await checkSomething(ctx)
);
```

Pre-0.5 this composition silently allowed every request because Array.prototype.some / every read a Promise&lt;false&gt; as truthy and short-circuited to allow. As of 0.5 both helpers return Promise&lt;boolean&gt; and await each sub-predicate in order, so async denies correctly deny.
If you’re upgrading and a previously-permissive predicate now denies, that’s the framework doing its job - audit the deny path.
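For reference, the plain-JavaScript behavior behind the old failure mode:

```js
// An async predicate returns a Promise; Array.prototype.some never awaits it and
// only checks truthiness, so even a Promise<false> counts as a match.
const deny = async () => false;
console.log([deny].some((pred) => pred())); // true - pre-0.5 this meant "allow"
```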
ctx.publish('__system:...', ...) throws
```js
ctx.publish('__signal:userId', 'data', payload); // throws LiveError('INVALID_TOPIC')
```

Topics starting with __ are reserved for framework-internal frames (__signal:, __presence:, __replay:, __group:, __rpc). Apps could previously spoof these via ctx.publish; 0.5 closes that. If you genuinely need to broadcast on a system-prefixed topic (rare), reach for ctx.platform.publish(...) directly - the intent is explicit at the call site.
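A minimal sketch of that escape hatch, assuming ctx.platform.publish takes the same (topic, event, payload) arguments as ctx.publish (check the platform docs for the exact signature):

```js
// Deliberate system-prefixed broadcast - going through the platform object makes
// the intent explicit and bypasses the application-level INVALID_TOPIC guard.
await ctx.platform.publish('__signal:userId', 'data', payload);
```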
Default select on presence / cursor strips sensitive keys
```js
// upgrade() returns { id, name, role, token }
// presence broadcasts: { id, name, role } - 'token' is stripped automatically
```

The default select on createPresence and createCursor strips keys starting with __ and keys matching /token|secret|password|auth|session|cookie|jwt|credential/i. The framework also warns once per process if userData contains any of these keys, even if they’re stripped.
To intentionally include a sensitive field, pass an explicit select:
```js
createPresence(redis, { select: (ud) => ({ id: ud.id, token: ud.token }) });
```

The warning still fires, but it’s now informational rather than actionable.
live.idempotent keyFrom must encode tenant scope
```js
// Two tenants, same item ID, same key -> they share a cache slot:
live.idempotent({ keyFrom: (ctx, itemId) => itemId }, handler);

// Encode tenant explicitly:
live.idempotent({ keyFrom: (ctx, itemId) => `${ctx.user.tenantId}:${itemId}` }, handler);
```

0.5 namespaces the cache key by RPC path automatically ('rpc:' + path + ':' + userKey) but cannot guess your tenant shape. If your app has tenant isolation, encode it in keyFrom yourself.
Connection states
$status === 'closed' doesn’t exist
```svelte
{#if $status === 'closed'} <!-- never fires in 0.5 -->
```

0.5 split the single 'closed' state into three: 'disconnected' (transient, will retry), 'failed' (terminal: auth denied, max retries, or close() called), and 'suspended' (open but tab backgrounded). Pick the right one for your case:
- “Connection lost, reconnecting” -> $status === 'disconnected'
- “Gave up trying” -> $status === 'failed'
- “Anything not connected” -> $status === 'failed' || $status === 'disconnected'
There is no automatic back-compat shim. The 5-state model is more informative on purpose.
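A sketch of a banner built on these states (the import path is illustrative - use whatever your client setup actually exports for the status store):

```svelte
<script>
  // Illustrative import - the status store lives wherever your app exposes it.
  import { status } from '$live/client';
</script>

{#if $status === 'disconnected'}
  <p>Connection lost - reconnecting…</p>
{:else if $status === 'failed'}
  <p>Connection closed. Reload the page to reconnect.</p>
{/if}
```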
hooks.ws.js must export unsubscribe
```js
// 0.4 was fine with this:
export { message, close } from 'svelte-realtime/server';

// 0.5 requires this:
export { message, close, unsubscribe } from 'svelte-realtime/server';
```

If you forget unsubscribe, the app still runs, but presence and replay state for topics the client drops will leak until the connection fully closes. Symptoms: stale users in presence rosters, silent topic warnings after page navigation.
Spinner hangs forever after browser-back to a { replay: true } stream (fixed in 0.5.1)
```svelte
<!-- src/routes/board/[id]/+page.svelte -->
<script>
  import { board } from '$live/collab';
  const data = board.data(params.id);
</script>

{#if $data === undefined}
  <Spinner /> <!-- pre-0.5.1: never resolves on browser-back -->
{:else}
  ...
{/if}
```

Symptom: open a page that subscribes to a dynamic-topic stream with replay: true, navigate away (page unmount triggers the stream’s deferred cleanup), then browser-back to the same page. Pre-0.5.1, the cached store’s first subscribe sent the prior session’s seq to the server; the server treated it as a session-resume, platform.replay.since(topic, seq) returned [] (nothing happened on the topic during the unmount window), and the response branch called store.set(currentValue) against a currentValue that cleanup had reset to undefined - the spinner re-rendered and never resolved.
Fixed in svelte-realtime@0.5.1 - cleanup() now resets the session-resume cursors (_lastSeq, _lastVersion, _schemaVersion, _cursor, _hasMore, _loadingMore) alongside currentValue, so the next subscribe is a fresh load instead of a hung session-resume. In-session WebSocket reconnects do NOT go through cleanup(), so the gap-fill replay path for sleep/wake and transient drops is preserved.
If you see this symptom, upgrade svelte-realtime to ^0.5.1.
Stream errors live on .error, not on the data value
```svelte
<!-- doesn't work in 0.4.21+ -->
{#if $messages?.error}
  <p>{$messages.error.message}</p>
{/if}

<!-- correct: -->
<script>
  const err = messages.error;
</script>

{#if $err}
  <p>{$err.message}</p>
{/if}
```

The stream store value always holds your data type (or undefined while loading). Errors and status live on separate Readable stores, so $messages?.filter(...) never crashes with TypeError.
Uploads
live.binary vs live.upload - pick the right one
| You want… | Use |
|---|---|
| One-shot ArrayBuffer payload, under ~10 MB, no progress UX | live.binary |
| Streaming multi-MB / GB uploads, progress UX, mid-stream re-auth, cancellation | live.upload |
live.binary is simpler. live.upload adds backpressure, chunked transfer, abort signals, progress events, and the reauthEvery option for long-running uploads where session revocation matters. They’re not competing - one is the simple primitive, the other is the streaming primitive.
Upload listeners run on every chunk-progress event
```js
const handle = uploadAvatar(file, name, mime);
handle.on('progress', () => {
  // fires every chunk - up to ~1000x/sec for fast disks
  expensiveUIUpdate();
});
```

The progress event fires per-chunk, not per-percent. For a 100 MB upload at ~943 KB chunks (the auto-discovered default against the adapter’s 1 MB maxPayloadLength), that’s ~107 progress events. For local-disk-fast sources, they may all fire inside the same frame. Throttle the UI side if needed.
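One way to coalesce that burst into at most one UI update per animation frame (the event argument and expensiveUIUpdate are placeholders - adapt to whatever your handler actually receives):

```js
let latest;                // most recent progress event
let frameScheduled = false;

handle.on('progress', (event) => {
  latest = event;
  if (frameScheduled) return;
  frameScheduled = true;
  requestAnimationFrame(() => {
    frameScheduled = false;
    expensiveUIUpdate(latest); // at most ~60 updates/sec, regardless of chunk rate
  });
});
```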
Lifecycle and platform capture
setCronPlatform(platform) in open() works, but init({ platform }) is the right place
```js
// Works but fires only after first WS connection (boot-to-first-connect window):
export function open(ws, { platform }) {
  setCronPlatform(platform);
}

// Better - fires once per worker before any open:
export function init({ platform }) {
  setCronPlatform(platform);
}
```

Cron ticks scheduled before the first WS connect will fire into a no-op platform if you wire from open. The init lifecycle hook closes that window. Same applies to live.configurePush({ remoteRegistry }), bus.activate(platform), and any other once-per-worker setup that needs platform. See Lifecycle Hooks.
init fires per worker in clustered mode
```js
export function init({ platform }) {
  // fires N times in a cluster of N workers
  setCronPlatform(platform);
  // `leader` comes from createLeader, `bus` from your cluster bus setup
  configureCron({ leader: () => leader.isLeader(), bus });
}
```

init is per-worker, not per-cluster. If you want cluster-wide singleton semantics (one-cron-per-cluster instead of N-crons), layer leader election on top via configureCron({ leader, bus }) with createLeader. Without leader election, every worker fires every tick.
Defaults that flipped between 0.4 and 0.5
maxPayloadLength default raised from 16 KB to 1 MB
Apps that relied on the 16 KB cap as DoS protection should pin it explicitly:
```js
adapter({ websocket: { maxPayloadLength: 16 * 1024 } });
```

Pre-0.5 the adapter matched uWS’s own 16 KB default, which forced chunked-upload frameworks into ~12 KB chunks. 1 MB is a balanced default that handles typical app payloads in a single frame. DoS protection is layered: upgradeAdmission.maxConcurrent caps connection count, maxBackpressure caps per-connection outbound queue. See Production Limits.
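A sketch of pinning all three knobs together (values are illustrative, and the exact placement of upgradeAdmission should be confirmed on the Production Limits page):

```js
adapter({
  websocket: {
    maxPayloadLength: 16 * 1024,                 // per-frame payload cap
    maxBackpressure: 1024 * 1024,                // per-connection outbound queue cap
    upgradeAdmission: { maxConcurrent: 10_000 }  // cap on concurrent connections
  }
});
```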
queue plugin maxSize default dropped from Infinity to 1,000,000
```js
// Pre-0.5 default: unbounded queue
adapter({ websocket: { queue: { /* maxSize: Infinity */ } } });

// 0.5 default: tasks past 1,000,000 per key drop via onDrop.
// Explicit opt-out back to unbounded:
adapter({ websocket: { queue: { maxSize: Infinity } } });
```

Any real workload reaching 1M waiting tasks per key likely has a leak, but if you relied on unbounded growth, opt back in explicitly.
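If you keep the cap, it may be worth observing drops rather than losing tasks silently. A sketch, with the caveat that onDrop's exact arguments are an assumption - check the queue plugin docs:

```js
adapter({
  websocket: {
    queue: {
      maxSize: 1_000_000,
      // Assumed handler shape - count or log dropped tasks for alerting.
      onDrop: (task, key) => console.warn('queue drop for key', key)
    }
  }
});
```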
live.cron no longer fires concurrently with itself
```js
live.cron('* * * * *', 'job', async () => {
  await veryLongRunningWork(); // may take >1 minute
});
```

If a tick fires before the previous invocation completes, the new tick is skipped (and increments cronCount{status: 'skipped'}). Long-running cron jobs that relied on overlap need to either let prior invocations finish, or split the work into independent jobs with different paths.
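A sketch of the split (job names and helper functions are illustrative):

```js
// Each job path tracks its own in-flight invocation, so a slow invoice sync no
// longer causes the payment sync's ticks to be skipped.
live.cron('*/10 * * * *', 'sync-invoices', async () => {
  await syncInvoices();
});

live.cron('*/10 * * * *', 'sync-payments', async () => {
  await syncPayments();
});
```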
createCursor defaults flipped to a 60 Hz world-state tick
```js
// 0.4 default: every cursor update broadcasts immediately
// 0.5 default: at most one bulk frame per 16 ms window per topic
createCursor(redis); // = { throttle: 16, topicThrottle: 16 }
```

Bandwidth-efficient for the common case (collaborative whiteboards, dashboards). To restore previous immediate-broadcast behavior:

```js
createCursor(redis, { topicThrottle: 0, throttle: 50 });
```

Postgres extensions
Tables: ws_* -> svti_*
Fresh deploys get svti_* tables automatically. Existing deploys upgrading from 0.4 need either an ALTER TABLE migration or per-factory table overrides:
```js
createReplay(pg, { table: 'ws_replay' });
createIdempotencyStore(pg, { table: 'ws_idempotency' });
createJobQueue(pg, { table: 'ws_jobs' });
createTaskRunner(pg, { table: 'ws_tasks' });
```

See the extensions migration doc for the in-place SQL.
In-flight live.idempotent cache entries are invisible after deploying 0.5
```
[before deploy] cache slot: "myKey"            -> result
[after deploy]  cache slot: "rpc:myPath:myKey" -> miss, re-runs handler
```

The old un-namespaced keys still exist in Redis until they TTL out (default 48 hours), but the new build won’t see them. Any in-flight retry that hits the new build re-runs the handler. Schedule the deploy for a low-traffic window if this matters, or accept the re-runs as a known one-time cost.
Dev and tooling
Two browser tabs in dev with HMR may diverge
When you save src/live/*.js in dev, the Vite plugin hot-reloads server-side handler registrations - existing subscribers keep their data and connection, but the init function only runs on new subscriptions. Two tabs open across an HMR cycle may see different init data until refreshed.
Production is unaffected - this is a dev-only HMR artifact.
npm run preview doesn’t serve WebSocket
SvelteKit’s preview server uses Vite’s built-in HTTP server, which doesn’t know about WebSocket upgrades. Use npm run build && node build instead:

```sh
npm run build
node build
```

The dev server (npm run dev) does serve WebSocket via the bundled Vite plugin.
uWebSockets.js needs the right Node + glibc
The native addon expects glibc >= 2.38 on Linux. Use node:22-trixie-slim or node:22-bookworm-slim for Docker; Alpine doesn’t work (musl, not glibc). On Windows, the Visual C++ Build Tools must be installed; on Linux, build-essential.
If npm install succeeds but node build throws “Could not load uWebSockets.js”, check the platform-base-image combination first - it’s the most common cause.
See also
- Migration 0.4 to 0.5 - executive summary of breaking changes
- Upgrade Quickstart - the 5-thing version
- Architecture - the technical tour
- Production Limits - the layered admission model