# Uploads
`live.upload()` is a streaming-upload primitive. The handler consumes chunks via `for await` over `ctx.stream`, the framework discovers the adapter's `maxPayloadLength` and right-sizes frames automatically, and the client paces sends against `conn.bufferedAmount` so a fast disk does not blow up the WebSocket send queue.

For one-shot small payloads under ~10 MB, `live.binary()` is simpler. Reach for `live.upload()` when payloads can be large, when progress UX matters, or when sessions can be revoked mid-stream.
## Server
```js
// src/live/uploads.js
import { live, LiveError } from 'svelte-realtime/server';

// `storage` stands in for your app's blob store; bring your own writer.
export const avatar = live.upload(async (ctx, name, mime) => {
  if (!ctx.user) throw new LiveError('UNAUTHENTICATED');
  if (!mime.startsWith('image/')) throw new LiveError('VALIDATION', 'Images only');

  const path = `avatars/${ctx.user.id}/${name}`;
  const writer = await storage.createWriter(path, { mime });
  let total = 0;

  for await (const chunk of ctx.stream) {
    if (ctx.signal.aborted) break; // client cancel, disconnect, or re-auth failure
    await writer.write(chunk);
    total += chunk.byteLength;
  }

  await writer.close();
  return { path, size: total };
});
```

The handler signature is `(ctx, ...args) => result`. Args after `ctx` are arbitrary metadata sent at upload start (filename, mime, target id, etc.) - they go through the same JSON-validation path as `live()` RPCs, including `live.upload({ schema })`.
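If your `live()` RPCs validate args with a schema, uploads can too. A minimal sketch - the tuple-of-schemas shape and the Zod validator here are assumptions; use whatever validator contract your `live()` endpoints already accept:

```js
import { z } from 'zod'; // assumption: your live() setup accepts Zod schemas

export const avatarChecked = live.upload(
  async (ctx, name, mime) => { /* ... same handler as above ... */ },
  {
    // Assumed shape: one schema per metadata arg, checked at chunk-0 before
    // the handler runs. Rejections surface to the client as VALIDATION errors.
    schema: [z.string().min(1).max(255), z.string().regex(/^image\//)]
  }
);
```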
`ctx.stream` is an `AsyncIterable<Uint8Array>`. `ctx.signal` is an `AbortSignal` that fires on client cancel, disconnect, or mid-stream re-auth failure.
## Caps
| Option | Default | Description |
|---|---|---|
| `maxSize` | `100 * 1024 * 1024` (100 MB) | Per-upload byte cap. Aborts with `PAYLOAD_TOO_LARGE` past the cap. |
| `maxConcurrentPerSession` | `4` | In-flight uploads per WebSocket. Past the cap, new uploads reject synchronously with `OVERLOADED`. |
| `maxConcurrentTotal` | unbounded | Total in-flight uploads across all sessions. |
| `maxBufferedChunks` | `64` | High-water mark for the per-upload chunk queue. Backpressure is signaled to the client when it is reached. |
| `reauthEvery` | unset | Re-run the module guard against the live `ctx` every N bytes. See Mid-stream re-auth below. |
```js
export const avatar = live.upload(handler, {
  maxSize: 25 * 1024 * 1024,
  maxConcurrentPerSession: 2
});
```

There is also a process-wide aggregate cap on the pre-handler buffer (chunk-0 frames buffered before the handler resolves): 64 MB by default across all in-flight pending uploads. Chunk-0 frames that would exceed it are rejected with `OVERLOADED`, and the `streamId` is never registered.
## Mid-stream re-auth
By default, the module guard runs once at chunk-0 and the upload keeps running under the original auth grant - a session revoked mid-upload (token expiry, explicit logout, role downgrade) goes unnoticed. `reauthEvery: <bytes>` re-runs the same guard against the live `ctx` every N bytes received since the last re-auth. If the guard rejects, the upload aborts with `UNAUTHENTICATED` or `FORBIDDEN`, and the handler observes the abort via `ctx.signal`.
```js
export const longUpload = live.upload(
  async (ctx, name) => { /* ... */ },
  { reauthEvery: 16 * 1024 * 1024 } // re-check the session every 16 MB
);
```

The default is unset (the legacy behavior: the guard runs once at chunk-0). Opt in per upload, because not every upload has a meaningful re-auth boundary.
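On the client, a failed mid-stream re-auth surfaces as a rejection of the upload handle (`uploadAvatar` below is the client-side factory from the Client section). A sketch of one recovery pattern - `reauthenticate()` and the retry policy are your app's, not the library's:

```js
async function uploadWithRetry(file) {
  try {
    return await uploadAvatar(file, file.name, file.type);
  } catch (err) {
    if (err.code === 'UNAUTHENTICATED') {
      await reauthenticate(); // hypothetical: refresh the session however your app does
      // The aborted stream is gone; re-send from the start under the fresh session.
      return await uploadAvatar(file, file.name, file.type);
    }
    throw err;
  }
}
```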
## Client
```svelte
<script>
  import { __upload } from '$live';

  const uploadAvatar = __upload('avatar');

  let progress = $state(0);
  let speed = $state(0);

  async function handleFile(e) {
    const file = e.target.files[0];
    const handle = uploadAvatar(file, file.name, file.type);

    handle.on('progress', () => {
      progress = handle.progress;
      speed = handle.bytesPerSec;
    });

    const result = await handle;
    console.log('uploaded', result);
  }
</script>

<input type="file" onchange={handleFile} />

{#if progress > 0}
  <progress value={progress} max="1"></progress>
  <span>{Math.round(speed / 1024 / 1024 * 10) / 10} MB/s</span>
{/if}
```

`__upload(path)` returns a factory. Calling it with a source plus the handler's args returns an `UploadHandle` that is both a Promise (it resolves to the handler's return value) and an event emitter.
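When progress UX doesn't matter, the Promise face alone is enough:

```js
async function handleFile(e) {
  const file = e.target.files[0];
  // Await the handle directly; no event listeners needed.
  // `path` and `size` come from the server handler's return value.
  const { path, size } = await uploadAvatar(file, file.name, file.type);
  console.log('stored at', path, `(${size} bytes)`);
}
```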
### Sources
- `Blob` / `File`
- `ArrayBuffer` / `ArrayBufferView`
- `ReadableStream<Uint8Array>`
```js
const handle = uploadAvatar(file, file.name, file.type); // Blob/File
const handle2 = uploadAvatar(buffer, name);              // ArrayBuffer
const handle3 = uploadAvatar(stream, name);              // ReadableStream<Uint8Array>
```

### Handle API
| Property | Type | Description |
|---|---|---|
| `then` | thenable | Resolves to the handler's return value; rejects with `RpcError`. |
| `sent` | `number` | Bytes sent so far. |
| `total` | `number` | Total bytes if known (from `Blob.size` / `ArrayBuffer.byteLength`); `undefined` for streams. |
| `chunks` | `number` | Frames sent so far. |
| `progress` | `number` | `0..1` if total is known, else `0`. |
| `bytesPerSec` | `number` | Rolling-window throughput estimate. |
| `streamId` | `number` | Server-side identifier for this upload. |
| `streamIdHex` | `string` | Hex form, for logging. |
| `abort()` | method | Cancels the upload: sends a cancel frame, and the handle rejects with `RpcError('ABORTED')`. |
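Wiring `abort()` to a cancel button is the typical use. A sketch - `currentHandle` is just hypothetical local state:

```js
let currentHandle = null; // holds the in-flight upload, if any

function startUpload(file) {
  currentHandle = uploadAvatar(file, file.name, file.type);
  currentHandle.catch((err) => {
    if (err.code === 'ABORTED') return; // user cancelled; nothing to surface
    showError(err);
  });
}

function cancelUpload() {
  currentHandle?.abort(); // any pending await on the handle rejects with RpcError('ABORTED')
}
```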
Events: `progress`, `complete`, `error`, `cancel`. All emit synchronously from the message-handling microtask.
```js
handle.on('progress', () => updateUI(handle.progress));
handle.on('error', (err) => showError(err));
handle.on('complete', (result) => showSuccess(result));
```

### Tuning the client pump
```js
import { configure } from 'svelte-realtime/client';

configure({
  upload: {
    frameSize: 256 * 1024,          // override the auto-discovered wire frame size
    highWaterMark: 4 * 1024 * 1024, // pause sends past 4 MB buffered
    lowWaterMark: 1 * 1024 * 1024   // resume once buffered drops below 1 MB
  }
});
```

The defaults work for most apps. `frameSize` is auto-discovered from `platform.maxPayloadLength`: the server piggybacks the adapter cap on the first upload-response envelope, the client caches the value globally, and subsequent uploads size frames at the full discovered cap. Envelope overhead (10 bytes on chunks 1+, 12 + argsLen bytes on chunk 0) is subtracted internally, so the wire frame never exceeds the cap. The first upload uses a conservative 12 KB default; subsequent uploads typically run at ~1 MB frames against the 1 MB adapter default. A `frameSize` above the discovered cap is clamped down with a one-time dev warning.
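To make the envelope arithmetic concrete - illustrative numbers only, using the overheads stated above:

```js
// Against the default 1 MB uWS cap:
const cap = 1024 * 1024; // 1,048,576 bytes

// Chunks 1+ carry a 10-byte envelope, so the payload per frame is:
const chunkPayload = cap - 10; // 1,048,566 bytes

// Chunk 0 also carries the JSON-encoded metadata args (argsLen bytes):
const argsLen = 64;                       // e.g. filename + mime as JSON
const chunk0Payload = cap - 12 - argsLen; // 1,048,500 bytes

// The framework performs this subtraction itself; frames never exceed the cap.
```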
**Renamed in 0.5.** `chunkSize` was the previous name; it is still accepted as a deprecated alias, with a one-time dev warning pointing at `frameSize`. The semantics also shifted: pre-rename, the value was raw payload bytes per chunk with no clamp, which let an unwary user build frames slightly over the cap (envelope overhead pushed them over), and uWS closed the connection with code 1009. The new value is the wire frame size, with the hard clamp and per-chunk envelope subtraction handled by the framework.
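Existing configs keep working during the migration - a sketch:

```js
// 0.4-era name: still accepted, logs a one-time dev warning.
configure({ upload: { chunkSize: 256 * 1024 } });

// 0.5+ name and semantics: wire frame size, clamped to the adapter cap.
configure({ upload: { frameSize: 256 * 1024 } });
```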
## Backpressure
The client pump checks `conn.bufferedAmount` after every chunk. Past `highWaterMark` (default 4 MB), sends pause until the queue drains below `lowWaterMark` (default 1 MB). The drain poll fires every 50 ms.

On 0.4 adapters without `bufferedAmount`, the pump falls back to the previous unbounded-queue behavior with no source change.
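A simplified sketch of the pacing loop - illustrative only, not the library's actual pump:

```js
// Pause past the high-water mark; resume once the queue drains below the low one.
async function pacedSend(conn, frames, { highWaterMark, lowWaterMark, pollMs = 50 }) {
  for (const frame of frames) {
    conn.send(frame);
    if (conn.bufferedAmount > highWaterMark) {
      while (conn.bufferedAmount > lowWaterMark) {
        await new Promise((resolve) => setTimeout(resolve, pollMs));
      }
    }
  }
}
```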
## Error handling
A server-side `LiveError` thrown inside the handler propagates to the client as an `RpcError`:
```js
const handle = uploadAvatar(file, name, mime);

try {
  await handle;
} catch (err) {
  switch (err.code) {
    case 'UNAUTHENTICATED':   /* session expired mid-upload */ break;
    case 'OVERLOADED':        /* too many concurrent uploads */ break;
    case 'VALIDATION':        /* arg validation rejected */ break;
    case 'PAYLOAD_TOO_LARGE': /* exceeded maxSize */ break;
    case 'ABORTED':           /* client cancelled or disconnected */ break;
  }
}
```

## Lifecycle
The handler runs to completion or until `ctx.signal` aborts. Aborts fire on:

- Client `handle.abort()` (sends a cancel frame).
- WebSocket disconnect.
- Mid-stream re-auth failure (`reauthEvery`).
- Cap saturation (`maxSize`, `maxConcurrentPerSession`, aggregate pending buffer).
Whatever resources the handler holds (open file descriptors, pending DB transactions) should listen on `ctx.signal` and unwind, as sketched below. The framework auto-drains its own internal bookkeeping via the existing close-hook chain.
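A sketch of that cleanup pattern - `db.begin()`, `parseRows()`, and the rollback are hypothetical stand-ins for whatever your handler actually holds:

```js
export const importCsv = live.upload(async (ctx, name) => {
  const tx = await db.begin(); // hypothetical transactional store
  ctx.signal.addEventListener('abort', () => {
    tx.rollback(); // unwind on cancel, disconnect, re-auth failure, or cap abort
  });

  for await (const chunk of ctx.stream) {
    await tx.insertRows(parseRows(chunk)); // hypothetical chunk parsing
  }

  await tx.commit();
  return { name };
});
```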