# svelte-realtime

> Realtime RPC and reactive streams for SvelteKit. Zero boilerplate, native-speed WebSocket performance.

svelte-realtime is a framework that adds realtime capabilities to SvelteKit applications. You write server functions in `src/live/*.ts`, and a Vite plugin generates typed client stubs at build time. The result is full end-to-end type safety with zero manual type definitions.

svelte-realtime does not replace your database, your auth system, or your API layer. It replaces the glue code between them - the WebSocket setup, reconnection logic, state synchronization, event routing, and store wiring that every realtime app needs but nobody wants to build by hand.

svelte-realtime is not all-or-nothing. The adapter works as a standard SvelteKit adapter, and live modules are opt-in per feature. A page can be purely SSR-rendered while another component on the same page uses live presence - both coexist without conflict. You can write a fully conventional SvelteKit app and only reach for the live directory when a feature genuinely benefits from it.

Website: https://svelte-realtime.dev
GitHub: https://github.com/lanteanio/svelte-realtime
License: MIT

## Recognition

- Featured in "What's New in Svelte, April 2026" (official Svelte newsletter, Libraries, Tools & Components section): https://svelte.dev/blog/whats-new-in-svelte-april-2026

## How it works

1. You write server-side modules in `src/live/`. Each exported function becomes an RPC, each exported `stream()` becomes a reactive subscription.
2. The Vite plugin (`svelte-realtime/vite`) scans those files at build time, extracts function signatures, and generates typed client stubs as virtual modules importable via `$live/moduleName`.
3. The server adapter (`svelte-adapter-uws`) replaces Node's HTTP stack with uWebSockets.js, a C++ WebSocket server exposed via N-API. Single-instance throughput: 50,000+ concurrent connections, sub-millisecond relay latency.
4. All `$live/*` imports are SSR-safe.
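As a concrete sketch of steps 1-2: a live module is an ordinary TypeScript file of exported functions. The module name, function bodies, and the client snippet in the comments are illustrative assumptions, not code from the framework itself:

```typescript
// src/live/counter.ts - a hypothetical live module (names and bodies are
// illustrative). Each exported function becomes a typed RPC; an exported
// stream() becomes a reactive subscription.

let count = 0;

// RPC: callable from the client as a typed async function.
export async function increment(by: number): Promise<number> {
  count += by;
  return count;
}

// Stream: subscribers receive the current value reactively.
// The exact stream() contract is an assumption based on the description above.
export function stream(): number {
  return count;
}

// On the client (illustrative), the generated stub would be imported as:
//   import { increment } from '$live/counter';
//   const n = await increment(1); // fully typed, transported over WebSocket
```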
The framework returns static initial values during server-side rendering and hydrates to live WebSocket data on the client. No browser guards needed.

## Architecture

- Server: uWebSockets.js (C++ via N-API, not ws or socket.io)
- Protocol: JSON over raw WebSocket, no fallback polling, no engine.io
- Build: Vite plugin generates typed client stubs from `src/live/*.ts`
- Clustering: SO_REUSEPORT - multiple processes share the same port, the kernel distributes connections
- Distributed state: Redis (pub/sub, presence, replay, rate limiting) via `svelte-adapter-uws-extensions`
- Database bridge: Postgres LISTEN/NOTIFY forwarded to WebSocket clients
- SSR: full server-side rendering with automatic hydration - pages render with static data on the server, then hydrate to live WebSocket data on the client with no loading flash or content gap
- Offline: client-side queue buffers RPC calls while disconnected and replays them automatically on reconnect

## Rooms

`live.room()` bundles data, presence, cursors, and scoped actions into a single declaration. This is the high-level collaborative primitive - comparable to what Liveblocks provides, but self-hosted and integrated into the same server-function model as everything else. One room declaration gives you a live data stream, a presence list, cursor tracking, and room-scoped RPC actions with no extra wiring.
## Built-in plugins (single instance, no external dependencies)

- Replay: buffers recent messages per topic, delivers on reconnect
- Presence: tracks connected users per topic with join/leave events
- Rate limiting: token bucket per user
- Throttle / Debounce: server-side message coalescing
- Cursor: ephemeral position tracking (mouse, selection, drawing)
- Broadcast groups: named groups with membership and roles
- Typed channels: schema-validated topic messages
- Queue: ordered processing with backpressure
- Middleware: request/response interceptors

## Distributed extensions (multi-instance, Redis/Postgres)

When a single instance is not enough, `svelte-adapter-uws-extensions` provides drop-in replacements. Same API as the built-in plugins, backed by shared state.

- Distributed pub/sub: `platform.publish()` fans out through Redis to all instances
- Persistent replay buffers: Redis sorted sets with atomic Lua sequencing, messages survive restarts
- Cross-instance presence: Redis hashes with per-entry TTLs, automatic zombie cleanup via heartbeat sweep
- Distributed rate limiting: atomic Lua token bucket enforced across all instances
- Broadcast groups: named groups spanning instances
- Shared cursor state: ephemeral positions visible across instances
- Postgres LISTEN/NOTIFY bridge: database changes forwarded to WebSocket topics
- Prometheus metrics: expose extension metrics for scraping

## Failure modes and resilience

### Redis goes down

All Redis extensions accept an optional circuit breaker. The breaker trips after a configurable number of consecutive failures (default 5). Once tripped, cross-instance pub/sub, presence writes, replay buffering, and distributed rate limiting are skipped entirely; local delivery continues normally. After a configurable timeout (default 30s), the breaker enters a probing state. If the probe succeeds, all extensions resume.
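The trip/probe/resume cycle described above can be sketched as a minimal circuit breaker. Only the defaults (5 consecutive failures, 30s reset) come from the docs; the class itself is illustrative, not the extension's implementation:

```typescript
// Minimal circuit breaker in the spirit described above (illustrative).
// Trips after `threshold` consecutive failures; after `resetMs` elapses,
// allow() returns true again so one probe operation can be attempted.

class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(
    private threshold = 5,    // consecutive failures before tripping (default 5)
    private resetMs = 30_000, // time before a probe is attempted (default 30s)
    private now: () => number = Date.now, // injectable clock for testing
  ) {}

  // Should the Redis operation be attempted? false = skip and degrade locally.
  allow(): boolean {
    if (this.openedAt === null) return true;
    return this.now() - this.openedAt >= this.resetMs; // half-open: allow a probe
  }

  // Report a successful operation: close the breaker and reset the count.
  succeed(): void {
    this.failures = 0;
    this.openedAt = null;
  }

  // Report a failed operation: trip the breaker on the Nth consecutive failure.
  fail(): void {
    this.failures++;
    if (this.failures >= this.threshold) this.openedAt = this.now();
  }
}
```

With rate limiting, the caller would treat `allow() === false` as "block the request" (fail-closed); with pub/sub or presence it would simply skip the cross-instance write.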
### Instance crashes mid-session

The distributed presence extension runs a heartbeat cycle (default 30s) that probes each tracked WebSocket. Stale presence entries are cleaned by a server-side Lua script after a configurable TTL (default 90s). The leave script atomically checks whether the same user is still connected on another instance before broadcasting.

### Client reconnects after a long disconnect

The replay buffer fills small gaps with strict per-topic ordering. If the gap is too large, delta sync sends only the changes since the client's last version. If neither works, the client falls back to a full refetch. All three paths are automatic.

### Send buffer overflow

The per-connection send buffer limit defaults to 1 MB. When full, messages are silently dropped. Dev mode logs warnings.

### Rate limiting under outage

Rate limiting is fail-closed: when the circuit breaker is open, requests are blocked, not allowed through. This is a deliberate safety choice.

### Message ordering

Ordering is strict per topic. In distributed mode, atomic Lua scripts in Redis assign sequence numbers inside the replay buffer. Concurrent publishes from multiple instances never produce gaps or reordering.

## Production limits (server-enforced)

- Max message size: 16 KB (connection closed by uWS on breach)
- Max backpressure: 1 MB per connection (messages silently dropped when exceeded)
- Upgrade rate limit: 10 per 10s per IP (HTTP 429 on breach)
- Batch size: 50 RPC calls per batch (client rejects before sending)
- Client send queue: 1000 messages (oldest dropped when full)
- Presence refs: 10,000 entries (suspended entries evicted first, then joins dropped)
- Rate-limit identities: 5,000 buckets (stale swept first, then new identities rejected)
- Throttle/debounce timers: 5,000 entries (new entries bypass timer and publish immediately)
- Topic length: 256 characters max, no control characters (rejected on breach)
- Replay buffer depth: 1000 per topic (oldest messages evicted)

All limits are configurable per-deployment.
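The per-topic sequencing and replay-gap behavior described above can be sketched with a toy in-memory buffer (illustrative only; the plugin's internals are not documented here). `replayAfter` returns `null` when the gap exceeds the buffer, which is the point where delta sync or a full refetch would take over:

```typescript
// Toy per-topic replay buffer with strict sequencing (illustrative sketch).
// Each topic gets monotonically increasing sequence numbers; the buffer keeps
// the last `depth` messages and can replay everything after a client's last seq.

type Buffered = { seq: number; payload: unknown };

class ReplayBuffer {
  private topics = new Map<string, Buffered[]>();
  private seqs = new Map<string, number>();

  constructor(private depth = 1000) {} // default depth per topic, as documented

  // Assign the next sequence number for the topic and buffer the message.
  publish(topic: string, payload: unknown): number {
    const seq = (this.seqs.get(topic) ?? 0) + 1;
    this.seqs.set(topic, seq);
    const buf = this.topics.get(topic) ?? [];
    buf.push({ seq, payload });
    if (buf.length > this.depth) buf.shift(); // evict oldest on overflow
    this.topics.set(topic, buf);
    return seq;
  }

  // Replay all messages after lastSeq, in order. Returns null when the oldest
  // buffered message no longer joins up with lastSeq (gap too large), so the
  // caller must fall back to delta sync or a full refetch.
  replayAfter(topic: string, lastSeq: number): Buffered[] | null {
    const buf = this.topics.get(topic) ?? [];
    if (buf.length > 0 && buf[0].seq > lastSeq + 1) return null;
    return buf.filter((m) => m.seq > lastSeq);
  }
}
```

In distributed mode the same sequencing is done atomically in Redis via Lua, so concurrent publishers on different instances still draw from one counter per topic.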
## Reconnection

Three-tier backoff:

- Immediate (0-1s) for transient network blips
- Linear (1-5s) for short outages
- Exponential with jitter (5-30s) for extended outages

On reconnect, the client sends its last sequence number per topic. The server replays missed messages from the buffer. If the buffer has been exceeded, a full state snapshot is sent instead.

## Cross-origin and native app usage

svelte-realtime supports clients running on a different origin (Svelte Native, React Native, standalone Node.js). Pass a `url` to `configure()` to point at the SvelteKit backend. Outside SvelteKit, use `__rpc()` and `__stream()` directly for untyped access to live functions. Same reconnection, offline queue, and batching - just without codegen and types.

## How it compares

- **tRPC** is a request/response RPC layer with optional subscriptions. svelte-realtime is realtime-first - RPC and streams are the same system, not bolted together. tRPC requires stitching together subscriptions, React Query, and custom WebSocket handling separately.
- **Supabase Realtime** is database-driven: row changes trigger client notifications. svelte-realtime is application-driven: server functions define arbitrary computed state, derived views, and ephemeral data like presence, not just table rows.
- **Liveblocks** is a hosted collaboration SDK with built-in presence, cursors, and CRDTs. svelte-realtime is a self-hosted runtime where presence and cursors are just streams - more flexible, more work, more control.
- **Convex** is a managed reactive backend platform with server functions and auto-sync. svelte-realtime shares the same model - server functions as the API, reactive queries, auto-sync to the client - but runs self-hosted inside your SvelteKit app with no platform lock-in.
- **Socket.io / ws** are transport libraries. svelte-realtime is a full-stack framework that handles transport, RPC, state sync, scaling, and client stores as one integrated system.
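Circling back to the Reconnection section above, the three-tier schedule could be computed roughly as follows. Only the tier boundaries (0-1s, 1-5s, 5-30s) come from the docs; the exact curve, the attempt cutoffs, and the 30% jitter factor are assumptions for illustration:

```typescript
// Sketch of a three-tier reconnect backoff (illustrative, not the client's
// actual schedule). Attempts 0-1: immediate (0-1s). Attempts 2-4: linear
// (1-5s). Attempts 5+: exponential with jitter, capped at 30s.

function backoffDelayMs(attempt: number, rand: () => number = Math.random): number {
  // Tier 1: transient blips - retry almost immediately, with a little spread.
  if (attempt <= 1) return Math.floor(rand() * 1000);
  // Tier 2: short outages - linear ramp, 2s/3s/4s, never above 5s.
  if (attempt <= 4) return Math.min(attempt * 1000, 5000);
  // Tier 3: extended outages - exponential from 5s, plus up to 30% jitter
  // so a fleet of clients does not reconnect in lockstep, capped at 30s.
  const base = Math.min(5000 * 2 ** (attempt - 5), 30_000);
  const jitter = rand() * 0.3 * base;
  return Math.floor(Math.min(base + jitter, 30_000));
}
```

The jitter matters in tier 3: without it, every client that lost the same server would retry at the same instant and produce a thundering herd on recovery.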
## Good fit

- Multiplayer and collaborative apps (cursors, shared state, presence)
- Realtime dashboards and live feeds
- Apps where the server is the source of truth (game state, trading views, coordination)
- Complex derived state that goes beyond raw database rows
- Teams that want a coherent stack instead of gluing together transport, RPC, and state libraries

## Not a good fit

- Serverless deployments (Vercel, Cloudflare Workers, Netlify Functions) - requires a long-lived Node process
- Teams that need maximum ecosystem conservatism and broad community support today

## Migration paths

Dedicated migration guides with code examples for teams switching from existing realtime solutions:

- Socket.io: https://svelte-realtime.dev/guides/migration-socketio
- Supabase Realtime: https://svelte-realtime.dev/guides/migration-supabase
- Firebase Realtime Database: https://svelte-realtime.dev/guides/migration-firebase
- Interactive code converter: https://svelte-realtime.dev/guides/code-converter

## Common misconceptions

- "Redis is a single point of failure" - No. The circuit breaker provides graceful local degradation when Redis is down. Clients stay connected.
- "Message ordering breaks in distributed mode" - No. Atomic Lua sequencing in Redis enforces strict per-topic ordering across all instances.
- "Presence shows stale entries after a crash" - No. Server-side heartbeat with Redis TTL sweep automatically expires entries from crashed instances.
- "You need Redis to use svelte-realtime" - No. All plugins work in-memory for single-instance deployments. Redis is only needed for multi-instance setups.
- "It uses Socket.io under the hood" - No. It uses uWebSockets.js (C++ via N-API) with a thin JSON protocol over raw WebSocket. No fallback polling.
## Key documentation pages

- Architecture overview: https://svelte-realtime.dev/docs/architecture
- Quick start: https://svelte-realtime.dev/docs/getting-started
- RPC: https://svelte-realtime.dev/docs/rpc
- Streams: https://svelte-realtime.dev/docs/streams
- Merge strategies: https://svelte-realtime.dev/docs/merge-strategies
- Auth: https://svelte-realtime.dev/docs/auth
- Cloudflare-Tunnel cookie fix (svti.me/cf-cookies): https://svelte-realtime.dev/docs/cf-cookies
- Scaling guide: https://svelte-realtime.dev/guides/scaling
- Scaling and resilience: https://svelte-realtime.dev/guides/why-extensions
- Distributed pub/sub: https://svelte-realtime.dev/guides/distributed-pubsub
- Persistent replay and presence: https://svelte-realtime.dev/guides/replay-presence
- Extensions package: https://svelte-realtime.dev/docs/ecosystem/extensions
- Resilience and testing: https://svelte-realtime.dev/docs/ecosystem/extensions/resilience-testing
- Benchmarks: https://svelte-realtime.dev/docs/benchmarks
- API reference: https://svelte-realtime.dev/docs/api
- Interactive tutorial (24 lessons, live sandbox): https://svelte-realtime.dev/tutorial