Multi-tenant deployments

When several tenants share one Redis cluster or one Postgres database, every extension that talks to those backends needs explicit tenant scoping. The framework does not infer tenancy - the application has to pass it through. This page is the operational checklist: which knobs each extension exposes, where the foot-guns are, and a full per-tenant wiring example.

The simpler alternative is a Redis (or Postgres) instance per tenant, plumbed through createRedisClient({ host, port, db }) at the tenant boundary. That removes most of this page. The patterns below apply when that is too expensive or not possible.

The two-namespace problem

A Redis client carries two namespaces:

  • Keys. Anything stored under a key (hashes, lists, strings, fences, idempotency results). ioredis’s keyPrefix: 'tenant-A:' automatically prepends the prefix to every key command, so GET foo becomes GET tenant-A:foo on the wire.
  • Channels. Anything published or subscribed via pub/sub (PUBLISH, SUBSCRIBE, PSUBSCRIBE). The keyPrefix option does NOT apply to pub/sub commands. A SUBSCRIBE foo is wire-literal SUBSCRIBE foo regardless of the prefix.
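To make this concrete, here is a minimal sketch. The tenantChannel helper is hypothetical (not part of the extensions package); it simply does for channel names what ioredis's keyPrefix does automatically for keys:

```javascript
// keyPrefix covers keys automatically:
//   redis.get('foo')  → GET tenant-A:foo on the wire
// but NOT pub/sub, so channel names must be prefixed by hand.
function tenantChannel(tenantId, channel) {
  return `${tenantId}:${channel}`;
}

// redis.publish(tenantChannel('tenant-A', 'pubsub'), payload)
//   → PUBLISH tenant-A:pubsub on the wire
console.log(tenantChannel('tenant-A', 'pubsub')); // tenant-A:pubsub
```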

The asymmetry is the foot-gun. A multi-tenant deployment that sets createRedisClient({ keyPrefix: 'tenant-A:' }) and assumes full isolation is still publishing on the literal channel uws:pubsub - the same channel every other tenant on the same Redis is subscribed to. Every cross-tenant pub/sub event flows through that channel.

The fix is one knob per extension. The defaults are listed below; the override pattern is the same shape for all three.

Extensions with globally-named channels

  • createPubSubBus - default channel uws:pubsub (a single channel for all topics). Override: createPubSubBus(redis, { channel: 'tenant-A:pubsub' })
  • createShardedBus - default channels uws:sharded:* (one channel per topic; the * is the topic name). Override: createShardedBus(redis, { channelPrefix: 'tenant-A:sharded:' })
  • createPublishRateAggregator - default channel uws:pressure:rates. Override: createPublishRateAggregator(redis, { channel: 'tenant-A:pressure:rates' })

Per-tenant overrides add no throughput or memory overhead; the only cost is the operator discipline to apply them consistently across the stack.

createPubSubBus

import { createRedisClient, createPubSubBus } from 'svelte-adapter-uws-extensions/redis';

function tenantBus(tenantId) {
  const redis = createRedisClient({ keyPrefix: `${tenantId}:` });
  return createPubSubBus(redis, { channel: `${tenantId}:pubsub` });
}

Both knobs matter here: keyPrefix scopes every key the bus writes for back-pressure and health bookkeeping, and the explicit channel option scopes the wire pub/sub.

createShardedBus

The sharded bus uses one Redis channel per topic (so the per-channel subscriber set is smaller than the all-bus subscriber set). The override is a prefix because the channel names are derived from topic names:

import { createShardedBus } from 'svelte-adapter-uws-extensions/redis';

const sharded = createShardedBus(redis, {
  channelPrefix: `${tenantId}:sharded:`
});

Topic-name collision. Because the sharded bus derives one channel per topic, naming a topic the same across tenants does not collide as long as channelPrefix differs. Apps that put tenant IDs INSIDE the topic name (room:tenant-A:lobby) get redundant safety; either approach works, but consistency reduces operator surprise.
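The non-collision property follows directly from how the channel names are built. A sketch of the derivation (deriveChannel is hypothetical; the real derivation is internal to createShardedBus, but the stated shape is prefix + topic name):

```javascript
// One Redis channel per topic: channel = channelPrefix + topic.
function deriveChannel(channelPrefix, topic) {
  return channelPrefix + topic;
}

// The same topic name under different prefixes lands on different
// wire channels, so tenants cannot collide:
deriveChannel('tenant-A:sharded:', 'lobby'); // 'tenant-A:sharded:lobby'
deriveChannel('tenant-B:sharded:', 'lobby'); // 'tenant-B:sharded:lobby'
```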

createPublishRateAggregator

import { createPublishRateAggregator } from 'svelte-adapter-uws-extensions/redis/publish-rate';

const rates = await createPublishRateAggregator(redis, {
  channel: `${tenantId}:pressure:rates`
});

Postgres LISTEN/NOTIFY

The Postgres bridge createNotifyBridge takes the channel name from the caller. No default to worry about; whatever channel name you LISTEN on is what your trigger fires on. Encode tenancy explicitly:

import { createNotifyBridge } from 'svelte-adapter-uws-extensions/postgres';

const bridge = await createNotifyBridge(pg, {
  channel: `${tenantId}_orders_changed`,
  topic: 'orders'
});

The channel-name regex is [a-zA-Z_][a-zA-Z0-9_]* and reserved namespaces (pg_*, information_schema*) are rejected. Tenant IDs need to be sanitized at the application layer if they originate from user input.
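A sanitizer at the application layer might look like the following sketch, assuming the constraints stated above (tenantChannelName is hypothetical, not a package export):

```javascript
// Build a Postgres NOTIFY channel name from a tenant ID plus a fixed
// suffix, enforcing the identifier regex and reserved-namespace rules.
function tenantChannelName(tenantId, suffix) {
  // Tenant IDs often contain '-', which is outside the allowed alphabet.
  const cleaned = String(tenantId).replace(/[^a-zA-Z0-9_]/g, '_');
  const name = `${cleaned}_${suffix}`;
  if (!/^[a-zA-Z_][a-zA-Z0-9_]*$/.test(name)) {
    throw new Error(`invalid channel name: ${name}`);
  }
  if (/^pg_/i.test(name) || /^information_schema/i.test(name)) {
    throw new Error(`reserved namespace: ${name}`);
  }
  return name;
}

tenantChannelName('tenant-A', 'orders_changed'); // 'tenant_A_orders_changed'
```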

Postgres table-level extensions (createReplay, createIdempotencyStore, createJobQueue, createTaskRunner) use the table option to scope; combine a per-tenant prefix with separate table sets or namespaced rows. The simpler default is one table per tenant: ${tenantId}_svti_idempotency, etc.
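Under the one-table-per-tenant convention, the table names can be derived in one place so no call site builds them ad hoc. A sketch (tenantTables and the non-idempotency table names are assumptions extending the ${tenantId}_svti_idempotency pattern above):

```javascript
// Derive all per-tenant table names from one function so the naming
// convention lives in a single place.
function tenantTables(tenantId) {
  return {
    idempotency: `${tenantId}_svti_idempotency`,
    jobs: `${tenantId}_svti_jobs`,       // assumed name
    replay: `${tenantId}_svti_replay`,   // assumed name
  };
}

// createIdempotencyStore(pg, { table: tenantTables('tenant_a').idempotency })
```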

Identity-blind primitives

Every primitive in the extensions package is identity-blind: the caller has already authorized the action by the time it reaches the primitive. Tenant scoping is a special case of this contract.

  • createDistributedLock(...).withLock(key, fn) - the lock is on whatever key you pass. A wire-supplied key without tenant scope blocks the legitimate owner across the entire cluster.
  • createIdempotencyStore(...).acquire(idempotencyKey) - an idempotency-key collision reads another tenant’s cached commit.
  • createPresence(redis).join(ws, topic, platform) - presence groups by (topic, userKey); cross-tenant topic collision silently merges presence pools.
  • createReplay(redis).publish(topic, event, data) - replay buffer is per topic; cross-tenant topic collision merges buffer streams.

The framework will not invent a tenant ID. Derive backend keys from a server-trust identity prefix (the upgrade-handler’s tenantId) and never interpolate a wire field into a backend identifier without ownership-checking it first. See Authorization model for the canonical pattern.
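The derive-don't-trust rule can be expressed as a tiny helper that every call site goes through (scopedKey is hypothetical; the tenant ID is always the server-trust value, and the wire field is only ever the suffix):

```javascript
// Backend identifiers are built from the server-trust tenant ID; a
// wire-supplied fragment can never name another tenant's key outright.
function scopedKey(serverTenantId, fragment) {
  if (typeof fragment !== 'string' || fragment.length === 0) {
    throw new Error('invalid key fragment');
  }
  return `${serverTenantId}:${fragment}`;
}

// lock.withLock(scopedKey(ctx.user.tenantId, `order:${payload.id}`), fn)
```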

End-to-end wiring

A full per-tenant stack:

// src/lib/server/tenant.js
import { createRedisClient, createPubSubBus, createPresence, createReplay } from 'svelte-adapter-uws-extensions/redis';
import { createPgClient, createIdempotencyStore, createNotifyBridge } from 'svelte-adapter-uws-extensions/postgres';
import { createDistributedLock } from 'svelte-adapter-uws-extensions/redis/lock';

const stacks = new Map();

export function tenantStack(tenantId) {
  if (stacks.has(tenantId)) return stacks.get(tenantId);

  const redis = createRedisClient({ keyPrefix: `${tenantId}:` });
  const pg = createPgClient({ /* per-tenant connection string */ });

  const stack = {
    redis,
    pg,
    bus: createPubSubBus(redis, { channel: `${tenantId}:pubsub` }),
    presence: createPresence(redis, { key: 'id' }),
    replay: createReplay(redis, { maxPerTopic: 200 }),
    lock: createDistributedLock(redis, { ttlMs: 30_000 }),
    idempotency: createIdempotencyStore(pg, { table: `${tenantId}_idempotency` }),
    notifyBridge: createNotifyBridge(pg, {
      channel: `${tenantId}_orders_changed`,
      topic: 'orders'
    })
  };

  stacks.set(tenantId, stack);
  return stack;
}

Routing per inbound message:

// src/hooks.ws.js
import { tenantStack } from '$lib/server/tenant';
import { createMessage, close, unsubscribe } from 'svelte-realtime/server';

export function upgrade({ cookies }) {
  const session = validateSession(cookies.session_id);
  if (!session) return false;
  return { id: session.userId, tenantId: session.tenantId };
}

export const message = createMessage({
  platform: (platform) => ({
    ...platform,
    publish: (topic, event, data, opts) => {
      const tenantId = opts?.tenantId ?? platform.getUserData(opts?.ws)?.tenantId;
      return tenantStack(tenantId).bus.publish(topic, event, data);
    }
  })
});

export { close, unsubscribe };

In each live() handler, derive the per-tenant primitive from ctx.user.tenantId:

import { live, LiveError } from 'svelte-realtime/server';
import { tenantStack } from '$lib/server/tenant';

export const placeOrder = live(async (ctx, payload) => {
  if (!ctx.user?.tenantId) throw new LiveError('UNAUTHENTICATED');

  const { lock, idempotency } = tenantStack(ctx.user.tenantId);

  return lock.withLock(`order:${payload.id}`, async () => {
    const slot = await idempotency.acquire(payload.idempotencyKey);
    // ...
  });
});

Tenant ID always comes from ctx.user.tenantId (server-trust, set by upgrade()), never from payload.tenantId or any other wire field.

Operational checklist

Before going multi-tenant on a shared backend:

  • Every createPubSubBus / createShardedBus / createPublishRateAggregator constructor sets an explicit per-tenant channel / channelPrefix.
  • Every createNotifyBridge channel name is tenant-segmented at the application layer.
  • Every Postgres factory sets a tenant-prefixed table option, or rows are namespaced by a tenant column at the schema level.
  • No handler reads tenantId from a wire field without ownership-checking it first.
  • Topic names that include user-controlled segments are gated through mapTopic in Prometheus metrics to bound cardinality.
  • Health / status / shared-circuit-breaker keys are scoped via keyPrefix (covers them for free) or explicit per-tenant key construction.
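Several of these checks can fail fast at startup instead of relying on review. A sketch of a guard (assertTenantScoped is hypothetical) that rejects any configured channel or table name missing its tenant prefix:

```javascript
// Startup-time guard: every backend name must carry the tenant prefix,
// using ':' for Redis channels/keys and '_' for Postgres identifiers.
function assertTenantScoped(tenantId, names) {
  const bad = names.filter(
    (n) => !n.startsWith(`${tenantId}:`) && !n.startsWith(`${tenantId}_`)
  );
  if (bad.length > 0) {
    throw new Error(`unscoped backend names: ${bad.join(', ')}`);
  }
}

assertTenantScoped('tenant-A', ['tenant-A:pubsub', 'tenant-A_orders_changed']); // passes
// assertTenantScoped('tenant-A', ['uws:pubsub'])  → throws
```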
