# Leader Election

`createLeader(redis, options?)` is a Redis-lease leader-election primitive. It plugs into `live.configureCron({ leader })` so that a clustered deployment fires each cron tick once cluster-wide instead of once per worker. It is also useful for any workflow that needs cluster-wide singleton semantics (advisory single-runner jobs, scheduled migrations, and so on).

## Setup

```js
import { createRedisClient } from 'svelte-adapter-uws-extensions/redis';
import { createLeader } from 'svelte-adapter-uws-extensions/redis/leader';

const redis = createRedisClient();
const leader = await createLeader(redis, {
  key: 'my-app:cron-leader',
  leaseTtlMs: 30_000,
  renewMs: 10_000
});
```
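Once created, the instance competes for the lease and renews it in the background. Both status accessors come from the API below; a minimal check:

```js
if (leader.isLeader()) {
  // This instance currently holds the lease.
}
const holder = await leader.currentLeader(); // instance id string, or null
```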

## Usage with cron

```js
// hooks.ws.js
import { configureCron } from 'svelte-realtime/server';
import { createRedisClient } from 'svelte-adapter-uws-extensions/redis';
import { createLeader } from 'svelte-adapter-uws-extensions/redis/leader';
import { createPubSubBus } from 'svelte-adapter-uws-extensions/redis/pubsub';

let leader, bus;

export async function init({ platform }) {
  const redis = createRedisClient();
  leader = await createLeader(redis, { key: 'cron-leader' });
  bus = await createPubSubBus(redis);

  configureCron({
    // Evaluated on every tick; only the current lease holder fires the jobs.
    leader: () => leader.isLeader(),
    bus
  });
}

export async function shutdown() {
  await leader?.stop();
}
```

`isLeader()` is synchronous (microsecond cost) and safe to call from a tight loop. The lease is renewed in the background.
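Because the check is a plain boolean read, it can also gate non-cron singleton work directly. A minimal sketch, where `runNightlyCleanup` is a hypothetical job function standing in for your own:

```js
// Advisory single-runner job: only the current lease holder does the work.
// `runNightlyCleanup` is a hypothetical placeholder, not part of the library.
setInterval(() => {
  if (!leader.isLeader()) return; // sync, microsecond-cost check
  runNightlyCleanup().catch((err) => console.error('cleanup failed', err));
}, 60_000);
```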

## API

| Method | Description |
| --- | --- |
| `isLeader()` | Sync getter. `true` if this instance currently holds the lease. |
| `currentLeader()` | `Promise<string \| null>`. Returns the holder's instance id, or `null` if there is no holder. |
| `stop()` | Releases the lease cleanly; safe to call from shutdown. Lua-atomic `if get == instanceId then del end`; the compare guard means we never accidentally delete a sibling's lease. |
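The release script described for `stop()` has roughly this shape — a sketch, not the actual source; the real script ships as `LEASE_RELEASE_SCRIPT` in `shared/lease-scripts.js`. It is shown embedded as a JS string, the way Redis Lua scripts are usually carried:

```js
// Compare-guarded delete: only removes the lease if we still own it.
const LEASE_RELEASE_SKETCH = `
  if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
  end
  return 0
`;
// Evaluated with the lease key and this instance's id, e.g.
// await redis.eval(LEASE_RELEASE_SKETCH, 1, key, instanceId);
```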

## Failure model: fail-closed

A renewal that throws (Redis disconnect, breaker open, network partition) drops `_isLeader` to `false` and surfaces the error to `onError`. The renewal interval keeps ticking so leadership can recover when Redis recovers. Errors never escape the interval, so a Redis blip cannot crash the worker.
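In code, the fail-closed loop looks roughly like this (a sketch; `renewLease`, `onError`, and the flag name are illustrative, not the actual source):

```js
let isLeader = false;
setInterval(async () => {
  try {
    // Compare-guarded renew: only extends the lease if we still hold it.
    isLeader = await renewLease();
  } catch (err) {
    isLeader = false; // fail closed: any Redis error drops leadership
    onError?.(err);   // surfaced to the caller, never thrown from the tick
  }
  // The interval keeps ticking, so leadership recovers when Redis does.
}, renewMs);
```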

Across the cluster, a partitioned Redis means the lease expires server-side and no worker holds leadership until the partition heals: jobs miss ticks rather than double-fire. That trade-off is deliberate: for most cron consumers, missing one tick is acceptable while running a job twice is not.

The compare-on-mutate guard on both renew and release means a stale tick from a worker that has already lost leadership cannot extend or release somebody else's lease. It has the same shape and uses the same shared Lua scripts as `redis/lock`'s heartbeat, intentionally identical so the two primitives cannot drift on lease semantics.

GC-pause caveat. A long stop-the-world pause on the leader can cause brief overlap with a freshly elected successor. Make jobs idempotent at the consumer (`live.idempotent` or an external dedup store; see the sketch below). This primitive does NOT provide fencing tokens (consumer sinks for cron-style work rarely have the machinery to consume them anyway). If you need fencing for a cross-worker write path, layer `createDistributedLock` (which carries a `LockLostError` signal) on top of the leader-gated cron tick.
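For the external-dedup-store option, a sketch using a plain Redis `SET ... NX EX` (the key shape, TTL, and ioredis-style `set` signature are assumptions, not a library API):

```js
// Run `fn` at most once per jobId across the cluster, even if two workers
// briefly overlap as leader. First claim wins; the TTL bounds key growth.
async function runOnce(redis, jobId, fn) {
  const claimed = await redis.set(`job:done:${jobId}`, '1', 'EX', 3600, 'NX');
  if (claimed) await fn(); // losers see null and skip
}
```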

## Options

| Option | Default | Description |
| --- | --- | --- |
| `key` | required | Redis key for the lease. Choose one per logical singleton (`'cron-leader'`, `'migration-leader'`). |
| `leaseTtlMs` | `30_000` | Lease lifetime. If the leader is lost, re-election happens this many ms after its last renewal. |
| `renewMs` | `10_000` | Renewal interval. Keep it at or below `leaseTtlMs / 3` so a couple of renewals can fail before the lease expires. |
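As a worked example of how the two knobs interact: with the defaults, a crashed leader is replaced at most 30 s after its last renewal. Tightening both values trades Redis traffic for faster failover:

```js
// Faster failover: a dead leader is replaced within ~9 s, and renewal can
// still fail twice (at 3 s and 6 s) before the 9 s lease expires.
const leader = await createLeader(redis, {
  key: 'my-app:cron-leader',
  leaseTtlMs: 9_000,
  renewMs: 3_000
});
```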

## Metrics

When wired with Prometheus:

| Metric | Type | Labels |
| --- | --- | --- |
| `leader_acquired_total` | counter | `key_class` |
| `leader_lost_total` | counter | `key_class` |
| `leader_renewals_total` | counter | `key_class` |
| `leader_renewal_failures_total` | counter | `key_class` |

`key_class` is the leading colon-segment of `key` (e.g. `'cron-leader'`, `'migration'`), so per-key cardinality stays bounded.

Backed by `shared/lease-scripts.js` (`LEASE_RENEW_SCRIPT`, `LEASE_RELEASE_SCRIPT`); the same lease primitives back `createDistributedLock`.
