Distributed Lock
createDistributedLock(redis, options?) is the Redis-backed counterpart to the adapter’s createLock. Same withLock(key, fn) API shape; cluster-wide instead of in-process.
Setup
```js
import { createRedisClient } from 'svelte-adapter-uws-extensions/redis';
import { createDistributedLock } from 'svelte-adapter-uws-extensions/redis/lock';

const redis = createRedisClient();
const lock = createDistributedLock(redis, { ttlMs: 30_000 });
```
Usage
```js
await lock.withLock(`order:${orderId}`, async () => {
  const current = await db.getOrder(orderId);
  return processOrder(current);
}, { ttlMs: 60_000, maxWaitMs: 5_000 });
```
The lock is renewed in the background while the handler runs. If the handler holds the lock for longer than ttlMs and renewal fails, the call rejects with LockLostError and the handler receives an aborted signal.
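For long-running handlers it is worth checking that signal so work stops promptly once the lease is gone. A minimal sketch, assuming the handler is passed an AbortSignal as its argument (the exact parameter shape may differ); jobId, batches and processBatch are illustrative, not part of the library:

```js
await lock.withLock(`migration:${jobId}`, async (signal) => {
  for (const batch of batches) {   // batches: illustrative work queue
    // Bail out promptly if renewal failed and the lease was lost.
    if (signal?.aborted) return;
    await processBatch(batch);     // hypothetical helper
  }
}, { ttlMs: 120_000 });
```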
Errors
```js
import {
  LockAcquireTimeoutError,
  LockLostError,
  LockQueueFullError
} from 'svelte-adapter-uws-extensions/redis/lock';

try {
  await lock.withLock(key, fn, { maxWaitMs: 5000 });
} catch (err) {
  if (err instanceof LockAcquireTimeoutError) {
    // Did not acquire within maxWaitMs.
  } else if (err instanceof LockLostError) {
    // Lease expired mid-handler. Renewal failed; another worker may now hold the lock.
  } else if (err instanceof LockQueueFullError) {
    // Waiter queue is at capacity for this key. Sync reject, no wait.
    // err.code === 'LOCK_QUEUE_FULL', err.key, err.maxWaitersPerKey
  } else {
    throw err;
  }
}
```
API
| Method | Description |
|---|---|
| withLock(key, fn, opts?) | Run fn exclusively per key cluster-wide. Returns the handler’s return value. |
Options
| Option | Default | Description |
|---|---|---|
| ttlMs | 30_000 | Initial lease duration. |
| maxWaitMs | 5_000 | Max time to wait for the lock. Rejects with LockAcquireTimeoutError past this. |
| signal | unset | External AbortSignal. Rejects with LockAcquireTimeoutError on abort while waiting (see the example after this table). |
| renewMs | ttlMs / 3 | Renewal interval. |
| maxWaitersPerKey | 1000 | Flood control. Past this many pending waiters for the same key, new acquires reject synchronously with LockQueueFullError instead of growing the queue. Configurable per-call. Pass Infinity to disable. |
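An external signal is useful for tying acquisition to a caller-side deadline or request lifecycle. A minimal sketch, assuming AbortSignal.timeout is available (Node 17.3+); the key and finalizeInvoice handler are illustrative:

```js
// Give up on the lock entirely if the caller's own deadline expires first.
const deadline = AbortSignal.timeout(2_000);

await lock.withLock(`invoice:${invoiceId}`, async () => {
  await finalizeInvoice(invoiceId); // hypothetical handler
}, { maxWaitMs: 5_000, signal: deadline });
```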
Flood control
maxWaitersPerKey limits how many pending acquires can stack up behind a single hot key. Past the cap, new calls reject synchronously (no wait) with LockQueueFullError:
```js
import { LockQueueFullError } from 'svelte-adapter-uws-extensions/redis/lock';

try {
  await lock.withLock(`hot:${id}`, fn);
} catch (err) {
  if (err instanceof LockQueueFullError) {
    // err.key === `hot:${id}`
    // err.maxWaitersPerKey === 1000
    return reply.tooManyRequests();
  }
  throw err;
}
```
In-flight callers (the one currently holding the lock plus any waiters already in the queue) are unaffected. The cap kicks in only at admission: a new caller arrives, sees the queue is at capacity, and rejects immediately. This bounds the memory cost per hot key and prevents an unbounded buildup that would degrade the lock’s tail latency.
The default 1000 is well past what a healthy contention pattern produces. Keys consistently at the cap are usually a symptom of unbounded fan-in to a single sequenced resource - the right fix is application-level (sharding, queueing, or dropping the lock entirely), not raising the cap.
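If a single logical resource really does fan in from everywhere, spreading contention across a small fixed number of sub-keys is often enough. A minimal sketch of that idea, not part of the library’s API; the shard count and hash are illustrative:

```js
// Spread a hot key across a fixed number of shards so no single
// waiter queue absorbs all of the fan-in.
const SHARDS = 8;

function shardedKey(base, id) {
  // Cheap, stable hash; any consistent mapping from id to shard works.
  let h = 0;
  for (const ch of String(id)) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return `${base}:${h % SHARDS}`;
}

await lock.withLock(shardedKey('hot', id), fn);
```
This only helps when the protected work can actually be partitioned along the same boundary; otherwise queueing or dropping the lock, as noted above, is the better fix.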
Retry sleep jitter
The internal acquire loop sleeps retryDelayMs +/- 25% between SET NX attempts. With N callers contending on the same key, every lock-handoff would otherwise produce an N-way SET NX burst on the Redis socket - one winner, N-1 losers, all retrying again at the next tick. The jitter smears the retries over [0.75x, 1.25x] of the configured base, avoiding the stair-step latency pattern of synchronized retries. The expected wait is unchanged; the operator-visible win is a smoother Redis CPU profile under contention.
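The jittered delay is, in effect, the configured base scaled by a uniform random factor in [0.75, 1.25]. A minimal sketch of that calculation, not the library’s exact internals; retryDelayMs names the configured base:

```js
// Uniform jitter in [0.75x, 1.25x] of the base delay between SET NX attempts.
function jitteredDelay(retryDelayMs) {
  return retryDelayMs * (0.75 + Math.random() * 0.5);
}

// e.g. with a 100 ms base, retries land anywhere in [75 ms, 125 ms].
await new Promise((resolve) => setTimeout(resolve, jitteredDelay(100)));
```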
Metrics
| Metric | Type | Labels |
|---|---|---|
| lock_acquired_total | counter | key_class |
| lock_acquire_wait_ms | histogram | key_class |
| lock_acquire_timeouts_total | counter | key_class |
| lock_lost_total | counter | key_class |
key_class is the leading colon-delimited segment of the key (e.g. 'order', 'migration'), so per-key cardinality stays bounded.
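In other words, the label is presumably derived along these lines (a sketch, not the library’s exact code):

```js
// 'order:1234'      -> key_class 'order'
// 'migration:users' -> key_class 'migration'
const keyClass = key.split(':')[0];
```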
Backed by shared/lease-scripts.js (LEASE_RENEW_SCRIPT, LEASE_RELEASE_SCRIPT); the same lease primitives back createLeader.
See also
- Adapter
  - createLock - in-process variant with the same withLock API.
  - live.lock - realtime-side wrapper.