## Postgres Client

A factory that wraps a `pg` `Pool` with lifecycle management.
```js
// src/lib/server/pg.js
import { createPgClient } from 'svelte-adapter-uws-extensions/postgres';

export const pg = createPgClient({
  connectionString: 'postgres://localhost:5432/mydb'
});
```

### Options
| Option | Default | Description |
|---|---|---|
| `connectionString` | required | Postgres connection string |
| `autoShutdown` | `true` | Disconnect on `sveltekit:shutdown` |
| `options` | `{}` | Extra `pg` Pool options |
### API
| Method | Description |
|---|---|
| `pg.pool` | The underlying `pg` Pool |
| `pg.query(text, values?)` | Run a query |
| `pg.createClient()` | New standalone `pg.Client` with the same config (not from the pool) |
| `pg.end()` | Gracefully close the pool |
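`pg.query` composes well with small helpers. A minimal sketch, assuming `pg.query` resolves to a standard `pg` result object with a `rows` array (the helper name `queryOne` is illustrative, not part of this API):

```javascript
// Hypothetical helper: run a query and return the first row, or null
// when the query matched nothing. Assumes pg.query resolves to a pg
// result object with a `rows` array.
async function queryOne(pg, text, values) {
  const result = await pg.query(text, values);
  return result.rows[0] ?? null;
}

// Usage sketch:
// const user = await queryOne(pg, 'SELECT * FROM users WHERE id = $1', [42]);
```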
## LISTEN/NOTIFY Bridge

Listens on a Postgres channel for notifications and forwards them to `platform.publish()`. You provide the trigger on your table; this module handles the listening side.

It uses a standalone connection (not one from the pool), since `LISTEN` requires a persistent connection that stays open for the lifetime of the bridge.

**When to use this:** when you want database changes to become realtime WebSocket events automatically. Any `INSERT`, `UPDATE`, or `DELETE` on a table with a trigger fires a notification → the bridge parses it → `platform.publish()` pushes it to all subscribed clients. No polling, no manual publish calls in your application code.
### Setup
```js
// src/lib/server/notify.js
import { pg } from './pg.js';
import { createNotifyBridge } from 'svelte-adapter-uws-extensions/postgres/notify';

export const bridge = createNotifyBridge(pg, {
  channel: 'table_changes',
  parse: (payload) => {
    const row = JSON.parse(payload);
    return { topic: row.table, event: row.op, data: row.data };
  }
});
```

### Usage
```js
// src/hooks.ws.js
import { bridge } from '$lib/server/notify';

let activated = false;

export function open(ws, { platform }) {
  if (!activated) {
    activated = true;
    bridge.activate(platform);
  }
}
```

### Setting up the trigger
Create a trigger function and attach it to your table:
```sql
CREATE OR REPLACE FUNCTION notify_table_change() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('table_changes', json_build_object(
    'table', TG_TABLE_NAME,
    'op', lower(TG_OP),
    'data', CASE TG_OP
      WHEN 'DELETE' THEN row_to_json(OLD)
      ELSE row_to_json(NEW)
    END
  )::text);
  RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER messages_notify
AFTER INSERT OR UPDATE OR DELETE ON messages
FOR EACH ROW EXECUTE FUNCTION notify_table_change();
```

Now any `INSERT`, `UPDATE`, or `DELETE` on the `messages` table fires a notification. The bridge parses it and calls `platform.publish()`, which reaches all connected WebSocket clients subscribed to the topic.
The client side needs no changes: the core `crud('messages')` store already handles `created`, `updated`, and `deleted` events.
### Options
| Option | Default | Description |
|---|---|---|
| `channel` | required | Postgres `LISTEN` channel name |
| `parse` | JSON with `{ topic, event, data }` | Parse a notification payload into a publish call. Return `null` to skip. |
| `autoReconnect` | `true` | Reconnect on connection loss |
| `reconnectInterval` | `3000` | Milliseconds between reconnect attempts |
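A custom `parse` can route notifications to per-table topics and drop notifications you do not care about. A minimal sketch, assuming the payload shape produced by the trigger function above (the `audit_` filter and `table:` topic prefix are illustrative choices, not conventions of the module):

```javascript
// Hypothetical custom parse: publish per-table topics, skip audit tables.
// The payload shape ({ table, op, data }) matches the trigger shown above.
function parse(payload) {
  const row = JSON.parse(payload);
  if (row.table.startsWith('audit_')) return null; // null = do not publish
  return { topic: `table:${row.table}`, event: row.op, data: row.data };
}
```

Returning `null` tells the bridge to skip the notification entirely, so nothing is published for it.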
### API
| Method | Description |
|---|---|
| `activate(platform)` | Start listening (idempotent) |
| `deactivate()` | Stop listening and release the connection |
### Limitations
- Payloads are limited by Postgres to just under 8000 bytes (roughly 8 KB). For large rows, send only the row ID in the notification and let the client fetch the full row.
- It only fires from triggers. Changes made outside your app (manual SQL, migrations) are invisible unless you add triggers for those tables too.
- This is not logical replication. It is simpler, works on every Postgres provider, and needs no extensions or superuser access.
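The row-ID workaround looks like this on the listening side: a hedged sketch, assuming the trigger is changed to notify with only `table`, `op`, and `id` (this `parseIdOnly` function is hypothetical, not part of the module):

```javascript
// Hypothetical parse for id-only notifications: the trigger sends just the
// row id, keeping the payload well under the ~8000-byte NOTIFY limit.
// Clients receive only the id and fetch the full row themselves.
function parseIdOnly(payload) {
  const { table, op, id } = JSON.parse(payload);
  return { topic: table, event: op, data: { id } };
}
```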
## Replay Buffer (Postgres)

Same API as the Redis replay buffer, but backed by a Postgres table. Best suited for durable audit trails or history that needs to survive longer than Redis TTLs. Sequence numbers are generated atomically via a dedicated `_seq` table, so they are safe across multiple server instances.

**When to use over the Redis version:** when you want replay data to be truly persistent: queryable with SQL, backed up with your database, and not subject to Redis memory limits or TTL eviction. Ideal for audit logs, compliance trails, or any history you need to keep long-term.
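The atomic generation can be pictured as a single upsert against a per-topic counter table like `ws_replay_seq` (shown under Schema below). A hedged sketch of the pattern; the helper name and exact SQL are illustrative, not necessarily what the module runs, and it assumes `pg.query` resolves to a standard `pg` result:

```javascript
// Hypothetical sketch of atomic sequence allocation: one upsert increments
// the per-topic counter and returns the new value, so two server instances
// can never hand out the same sequence number for the same topic.
async function nextSeq(pg, topic) {
  const result = await pg.query(
    `INSERT INTO ws_replay_seq (topic, seq) VALUES ($1, 1)
     ON CONFLICT (topic) DO UPDATE SET seq = ws_replay_seq.seq + 1
     RETURNING seq`,
    [topic]
  );
  return Number(result.rows[0].seq); // pg returns BIGINT columns as strings
}
```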
### Setup
```js
// src/lib/server/replay.js
import { pg } from './pg.js';
import { createReplay } from 'svelte-adapter-uws-extensions/postgres/replay';

export const replay = createReplay(pg, {
  table: 'ws_replay',
  size: 1000,
  ttl: 86400, // 24 hours
  autoMigrate: true // auto-create the table
});
```

### Schema
The table is created automatically on first use (if `autoMigrate` is `true`):
```sql
CREATE TABLE IF NOT EXISTS ws_replay (
  id BIGSERIAL PRIMARY KEY,
  topic TEXT NOT NULL,
  seq BIGINT NOT NULL,
  event TEXT NOT NULL,
  data JSONB,
  created_at TIMESTAMPTZ DEFAULT now()
);

CREATE INDEX IF NOT EXISTS idx_ws_replay_topic_seq ON ws_replay (topic, seq);

CREATE TABLE IF NOT EXISTS ws_replay_seq (
  topic TEXT PRIMARY KEY,
  seq BIGINT NOT NULL DEFAULT 0
);
```

### Options
| Option | Default | Description |
|---|---|---|
| `table` | `'ws_replay'` | Table name |
| `size` | `1000` | Max messages per topic |
| `ttl` | `0` | Row expiry in seconds (`0` = never) |
| `autoMigrate` | `true` | Auto-create the table |
| `cleanupInterval` | `60000` | Periodic cleanup interval in ms (`0` to disable) |
### API
Same as the Redis replay buffer, plus:
| Method | Description |
|---|---|
| `publish(platform, topic, event, data)` | Store + broadcast |
| `seq(topic)` | Current sequence number |
| `since(topic, seq)` | Messages after a sequence number |
| `replay(ws, topic, sinceSeq, platform)` | Send missed messages to one client |
| `clear()` | Delete all replay data |
| `clearTopic(topic)` | Delete replay data for one topic |
| `destroy()` | Stop the cleanup timer |
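A typical use of `seq` and `replay` is catching a reconnecting client up before it resumes live traffic. A hedged sketch; the `resume` helper and the `ws.subscribe` call are assumptions about the surrounding uWS-style WebSocket API, not part of this module:

```javascript
// Hypothetical reconnect handler: subscribe first so no live message is
// missed, then replay everything after the client's last-seen sequence.
// A client may see a duplicate around the boundary; dedupe by seq.
async function resume(replay, ws, topic, lastSeq, platform) {
  ws.subscribe(topic);
  const current = await replay.seq(topic);
  if (lastSeq < current) {
    await replay.replay(ws, topic, lastSeq, platform);
  }
  return current;
}
```

Subscribing before replaying trades a possible duplicate message for a guarantee of no gap, which is usually the right default for event streams.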