Postgres’s transactional semantics are really useful when building a queue, because the notify, the row locking, and the state updates all participate in the same transaction.
Connection 1
LISTEN "job-updates";
Connection 2
BEGIN;
INSERT INTO jobs (id, …) VALUES ('a-uuid', …);
SELECT pg_notify('job-updates', 'json blob containing uuid and state change info');
COMMIT;
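For the snippets above to run, the jobs table needs at least an id and a state column. A minimal sketch of what it might look like (column names beyond id and state are illustrative, not from the original):

```sql
-- Hypothetical schema; adjust names and types to taste.
CREATE TABLE jobs (
    id         uuid PRIMARY KEY,
    state      text NOT NULL DEFAULT 'pending',
    payload    jsonb,
    updated_at timestamptz NOT NULL DEFAULT now()
);
```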
Connection 3 (used when Connection 1 is notified)
BEGIN;
SELECT id, … FROM jobs WHERE id = 'a-uuid' FOR UPDATE SKIP LOCKED;
UPDATE jobs SET state = 'step1_completed' WHERE id = 'a-uuid';
SELECT pg_notify('job-updates', 'json blob containing uuid and state change info');
-- do the thing here: computation, calling external API, etc. If it fails then rollback.
COMMIT;
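The claim step above targets one specific uuid, but the same FOR UPDATE SKIP LOCKED trick lets a pool of workers each grab whichever job is ready, which eases the contention on any single row. A sketch, assuming a state column:

```sql
-- Each worker claims one unclaimed job. SKIP LOCKED means a worker
-- silently skips rows a peer has already locked instead of blocking.
SELECT id
FROM jobs
WHERE state = 'pending'
ORDER BY id
FOR UPDATE SKIP LOCKED
LIMIT 1;
```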
Because notify has transactional semantics, the notify only goes out at transaction commit time. You want to use a dedicated connection for listening for the notifies, since that connection spends its time waiting on the channel.
The only downsides that immediately come to mind are that every worker will contend to lock that row, and you’ll need to write periodic jobs to clean up and retry failures.
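A hedged sketch of what such a periodic cleanup/retry job might look like, assuming the illustrative state and updated_at columns (both hypothetical, not from the original):

```sql
-- Re-queue jobs stuck in an intermediate state for too long,
-- e.g. because a worker crashed before finishing the next step.
UPDATE jobs
SET state = 'pending', updated_at = now()
WHERE state = 'step1_completed'
  AND updated_at < now() - interval '10 minutes';
```

Run on a cron-like schedule; the exact timeout and states depend on how long your steps legitimately take.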