PostgreSQL: Transactions, Row Locks, and Advisory Locks

Mar 27, 2026

Postgres locking behavior is confusing the first time you learn it. There is a lot to consider, and it is easy to trip up. Below is a walkthrough of what transactions guarantee, where races still appear, how row-level and advisory locks differ, and how deadlocks, key packing, and fetch-lock-refetch show up in practice.

What Are Transactions?

Transactions group database work so that either every statement in the group succeeds, or none of them do. On failure, the database rolls back the partial work. That all-or-nothing property is what people mean when they say atomic.

For example, moving money between two accounts should debit one and credit the other in a single transaction:

BEGIN;
 
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
 
COMMIT;

If either UPDATE fails (or the connection drops before COMMIT), Postgres does not leave half the transfer applied unless you explicitly choose weaker semantics.

Transactions in Code

SQL alone cannot send email, call another service, or run arbitrary application code; that work generally lives in your program. Database drivers usually make it easy to run several statements in one transaction and put your own logic in between.

tx, _ := db.Begin()
defer tx.Rollback()
 
var id int64
var email string
tx.QueryRow(`
    SELECT id, email
    FROM users
    WHERE email_sent <> 'done'
    LIMIT 1`,
).Scan(&id, &email)
 
sendEmail(email)
 
tx.Exec(`
    UPDATE users
    SET email_sent = 'done'
    WHERE id = $1`, id,
)
tx.Commit()

Here we open a transaction, read one pending user, send mail, mark the row, and commit. The database still only makes the SELECT and UPDATE atomic together; it does not make “send exactly one email” atomic with those writes.

If the process crashes or the transaction rolls back after sendEmail but before COMMIT, the user still has the pending status. Rerunning the program, or an automatic retry, can load the same user again and send a second email, since the row was never marked done.

[Sequence diagram: the worker BEGINs, SELECTs the pending user (Alice), and calls sendEmail(Alice), then crashes before COMMIT. On retry it BEGINs again, SELECTs the same still-pending user, and sends the email to Alice a second time.]

Transactions coordinate database work. They do not by themselves give you application-level atomicity around side effects.

Transactional Race Conditions

Transactions do not eliminate every race condition, especially when application code sits between database reads and writes. A classic example: two workers each read a balance, check in application code whether the account has enough money to send $30, and then write a new balance back to the database. If the reads happen at the same time, both workers do their math on the same initial number, and the later write is based on a now-stale value.

[Sequence diagram: Transfer A and Transfer B each SELECT the balance and get $40, both approve a $30 debit, and both UPDATE balance = $10. With two approved $30 debits the balance should be -$20, but it is $10.]

With interleaving transactions as shown above, you can end up with inconsistent totals, overdrafts, or violations of invariants your application relies on—for example, that account balances stay non-negative—unless you add stronger coordination. This is sometimes called a check-then-act or a time-of-check to time-of-use (TOCTOU) race condition.
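The interleaving can be replayed deterministically in plain Go, no database involved (staleDebits is an illustrative name): both transfers read before either writes, so the second write is based on a stale value.

```go
// staleDebits replays the race in a fixed order: both transfers
// read the balance before either writes, so both see $40 and both
// approve a $30 debit. The second write clobbers the first using
// its stale read.
func staleDebits(balance int64) int64 {
	readA := balance // Transfer A: SELECT balance
	readB := balance // Transfer B: SELECT balance, same moment
	if readA >= 30 { // Transfer A: app-side check passes
		balance = readA - 30 // Transfer A: UPDATE
	}
	if readB >= 30 { // Transfer B: check passes on the stale read
		balance = readB - 30 // Transfer B: UPDATE overwrites A's debit
	}
	return balance
}
```

Starting from $40, two approved $30 debits should leave -$20 (or one should have been rejected), but this returns $10: one debit’s effect is simply lost.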

One fix is to push the invariant into the database so the check and write are not two separate round-trips that another session can slip between:

UPDATE accounts
SET balance = balance - 30
WHERE id = 1 AND balance >= 30;

With the guard against negative balances in our SQL statement, the check and write happen in one atomic query, so the second transfer cannot reuse a stale balance. Here are the same two 30-dollar debits from before, coming in at the same time.

The first statement updates the row and subtracts $30. The second runs the same UPDATE but matches no rows, so the balance in the table does not move again. The application code can detect that zero rows were updated and mark the $30 transfer as failed for insufficient funds.

[Sequence diagram: the balance starts at $40. Transfer A’s UPDATE ... SET balance = balance - 30 WHERE id = 1 AND balance >= 30 updates one row, leaving $10. Transfer B runs the same UPDATE, the WHERE clause is not met, the query succeeds but changes nothing: 0 rows updated, balance still $10.]
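A sketch of how the application side might drive the guarded UPDATE and inspect the affected-row count. tryDebit and its exec parameter are hypothetical names: in production, exec would wrap tx.Exec plus Result.RowsAffected; here it is injected so the logic stands alone.

```go
// tryDebit runs the guarded debit through exec, which stands in
// for a database Exec call reduced to (rows affected, error).
// It reports whether the debit actually happened: zero rows means
// the balance guard in the WHERE clause rejected it.
func tryDebit(exec func(query string, args ...any) (int64, error), id, amount int64) (bool, error) {
	rows, err := exec(`
        UPDATE accounts
        SET balance = balance - $1
        WHERE id = $2 AND balance >= $1`, amount, id)
	if err != nil {
		return false, err
	}
	return rows == 1, nil
}
```

The caller branches on the boolean instead of re-reading the balance, so there is no window for another session to slip between check and write.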

Not every invariant fits into a single SQL query; you may need external APIs, other services, or even other databases. Remember that a transaction only covers what runs inside that database session, so any work outside it needs its own coordination. Otherwise you can still race with those other systems.

The next example is a different failure mode: stale reads across a service boundary, not necessarily bad arithmetic on money.

Consider a pattern like this: read the user from Postgres, pull personalization information from a notification service, send an email, then record last_notified_at in the same transaction.

tx, _ := db.Begin()
defer tx.Rollback()
 
var username, email string
tx.QueryRow(`
    SELECT username, email
    FROM accounts
    WHERE id = $1`, id,
).Scan(&username, &email)
 
// lives outside the database
prefs, _ := notificationSvc.GetPreferences(id)
mailer.Send(email, buildEmail(username, prefs))
 
tx.Exec(`
    UPDATE accounts
    SET last_notified_at = NOW()
    WHERE id = $1`, id,
)
tx.Commit()

Postgres is only transactional for the SELECT and UPDATE queries on accounts. The call to notificationSvc is a separate system: another client can change preferences there while this request is in flight, and your transaction cannot detect, block, or roll back that change.

[Sequence diagram: client 1 BEGINs and SELECTs the account row (alice), then calls GetPreferences and gets colorTheme dark. Meanwhile another client saves {theme: 'light'} to the notification API. Client 1 sends the email with {theme: 'dark'}, then runs UPDATE last_notified_at and COMMITs.]

The database transaction still commits successfully. Nothing failed in Postgres. The bug is cross-service. The email used preference data that was no longer current in the other system, and a single database transaction does not cover that. Race conditions like this are frustrating to debug because you often do not get a failed query or a rollback, only the wrong outcome.

Types of Locks

Postgres, like most transactional databases, uses locks to coordinate concurrent access and prevent race conditions.

For the full list, see the Postgres documentation. Table-level locks, page-level locks, and a few other sub-types are not covered here because they come up less often.

Row Level Locks

Row level locks work on a particular existing row in your database. They do not cover rows that do not exist yet.

For Update

FOR UPDATE is a row-level writer lock. The transaction that claims it is guaranteed that the locked row will not be modified by anyone else for the duration of that transaction.

A query that uses a FOR UPDATE lock looks like this:

SELECT username, email FROM accounts WHERE id = $1 FOR UPDATE;

Row locks only coordinate work inside Postgres. They do not stop another client from calling the notification service, or any other HTTP API, while your transaction runs. They do stop a second transaction from changing the same row until you commit or roll back.

Suppose the risk is client 2 updating the account’s email while client 1 is between SELECT and COMMIT:

[Sequence diagram: client 1 BEGINs and runs SELECT ... FOR UPDATE, reading old@email.com. Client 2’s UPDATE of the same row’s email to new@email.com blocks until client 1 releases the lock. Client 1 sends the email to old@email.com and COMMITs; only then does client 2’s UPDATE run, setting new@email.com.]

With FOR UPDATE, no other transaction can commit a change to that row until client 1 finishes, so the sendEmail() uses the email value from the row that this transaction locked. It is consistent with what it read, with no concurrent writer slipping in between.

That is not the same as “newest email by wall-clock time everywhere”: if another session’s update represents newer intent, it waits behind the lock.

This solves the issue, but like all locks it limits throughput.

For Share

One way to improve concurrency, when a transaction does not need to write to a given row but still needs to make sure that row’s data has not changed while the transaction is running, is the FOR SHARE lock.

Queries that utilize them look like this:

SELECT *
FROM users
WHERE id = 1
FOR SHARE;

They work similarly to FOR UPDATE, but they are reader locks: multiple transactions can hold a shared lock on the same row at the same time, since none of them are writing to it. Writers still wait; an UPDATE or DELETE on that row blocks until the shared locks are gone.

[Sequence diagram: clients 1, 2, and 3 each run SELECT ... FOR SHARE on the same row. The locks are compatible, so all three hold a shared lock at once, then each COMMITs.]

These have the downside that they can starve writes to the row. Conflicting lock requests still go through Postgres’s usual wait queues, but a new FOR SHARE can be granted while a writer is already waiting, because shared locks are compatible with each other.

In a situation with high reader traffic you could run into a case where:

[Sequence diagram: reader 1 acquires a shared lock with SELECT ... FOR SHARE. The writer’s UPDATE blocks, waiting for the readers. Reader 2’s SELECT ... FOR SHARE skips ahead of the waiting writer. The readers COMMIT and the writer is finally unblocked, but more readers can keep arriving and repeat the cycle.]

New readers can jump in front of the waiting writer because shared locks are compatible with each other. This is called write starvation. Under enough read load, writes can be delayed indefinitely.

This tradeoff is an important one to consider that depends on your specific use case and your expected query patterns.

Advisory Locking

One downside to row-level locks is that they only apply to rows that already exist in your database. What if your domain is better suited to locking on something that isn’t represented by a row, or on a row that doesn’t exist yet?

For that we have advisory locks. They let you associate a lock with an application-chosen key (a bigint or an (int, int) pair), independent of whether a row exists for it yet. Session-scoped advisory locks can outlive a single transaction; transaction-scoped ones do not.

They look like this:

SELECT pg_advisory_lock(bigint);
 
SELECT pg_advisory_lock(int, int);

Say you had an open session that acquired this lock:

SELECT pg_advisory_lock(0);

Any other session trying to acquire key 0 will block until this one releases it.

They do solve real problems, like locking on things that are not rows or do not exist yet, but they need care because they are easy to misuse.

While row-level locks are database-enforced and guaranteed, advisory locks are entirely up to application logic to acquire. It’s easy to forget to acquire one before entering your critical section.

tx, _ := db.Begin()
defer tx.Rollback()
 
// Skip this and another caller can race.
tx.Exec(`SELECT pg_advisory_xact_lock($1)`,
    accountID)
 
var username, email string
tx.QueryRow(`
    SELECT username, email
    FROM accounts
    WHERE id = $1`, accountID,
).Scan(&username, &email)
 
// ... critical section ...
 
tx.Commit()

Session vs Transaction

There are two main types of advisory locks.

Session

Session locks must be manually released and are held for the entire duration of the connection. Since they need to be explicitly released even when transactions roll back or fail partway through, they’re easy to get wrong.

-- acquire; blocks until available
SELECT pg_advisory_lock(42);
 
-- must call this yourself
SELECT pg_advisory_unlock(42);

If your application returns an error before calling pg_advisory_unlock, the lock is held for the rest of the session, or forever on a connection pool that recycles connections.

func processAccount(db *sql.DB, accountID int64) error {
    tx, err := db.Begin()
    if err != nil { return err }
    defer tx.Rollback()
 
    // Session lock. Not tied to the transaction lifecycle.
    // Stays held until the connection closes, even on error.
    _, err = tx.Exec(`SELECT pg_advisory_lock($1)`, accountID)
    if err != nil { return err }
 
    if err := doSomething(); err != nil {
        // lock leaked; pg_advisory_unlock never runs
        return err
    }
 
    _, _ = tx.Exec(`SELECT pg_advisory_unlock($1)`, accountID)
    return tx.Commit()
}

Transaction

Transaction-scoped advisory locks are only held for the life of the current transaction. When the transaction commits, rolls back, or errors out, Postgres automatically releases them, so no manual cleanup is needed.

-- automatically released on commit or rollback
SELECT pg_advisory_xact_lock(42);

If that same control flow used pg_advisory_xact_lock instead of pg_advisory_lock, the advisory lock would go away when tx.Rollback() runs, because transaction-scoped locks end with the transaction.

Session locks do not; that is why the error path above can leak. Prefer transaction-scoped locks unless you have a specific reason to need session scope.

Avoiding Deadlocks

One of the major downsides to advisory locks is that it’s very easy to accidentally deadlock yourself.

A deadlock happens when two or more transactions each hold a lock that the other needs. Neither can proceed, so they wait forever.

[Sequence diagram: transaction 1 acquires lock A while transaction 2 acquires lock B. T1 then waits for lock B (held by T2) while T2 waits for lock A (held by T1). Neither can proceed: deadlock.]

Postgres has deadlock detection that will kill one of the transactions to break the cycle, but it is slow, and failed transactions are expensive.

Acquire Locks In Order

Acquiring locks in a predefined total order guarantees that at least one transaction can always make progress. The argument: a fixed order on lock IDs means the wait-for relationship can never form a cycle, so there is no circular wait, and therefore no deadlock.

[Sequence diagram: both transactions try for lock 1 first. Transaction 1 acquires lock 1, then lock 2, commits, and releases all locks. Transaction 2 waits on lock 1, then acquires locks 1 and 2 in turn and commits.]

Because both transactions try for lock 1 first, one of them always wins it and is free to continue. The other waits and eventually gets its turn. No deadlock possible.
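A minimal sketch of ordered acquisition, with the actual lock call injected. acquireInOrder is an illustrative name, not a Postgres API; in production the lock callback would run SELECT pg_advisory_xact_lock($1) on the transaction’s connection.

```go
import "sort"

// acquireInOrder copies and sorts the advisory-lock keys, then
// takes them smallest-first, so every transaction that uses it
// requests locks in the same global order and no cycle can form.
func acquireInOrder(keys []int64, lock func(key int64) error) error {
	sorted := append([]int64(nil), keys...) // leave the caller's slice alone
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	for _, k := range sorted {
		if err := lock(k); err != nil {
			return err
		}
	}
	return nil
}
```

Because the sort happens inside the helper, callers cannot accidentally pass keys in a conflicting order.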

Acquire the Lock Once

This is closely related to total ordering. If lock acquisition is split into separate phases, with other queries or logic interleaved between them, the ordering guarantee breaks down: there is no longer one consistent global sequence. When you already know the full key set, acquire all the locks you need in a single sorted batch.

The failure is still a deadlock cycle like the first diagram, but the trigger is different. Two batches of acquisitions can interleave so neither transaction holds the full sorted set from the start.

[Sequence diagram: in a first batch, transaction 1 takes lock A and transaction 2 takes lock B. In a second batch, after more queries, T1 needs B and blocks while T2 needs A and blocks. Same cycle as before, different cause.]

Packing Into Bigint

One problem with advisory locks is that they only accept a bigint or an (int, int) pair. This is limiting if your natural key is a string or two separate bigints.

Using a Hash

A hash function converts arbitrary input into a fixed-size output. We can hash a string (or any composite key) and interpret those bytes as a bigint advisory lock key.

import (
    "crypto/sha256"
    "encoding/binary"
)
 
func advisoryKey(s string) int64 {
    h := sha256.Sum256([]byte(s))
    return int64(binary.BigEndian.Uint64(h[:8]))
}

As long as you use the same hash algorithm and implementation, this is translatable across languages. You can hash a string like "user:42:payment" to always produce the same int64 key.

A good hash function makes collisions exceedingly rare, and even when they happen, over-locking is never a correctness problem: it just means two unrelated operations are briefly serialized. Frequent collisions would hurt throughput, but a well-distributed hash keeps them rare.

Bitmasking

If you have two int64 IDs, you can pack the lower 32 bits of each into a single bigint.

SELECT pg_advisory_xact_lock(
    (($1::bigint & 4294967295) << 32) |
     ($2::bigint & 4294967295)
);

Or in Go:

func packKeys(a, b int64) int64 {
    return ((a & 0xFFFFFFFF) << 32) | (b & 0xFFFFFFFF)
}

This uses the lower 32 bits of each ID rather than the upper 32 because the lower bits are the ones that vary when IDs are sequentially generated. If you used the upper bits instead, most sequential IDs would share the same value, so lots of unrelated locks would map to the same key and contend even though the underlying rows are completely independent. IDs above 2^32 can still collide under the mask, which, as with hashing, costs throughput rather than correctness.
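To see what the packing does, here is the same packKeys alongside a hypothetical inverse (unpackKeys is not from the original post). The round trip only recovers IDs that fit in 32 bits; anything larger is truncated by the mask, which at worst makes two keys collide.

```go
// packKeys repeats the function above; unpackKeys is its inverse
// for IDs that fit in 32 bits.
func packKeys(a, b int64) int64 {
	return ((a & 0xFFFFFFFF) << 32) | (b & 0xFFFFFFFF)
}

func unpackKeys(k int64) (a, b int64) {
	// >> 32 may sign-extend when a's bit 31 is set, so mask again
	// to recover the raw 32 bits.
	return (k >> 32) & 0xFFFFFFFF, k & 0xFFFFFFFF
}
```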

Fetch–Lock–Refetch Pattern

Sometimes you need to do an initial fetch just to figure out which rows you need to lock. That first fetch is “unsafe” because you have not acquired any locks yet, so its results might be stale by the time you act on them.

The pattern looks like this:

  1. Unsafe fetch: query to find the candidate rows you’ll need to lock.
  2. Acquire advisory locks: lock those rows in sorted order to avoid deadlocks.
  3. Refetch: re-query the same data now that you hold the locks.
  4. Compare: if the results match, proceed. You now have exclusive access and fresh data.

The refetch is necessary because lock acquisition can take an unbounded amount of time. Another transaction may have held the locks and modified the rows while you were waiting.
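The four steps can be sketched with the I/O injected. fetchLockRefetch is an illustrative name; fetch stands in for the candidate query and lock for SELECT pg_advisory_xact_lock($1). The caller compares the two snapshots to decide whether to proceed.

```go
import "sort"

// fetchLockRefetch runs the unsafe fetch, locks the candidate keys
// in sorted order, then fetches again under the locks. Both
// snapshots are returned so the caller can compare them.
func fetchLockRefetch(fetch func() []int64, lock func(key int64)) (before, after []int64) {
	before = fetch() // 1. unsafe fetch: may be stale by the time we hold locks
	sorted := append([]int64(nil), before...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	for _, k := range sorted {
		lock(k) // 2. sorted acquisition avoids deadlocks
	}
	after = fetch() // 3. refetch now that we hold the locks
	return before, after // 4. caller compares and decides
}
```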

Retry

If the refetch returns different results, the simplest option is to release all locks and restart the transaction from scratch. This is the easiest to implement and always correct, but under high contention it can hurt throughput.
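The retry wrapper can be as small as this. attempt is a hypothetical callback standing in for one full fetch-lock-refetch transaction, returning true when the refetch matched and the work committed.

```go
// retry reruns attempt from scratch until one pass succeeds,
// giving up after maxTries attempts so heavy contention cannot
// loop forever.
func retry(maxTries int, attempt func() bool) bool {
	for i := 0; i < maxTries; i++ {
		if attempt() {
			return true
		}
	}
	return false
}
```

A bounded attempt count (possibly with backoff between passes) keeps a hot key from pinning workers indefinitely.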

Lock De-escalation

If the refetch returns fewer rows than the initial fetch, it is generally safe to continue because you simply over-locked. There is no harm to correctness in holding more locks than necessary.

If you used session advisory locks (pg_advisory_lock), you can release the extras with pg_advisory_unlock to reduce contention. If you used transaction locks (pg_advisory_xact_lock) Postgres does not let you drop individual keys mid-transaction. You keep the extra locks until commit or rollback, or you abort and retry.

-- only for session-scoped locks you no longer need
SELECT pg_advisory_unlock(42);

Lock Escalation

If the refetch returns more rows than the initial fetch, your locks are under-scoped and it’s not safe to continue without acquiring the additional ones.

In this case, attempt to acquire the missing locks using the non-blocking try variant:

-- returns true if acquired, false if not
SELECT pg_try_advisory_xact_lock(42);

Because you’re acquiring additional locks in a second pass, the strict total ordering from your first acquisition no longer covers everything, which means deadlocks are possible again. Using pg_try_advisory_xact_lock means you fail fast instead of blocking. If it returns false, fall back to a full transactional retry from the beginning.
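The escalation pass might look like this, with tryLock standing in for SELECT pg_try_advisory_xact_lock($1); escalate is an illustrative name. Any false aborts the pass, and the caller falls back to a full retry.

```go
// escalate attempts each missing key with a non-blocking try-lock.
// It stops at the first failure and reports whether every key was
// acquired; on false the caller should abort and retry from scratch
// rather than block and risk a deadlock.
func escalate(missing []int64, tryLock func(key int64) bool) bool {
	for _, k := range missing {
		if !tryLock(k) {
			return false // someone else holds it; bail out fast
		}
	}
	return true
}
```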

Conclusion

For most cases involving existing rows, FOR UPDATE and FOR SHARE are the right tools. They are enforced by the database and automatically scoped to the transaction. Advisory locks are the right tool when you need to lock on something that is not a row: a logical resource, a not-yet-existing record, or an external concept your application cares about.

A few things to keep in mind:

  1. A transaction only makes the database work atomic. Side effects like email or calls to other services need their own coordination.
  2. Push invariants into a single statement (an UPDATE with a WHERE guard) where possible, so check and write cannot be split by another session.
  3. Use FOR UPDATE and FOR SHARE for rows that exist; reach for advisory locks when the thing you are locking is not a row.
  4. Prefer transaction-scoped advisory locks; session-scoped ones are easy to leak on error paths.
  5. Acquire locks in one sorted batch, and refetch after waiting on a lock before trusting earlier reads.

