Why does Postgres sequence item go up even if object creation fails?


I have a Postgres database where one of my models is Client, indexed simply by its primary key. I was having an issue creating clients: at some point someone created a client while explicitly setting its primary key, which I have read does not advance Postgres' sequence for the clients table, the sequence responsible for auto-incrementing the primary key by 1 whenever a Client object is created.

I ran some SQL queries to play around with it and found that the current sequence value (262) was in fact 1 lower than the highest Client id in the database (263), so inserts failed because a Client with id 263 already existed. I tried creating a Client in our front-end application, got the error again, and decided to re-run the queries. As expected, no new client had been created in the database, but I noticed that the sequence value had gone up to 263, so when I tried creating a client again it worked!
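A check along these lines shows the mismatch (the names clients and clients_id_seq are assumptions; your table and sequence names may differ):

```sql
-- Compare the sequence's last value with the highest id actually in the table.
SELECT last_value FROM clients_id_seq;  -- e.g. 262
SELECT max(id) FROM clients;            -- e.g. 263
```

When `last_value` is less than or equal to `max(id)`, the next generated id will collide with an existing row.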

Is it normal behavior for a PostgreSQL sequence to increment even if creation of its related row fails? If so, it seems like that could cause some serious issues.

BEST ANSWER

Yes, this is the expected behaviour. See the docs:

nextval

Advance the sequence object to its next value and return that value. This is done atomically: even if multiple sessions execute nextval concurrently, each will safely receive a distinct sequence value.

If a sequence object has been created with default parameters, successive nextval calls will return successive values beginning with 1. Other behaviors can be obtained by using special parameters in the CREATE SEQUENCE command; see its command reference page for more information.

Important: To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used, even if the transaction that did the nextval later aborts. This means that aborted transactions might leave unused "holes" in the sequence of assigned values.
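The quoted behaviour is easy to reproduce in psql; a rolled-back insert still consumes a value (table and column names here are illustrative):

```sql
BEGIN;
INSERT INTO clients (name) VALUES ('doomed');  -- calls nextval() internally
ROLLBACK;                                      -- the row is undone...

INSERT INTO clients (name) VALUES ('kept');    -- ...but this row's id skips
                                               -- the value the aborted
                                               -- transaction consumed,
                                               -- leaving a "hole"
```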

Note that nextval is normally set as the default value for an autoincrement/serial column.
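Concretely, a serial column is shorthand for an integer column whose default is a nextval() call on an implicitly created sequence, roughly:

```sql
CREATE TABLE clients (id serial PRIMARY KEY, name text);

-- is roughly equivalent to:

CREATE SEQUENCE clients_id_seq;
CREATE TABLE clients (
    id integer PRIMARY KEY DEFAULT nextval('clients_id_seq'),
    name text
);
```

Because the sequence only runs when the DEFAULT is used, supplying an explicit id skips it entirely.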

Also, try to imagine how hard and inefficient it would be if nextval were rolled back. Essentially you would have to block every client on nextval until the whole transaction (the one that acquired the value) finished. In that case, forget about concurrent inserts.

If so it seems like that could cause some serious issues.

Like what? The issue in your case was that someone manually specified a value for an autoincrement column. You should never do that unless you are a samurai. :)
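For completeness, this is a sketch of how the situation in the question arises and the usual one-off repair (same assumed names as above):

```sql
-- Inserting with an explicit id bypasses the DEFAULT, so the sequence
-- is never advanced:
INSERT INTO clients (id, name) VALUES (263, 'manual');

-- Later, when the sequence catches up to 263, the generated id collides
-- with that row. The usual fix is to bump the sequence past the current
-- maximum id in the table:
SELECT setval('clients_id_seq', (SELECT max(id) FROM clients));
```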