I recently encountered an error in my application with concurrent transactions. Previously, auto-incrementing the compound key was implemented in the application itself, in PHP. However, as I mentioned, the id got duplicated, and all sorts of issues happened which I painstakingly fixed manually afterward.

Now I have read about related issues and found suggestions to use a trigger, so I am planning on implementing one somewhat like this:
```sql
DELIMITER $$
CREATE TRIGGER auto_increment_my_table
BEFORE INSERT ON my_table FOR EACH ROW
BEGIN
    -- The subquery must be parenthesized; IFNULL covers the first row of a type
    SET NEW.id = (SELECT IFNULL(MAX(id), 0) + 1 FROM my_table WHERE type = NEW.type);
END $$
DELIMITER ;
```
But my doubt regarding concurrency still remains: what if this trigger is executed concurrently for two inserts, and both read the same `MAX(id)`?

Is this the correct way to handle my issue, or is there a better way?
I have managed to solve this issue.

The answer was somewhat in the direction of Akina's answer, but not quite exactly. The way I solved it did indeed involve an additional table, but not the way he suggested.

I created an additional table to store metadata about the transactions. E.g., for my `journals` table, I created a `meta_journals` table and seeded it with all the different types of journals and the next sequence number for each type.
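A sketch of what such a metadata table could look like. The column names `type` and `next_trans_no` come from the description below; the exact column types, and the assumption that `journals` carries a per-type `trans_no` column, are mine:

```sql
-- Hypothetical sketch: one row per journal type, holding the next sequence number.
CREATE TABLE meta_journals (
    type          VARCHAR(32)  NOT NULL PRIMARY KEY,  -- journal type
    next_trans_no INT UNSIGNED NOT NULL DEFAULT 1     -- next per-type number to hand out
) ENGINE=InnoDB;

-- Seed it from the existing data: one row per type,
-- starting just past the highest number already used.
INSERT INTO meta_journals (type, next_trans_no)
SELECT type, IFNULL(MAX(trans_no), 0) + 1
FROM journals
GROUP BY type;
```

InnoDB matters here: the row-level exclusive locks this approach relies on are an InnoDB feature.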
Whenever I insert a new transaction into `journals`, I make sure to increment the `next_trans_no` of the corresponding `type` in the `meta_journals` table. This increment is issued inside the same database `TRANSACTION`, i.e. between the `BEGIN` and the `COMMIT`.

This allowed me to use the exclusive row lock acquired by the `UPDATE` statement on the corresponding row of the `meta_journals` table. So when two insert statements for the same journal type are issued concurrently, one has to wait until the lock acquired by the other transaction is released by its `COMMIT`.
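Put together, one insert roughly follows this flow (a sketch under my assumptions above; the `journals` columns and the `'sales'` type are illustrative):

```sql
START TRANSACTION;

-- The UPDATE takes an exclusive lock on this type's row;
-- a concurrent insert for the same type blocks here until we COMMIT.
UPDATE meta_journals
SET next_trans_no = next_trans_no + 1
WHERE type = 'sales';

-- Read back the number we just reserved.
SELECT next_trans_no - 1 INTO @trans_no
FROM meta_journals
WHERE type = 'sales';

INSERT INTO journals (type, trans_no, amount)
VALUES ('sales', @trans_no, 100.00);

COMMIT;
```

Because the reserved number is only read after the locking `UPDATE`, two concurrent transactions can never observe the same value, which is exactly the race the trigger-based approach could not rule out.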