I have a table, Foo, to which I add rows on certain events. The overall design is such that duplicate messages cannot be avoided, which results in duplicate rows being added to the table.
I cannot put a unique constraint on the table because different types of messages become rows in it, and I want to avoid duplicates only for one specific type of message.
Since the duplicate messages often arrive concurrently, and the application runs on multiple nodes, I decided to use Redisson to take a distributed lock. However, it does not seem to be working: I am still getting duplicate rows in the table.
Duplicate messages are detected based on userId, date, and type. Below is minimal demo code. I am trying to do a read before the write, and this read happens in a block that is synchronized across application nodes.
Appreciate any inputs on this.
if (updateEntry.getType().equals(Type.XYZ)) {
    java.sql.Date date = updateEntry.getDate();
    String redisLockKey = "MyAPP" + "-" + userId + "-" + date.toString() + "-" + "type-XYZ";
    RLock rLock = redissonClient.getLock(redisLockKey);
    // 5-second lease: Redisson auto-releases the lock after this interval,
    // even if the critical section below has not finished yet.
    rLock.lock(5, TimeUnit.SECONDS);
    try {
        MyEntity myEntity = myEntityRepository.findByUserIdAndDateAndActivityType(userId, date, Type.XYZ);
        if (null == myEntity) {
            myEntity = new MyEntity();
            // myEntity setters
            myEntity = myEntityRepository.saveAndFlush(myEntity);
        }
    } finally {
        rLock.unlock();
    }
}
I found the issue. The code block above was inside a @Transactional method. The lock was released before the surrounding transaction committed, so a second node could acquire the lock and run the existence check while the first node's row was still uncommitted and invisible; with Spring's default isolation, which is REPEATABLE_READ on MySQL, the read also ran against the transaction's snapshot rather than the latest committed data. Taking the lock outside the transaction, so that the insert commits before the lock is released, fixed the issue.
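To make the failure mode concrete, here is a minimal plain-Java simulation of that race (no Spring, Redisson, or MySQL involved: a ReentrantLock stands in for the Redisson lock, an in-memory list stands in for the committed table, and a per-thread buffer stands in for the uncommitted transaction; all class and variable names are hypothetical). Latches force the problematic interleaving deterministically: the first worker unlocks before "committing", so the second worker's existence check still sees an empty table.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

// Simulates why locking INSIDE @Transactional still yields duplicates:
// the lock is released before the transaction commits, so a second worker
// can pass the existence check before the first worker's row is visible.
public class LockInsideTxDemo {

    static final List<String> committedRows = new ArrayList<>(); // the "table"
    static final ReentrantLock lock = new ReentrantLock();       // stands in for the Redisson lock
    static final CountDownLatch firstUnlocked = new CountDownLatch(1);
    static final CountDownLatch secondCommitted = new CountDownLatch(1);

    // Buggy ordering: lock -> read -> buffered insert -> unlock -> commit.
    static void insertLockInsideTx(String row, boolean isFirst) throws InterruptedException {
        List<String> txBuffer = new ArrayList<>(); // uncommitted writes of this "transaction"
        lock.lock();
        try {
            boolean exists;
            synchronized (committedRows) { exists = committedRows.contains(row); }
            if (!exists) txBuffer.add(row);        // buffered: not yet visible to other workers
        } finally {
            lock.unlock();                         // lock released BEFORE the commit below
        }
        if (isFirst) {
            firstUnlocked.countDown();             // let the second worker run now
            secondCommitted.await();               // and commit only after it has finished
        }
        synchronized (committedRows) { committedRows.addAll(txBuffer); } // the "commit"
        if (!isFirst) secondCommitted.countDown();
    }

    public static void main(String[] args) throws Exception {
        Thread first = new Thread(() -> {
            try { insertLockInsideTx("user1-2024-01-01-XYZ", true); }
            catch (InterruptedException ignored) { }
        });
        Thread second = new Thread(() -> {
            try {
                firstUnlocked.await();             // start only after the first worker unlocked
                insertLockInsideTx("user1-2024-01-01-XYZ", false);
            } catch (InterruptedException ignored) { }
        });
        first.start(); second.start();
        first.join(); second.join();
        System.out.println("rows = " + committedRows.size()); // prints rows = 2 : duplicate
    }
}
```

Moving the unlock after the commit (or, in Spring terms, taking the lock in a non-transactional method that delegates to a separate @Transactional bean, so the proxy commits before the lock is released) makes the second worker's read see the first worker's row and yields a single row.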