Not able to clear Hibernate cache

I am using the Broadleaf demo application, which has Hibernate configured with Ehcache. I also have an external application that interacts with the same database directly. When I update the database using the external application, my Broadleaf application, unaware of those changes, throws duplicate primary key errors while creating new entities. I am trying to resolve this by periodically clearing the Hibernate cache so that Hibernate rebuilds it from scratch and everything syncs up. I am using the following code to clear the second-level cache:

Cache cache = sessionFactory.getCache();   // org.hibernate.Cache
String entityName = "someName";            // fully qualified entity class name
cache.evictEntityRegion(entityName);       // evicts the second-level cache region for this entity

But this doesn't seem to work.

I even tried to clear the cache manually through JMX tools like VisualVM, but that doesn't work either. I am still getting old primary key values in my APIs. Is this because only the second-level cache is being cleared, leaving the first-level cache intact? I am stuck here. Can anyone please help with this issue?
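
In case it matters, here is a rough sketch of what I think clearing both levels would look like (the class and method names are placeholders, and it assumes a current Session is bound to the calling thread):

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class CacheClearUtil {

    // Placeholder sketch: clear the first-level cache (the current Session's
    // persistence context) and then evict the second-level region for one entity.
    // entityName is the fully qualified entity class name.
    public static void clearBothLevels(SessionFactory sessionFactory, String entityName) {
        Session session = sessionFactory.getCurrentSession();
        session.clear();                                          // first-level cache
        sessionFactory.getCache().evictEntityRegion(entityName);  // second-level region
    }
}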

UPDATE: Let's say I have applications A and B. A uses Broadleaf and B uses raw SQL queries to insert into the database. I create a few orders using application A, then insert a few orders directly into the database using application B and also update the SEQUENCE_GENERATOR table with max(order_id) + 1. Afterward, when I try to create an order using application A, it throws a duplicate primary key exception. While debugging, I found that IdOverrideTableGenerator is still handing out the old primary key. This made me curious about the second-level cache. Doesn't Broadleaf use SEQUENCE_GENERATOR as the starting reference for primary key generation and maintain the current state in a cache? In my case, even updating SEQUENCE_GENERATOR doesn't ensure a fresh, unique primary key.
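
For clarity, the sequence bump that application B performs looks roughly like this (the ID_NAME/ID_VAL column names and the 'OrderImpl' segment value are my shorthand for whatever the actual schema and segment are):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ExternalSequenceBump {

    // Rough sketch of the raw update application B runs after its inserts.
    // Column names (ID_NAME, ID_VAL) and the segment value are assumptions
    // about the SEQUENCE_GENERATOR schema.
    public static void bumpOrderSequence(DataSource dataSource, long maxOrderId) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "UPDATE SEQUENCE_GENERATOR SET ID_VAL = ? WHERE ID_NAME = ?")) {
            ps.setLong(1, maxOrderId + 1);
            ps.setString(2, "OrderImpl"); // assumed segment name for orders
            ps.executeUpdate();
        }
    }
}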

Best answer:

You're correct that you need L2 cache invalidation for your external imports if you want your implementation to recognize the new entities at runtime. Otherwise, you would have to wait for the configured TTL on your cache region to expire before your application sees the new records.

However, the L2 cache doesn't have any direct correlation to how Hibernate determines primary keys in the case of Broadleaf. Broadleaf utilizes a table generator strategy for grabbing batches of ids in a performant and cluster-safe way. You have probably noticed a table named SEQUENCE_GENERATOR in your schema. This table contains the various id ranges that have been acquired for different domain classes. Whenever Hibernate needs a new batch of ids for insertions, it interacts with this table to check out a new range of ids. This should guarantee that no node in the cluster will try to insert an entity with a colliding id.
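
As a point of reference, the standard JPA analogue of this strategy looks something like the sketch below (not Broadleaf's actual mapping, which goes through IdOverrideTableGenerator; the column and segment names are assumptions):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.TableGenerator;

@Entity
public class ExampleOrder {

    // Sketch of a standard JPA table generator: ids are checked out from a
    // database table in batches of allocationSize and handed out from memory.
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "orderGen")
    @TableGenerator(
            name = "orderGen",
            table = "SEQUENCE_GENERATOR",  // the id-range table
            pkColumnName = "ID_NAME",      // assumed segment column
            valueColumnName = "ID_VAL",    // assumed value column
            pkColumnValue = "ExampleOrder",
            allocationSize = 50)           // batch of ids cached in memory
    private Long id;
}

The allocationSize batch is the key detail: once the running application has checked out a range, it keeps handing out ids from that in-memory range regardless of what the table now says, which would explain why bumping SEQUENCE_GENERATOR externally doesn't take effect right away.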

In your case, you need to guarantee that an external process can perform insertions in a non-colliding manner. To do so, I believe you need to create an API for the external process to call that will perform this same "id checkout" operation on behalf of that calling process. Then, your import code (presumably housed elsewhere) will have a range of ids it can safely use. The code backing the API you create should perform the same operation that Hibernate would normally perform to acquire a batch of ids for entity insertions. You can review org.hibernate.id.enhanced.TableGenerator for an example of what this looks like and create something similar for your own purposes.