We have data for around 20K merchants, around 3 MB in size. If we cache all of this data together as a single entry, Hazelcast performance is not good. Please note that if we cache all 20K merchants individually, the get-all-merchants call slows down, because reading each merchant from the cache separately costs a lot of network time. How should we partition this data? What should the partition key be? And what should the maximum size per partition be?
The Merchant entity has the following attributes: merchantId, parentMerchantId, name, address, contacts, status, type.
merchantId is the unique attribute.
Please suggest an approach.
I wouldn't worry about changing the partition key unless you have reason to believe the default partitioning scheme isn't giving you a good distribution of keys.
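That said, if you do decide you need to influence placement, for example to co-locate child merchants with their parent via parentMerchantId, Hazelcast lets a map key implement PartitionAware. A minimal sketch against the Hazelcast 4.x/5.x API (the MerchantKey class and its field choices are my own illustration, not something from your post; in 3.x the interface lives in com.hazelcast.core):

```java
import java.io.Serializable;

import com.hazelcast.partition.PartitionAware;

// Hypothetical composite key (illustration only): routing on parentMerchantId
// co-locates a parent merchant and all of its children on the same partition,
// so related lookups hit a single member.
public class MerchantKey implements PartitionAware<String>, Serializable {

    private final String merchantId;
    private final String parentMerchantId;

    public MerchantKey(String merchantId, String parentMerchantId) {
        this.merchantId = merchantId;
        this.parentMerchantId = parentMerchantId;
    }

    @Override
    public String getPartitionKey() {
        // Top-level merchants have no parent, so they route on their own id.
        return parentMerchantId != null ? parentMerchantId : merchantId;
    }
}
```

One caveat: Hazelcast compares map keys by their serialized form, so reads must construct the key with exactly the same field values that were used on the write.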
With 20K merchants and 3MB of data per merchant, your total data is around 60GB. How many nodes are you using for your cache, and how much memory does each node have? Distributing the cache across a larger number of nodes should give you more effective bandwidth.
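If you want to sanity-check how evenly the data is spread across your members, you can count partition ownership per member via the PartitionService. A rough sketch, again against the 4.x/5.x API (this starts a new member for brevity; in practice you'd run it against your existing cluster, e.g. from a client):

```java
import java.util.HashMap;
import java.util.Map;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.partition.Partition;

// Counts how many partitions each member owns. A roughly even count per
// member suggests the default partitioning scheme is distributing keys fine.
public class PartitionBalanceCheck {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        Map<String, Integer> ownedByMember = new HashMap<>();
        for (Partition p : hz.getPartitionService().getPartitions()) {
            // getOwner() may be null briefly while partitions are assigned
            ownedByMember.merge(String.valueOf(p.getOwner()), 1, Integer::sum);
        }
        ownedByMember.forEach((member, count) ->
                System.out.println(member + " owns " + count + " partitions"));
        hz.shutdown();
    }
}
```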
Make sure you're using an efficient serialization mechanism; the default Java serialization is very inefficient, both in object size and in serialization/deserialization speed. Using something like IdentifiedDataSerializable (if you're on Java) or Portable (if you have non-Java clients) could help a lot.
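For example, your Merchant entity might implement IdentifiedDataSerializable along these lines. This is a minimal sketch for Hazelcast 4.1+ (older versions use writeUTF/readUTF and getId() instead of getClassId()); the field names follow your post, but the ids are placeholders you'd choose yourself:

```java
import java.io.IOException;

import com.hazelcast.nio.ObjectDataInput;
import com.hazelcast.nio.ObjectDataOutput;
import com.hazelcast.nio.serialization.IdentifiedDataSerializable;

// Sketch only: a trimmed Merchant with hand-written serialization. The
// factory/class ids are placeholders; each class needs its own stable pair.
public class Merchant implements IdentifiedDataSerializable {

    public static final int FACTORY_ID = 1000; // placeholder value
    public static final int CLASS_ID = 1;      // placeholder value

    private String merchantId;
    private String parentMerchantId;
    private String name;
    private String status;

    public Merchant() {
        // no-arg constructor is required for deserialization
    }

    @Override
    public int getFactoryId() {
        return FACTORY_ID;
    }

    @Override
    public int getClassId() {
        return CLASS_ID;
    }

    @Override
    public void writeData(ObjectDataOutput out) throws IOException {
        out.writeString(merchantId);
        out.writeString(parentMerchantId);
        out.writeString(name);
        out.writeString(status);
    }

    @Override
    public void readData(ObjectDataInput in) throws IOException {
        merchantId = in.readString();
        parentMerchantId = in.readString();
        name = in.readString();
        status = in.readString();
    }
}
```

You'd also register a DataSerializableFactory for FACTORY_ID via SerializationConfig.addDataSerializableFactory(...) on both members and clients, so instances can be created without reflection.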