Should I expect the same performance as a standard Java HashMap for in-memory reads/writes?


My use case is that I need to store millions of symbols in a map where the key is a String, e.g.

"IBM"

and the value is a JSON string with information about the symbol, e.g.

{ "Symbol": "IBM", "AssetType": "Common Stock", "Name": "International Business Machines Corporation" }

When using a persisted ChronicleMap to store 25 million such entries, I'm getting noticeably worse performance than with a standard Java HashMap.

Some ballpark numbers: inserting 25 million records into a HashMap takes about 70 seconds, vs. about 125 seconds for ChronicleMap. Reading all the entries back takes 5 seconds from the HashMap vs. 20 seconds from ChronicleMap.
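For context, the HashMap side of a timing run along these lines can be sketched as follows. This is a minimal sketch, not the original benchmark: N is reduced for illustration, and the key/value generation is a synthetic stand-in for the real symbol data.

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapTiming {
    public static void main(String[] args) {
        final int n = 100_000; // the question's test used 25 million
        Map<String, String> map = new HashMap<>(n * 2); // pre-size to avoid rehashing

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            // synthetic symbol + JSON value, standing in for the real data
            String symbol = "SYM" + i;
            map.put(symbol, "{ \"Symbol\": \"" + symbol + "\", \"AssetType\": \"Common Stock\" }");
        }
        long writeMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        long totalChars = 0;
        for (Map.Entry<String, String> e : map.entrySet()) {
            totalChars += e.getValue().length(); // touch every value so the read isn't optimized away
        }
        long readMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.println("entries=" + map.size() + " writeMs=" + writeMs + " readMs=" + readMs);
    }
}
```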

I set averageKey/averageValue to sensible values and generously sized entries to 50 million, as other posts suggested.
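A configuration sketch of that setup, assuming the Chronicle Map builder API (chronicle-map must be on the classpath; the map name, sample averages, and file name here are illustrative assumptions, not taken from the original code):

```java
import java.io.File;
import java.io.IOException;
import net.openhft.chronicle.map.ChronicleMap;

public class SymbolMapSetup {
    public static void main(String[] args) throws IOException {
        // averageKey/averageValue guide ChronicleMap's per-entry sizing;
        // the samples below are assumptions based on the data described above
        try (ChronicleMap<String, String> map = ChronicleMap
                .of(String.class, String.class)
                .name("symbol-map")                        // illustrative name
                .averageKey("IBM")
                .averageValue("{ \"Symbol\": \"IBM\", \"AssetType\": \"Common Stock\", "
                        + "\"Name\": \"International Business Machines Corporation\" }")
                .entries(50_000_000)                       // generously sized, as in the question
                .createPersistedTo(new File("symbols.dat"))) { // illustrative file name
            map.put("IBM", "{ \"Symbol\": \"IBM\" }");
        }
    }
}
```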

I'm really just asking what my expectations should be here. Are the ballpark figures above in line with what ChronicleMap should be capable of compared to a normal HashMap?

Or am I wrong to treat it like a normal HashMap? Do factors such as the size of the data I'm storing mean I should expect performance to differ between a standard HashMap and ChronicleMap?

1 answer below


That seems reasonable if you are persisting the data; a HashMap isn't persisted. ChronicleMap is capable of much higher throughput than that, though. I would look at how many threads you are using.

ChronicleMap is not a normal HashMap: it stores copies of the data off-heap, so it has no impact on GC pauses. There is a copy cost each way, however. Ideally, you would store the data as a Java object rather than JSON, but it should still work.