We are planning to port our C++ in-memory database application to Java, and we are looking at Hazelcast as the in-memory DB solution on the Java side.
The required throughput for a system holding 40 TB of data is 30k reads and writes per second. Since the amount of data in memory is so large, we cannot afford the throughput hit of reloading it all after the system goes down.
Our in-house C++ implementation gives us the flexibility of storing this data in shared memory alongside the disk storage. When the application restarts, we can recover the data by re-attaching the process to the shared-memory file.
Is similar functionality available in Hazelcast? Or is there another in-memory data grid solution that offers it?
Currently Hazelcast doesn't have a disk-overflow feature, but our team is working on it and hopefully it will be available in Hazelcast 3.3.
For now, you need to use the MapLoader/MapStore interface: you can attach a custom implementation to a map instance and add the persistence yourself.
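As a rough sketch of what such a store could look like, here is a minimal file-backed class whose method names and shapes mirror Hazelcast's `com.hazelcast.core.MapStore` contract (`store`, `load`, `delete`, `loadAllKeys`). To keep the snippet dependency-free it does not actually declare `implements MapStore<String, String>`; in a real project you would add that declaration and register the class as the map store in your Hazelcast map configuration. The file-per-key layout is just an illustrative assumption, not a recommendation:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Sketch of a MapStore-style persistence adapter: one file per key.
// In a real project: `class FileMapStore implements MapStore<String, String>`.
class FileMapStore {
    private final Path dir;

    FileMapStore(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir);
    }

    // Called when an entry is written to the map: persist it to disk.
    void store(String key, String value) {
        try {
            Files.write(dir.resolve(key), value.getBytes(StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Called on a cache miss: load the entry back from disk, or null if absent.
    String load(String key) {
        Path p = dir.resolve(key);
        try {
            return Files.exists(p)
                    ? new String(Files.readAllBytes(p), StandardCharsets.UTF_8)
                    : null;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Called when an entry is removed from the map.
    void delete(String key) {
        try {
            Files.deleteIfExists(dir.resolve(key));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Called at startup so the grid can repopulate the map after a restart --
    // this is what gives you recovery without losing the persisted data.
    Iterable<String> loadAllKeys() {
        try (DirectoryStream<Path> s = Files.newDirectoryStream(dir)) {
            List<String> keys = new ArrayList<>();
            for (Path p : s) {
                keys.add(p.getFileName().toString());
            }
            return keys;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Note that a file-per-key store like this would not sustain 30k writes/sec over 40 TB; for that you would back the MapStore with your existing disk format or an embedded key-value store, but the interface you plug into Hazelcast stays the same.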