Getting org.rocksdb.RocksDBException: bad entry in block

I'm using RocksDB to store data where the key is a string and the value is an integer. Recently my application threw the following exception while writing to RocksDB.

java.lang.Exception: org.rocksdb.RocksDBException: bad entry in block
    at com.techspot.store.RocksStore.encodePacket(RocksStore.java:684) ~[techspot-encoder-dev.jar:?]
    at com.techspot.store.workers.PacketEncoder.run(PacketEncoder.java:67) [techspot-encoder-dev.jar:?]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_212]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Caused by: org.rocksdb.RocksDBException: bad entry in block
    at org.rocksdb.RocksDB.get(Native Method) ~[rocksdbjni-6.13.3.jar:?]
    at org.rocksdb.RocksDB.get(RocksDB.java:1948) ~[rocksdbjni-6.13.3.jar:?]
    at com.techspot.store.RocksStore.packetExists(RocksStore.java:402) ~[techspot-encoder-dev.jar:?]
    at com.techspot.store.RocksStore.encodePacket(RocksStore.java:634) ~[techspot-encoder-dev.jar:?]
    ... 6 more
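
For reference, the call that throws is essentially a point lookup by string key, similar to the sketch below (method and key names are illustrative, not the actual application code):

import java.nio.charset.StandardCharsets;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Illustrative only: packetExists() in the trace wraps a point lookup like this;
// the "bad entry in block" status is raised from the native db.get() call.
boolean packetExists(RocksDB db, String key) throws RocksDBException {
    return db.get(key.getBytes(StandardCharsets.UTF_8)) != null;
}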

The error first occurred when the database held roughly 350 million entries and was about 18 GB on disk. The issue is hard to reproduce; I tried to trigger it again by inserting roughly 700 million entries but could not. I'm using RocksDB version 6.13.3 with the following options:

import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.CompactionStyle;
import org.rocksdb.CompressionType;
import org.rocksdb.Options;

Options options = new Options();
BlockBasedTableConfig blockBasedTableConfig = new BlockBasedTableConfig();
blockBasedTableConfig.setBlockSize(16 * 1024); // 16 KB block size

options.setWriteBufferSize(64 * 1024 * 1024); // 64 MB memtable
options.setMaxWriteBufferNumber(8);
options.setMinWriteBufferNumberToMerge(1);

options.setTableCacheNumshardbits(8);
options.setLevelZeroSlowdownWritesTrigger(1000);
options.setLevelZeroStopWritesTrigger(2000);
options.setLevelZeroFileNumCompactionTrigger(1);

options.setCompressionType(CompressionType.LZ4_COMPRESSION);
options.setTableFormatConfig(blockBasedTableConfig);
options.setCompactionStyle(CompactionStyle.UNIVERSAL);
options.setCreateIfMissing(Boolean.TRUE);

options.setEnablePipelinedWrite(true);
options.setIncreaseParallelism(8);
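
For completeness, the database is opened and written to roughly as follows; this is a minimal sketch with a hypothetical path and key, assuming the integer value is serialized to a 4-byte array before the put:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Minimal usage sketch: string key -> integer value.
// The path, key, and serialization here are assumptions for illustration.
try (RocksDB db = RocksDB.open(options, "/path/to/rocksdb")) {
    byte[] key = "some-packet-id".getBytes(StandardCharsets.UTF_8);
    byte[] value = ByteBuffer.allocate(Integer.BYTES).putInt(42).array();
    db.put(key, value);
} catch (RocksDBException e) {
    e.printStackTrace();
}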

Does anyone have any idea what might be causing this exception?
