kafka streams - number of open file descriptors keeps going up


Our Kafka Streams app keeps opening new file descriptors as long as there are new incoming messages, without ever closing old ones. It eventually leads to an exception. We've raised the limit of open fds to 65k, but it doesn't seem to help.
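To confirm the leak we've been watching the descriptor count grow over time. A minimal check (assuming Linux and /proc; substitute the Streams JVM's pid for $$, which here is just the current shell) is:

```shell
# Count open file descriptors for a process.
# Replace $$ with the pid of the Kafka Streams JVM, e.g. from `jps` or `pgrep java`.
ls /proc/$$/fd | wc -l
```

Running this periodically (or with `watch`) shows the count climbing steadily; most of the entries point at RocksDB .sst files under /tmp/kafka-streams.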

Both the Kafka broker and the Kafka Streams library are version 2.1.

The error message that keeps showing up in the logs is:

org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:747)
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:777)
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:883)
org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:1029)
org.apache.kafka.streams.processor.internals.TaskManager.commitAll(TaskManager.java:405)
org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:346)
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:431)
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:443)
org.apache.kafka.streams.processor.internals.StreamTask.flushState(StreamTask.java:491)
org.apache.kafka.streams.processor.internals.AbstractTask.flushState(AbstractTask.java:204)
org.apache.kafka.streams.processor.internals.ProcessorStateManager.flush(ProcessorStateManager.java:217)
org.apache.kafka.streams.state.internals.MeteredKeyValueStore.flush(MeteredKeyValueStore.java:226)
org.apache.kafka.streams.state.internals.WrappedStateStore$AbstractStateStore.flush(WrappedStateStore.java:85)
org.apache.kafka.streams.state.internals.RocksDBStore.flush(RocksDBStore.java:388)
org.apache.kafka.streams.state.internals.RocksDBStore.flushInternal(RocksDBStore.java:395)
org.rocksdb.RocksDB.flush(RocksDB.java:1743)
org.rocksdb.RocksDB.flush(RocksDB.java)
org.rocksdb.RocksDBException: While open a file for appending: /tmp/kafka-streams/s4l-notifications-test/5_1/rocksdb/main-store/002052.sst: Too many open files
status: #object[org.rocksdb.Status 0x1cca4c5c "org.rocksdb.Status@1cca4c5c"]
org.apache.kafka.streams.errors.ProcessorStateException: Error while executing flush from store main-store

Any ideas on how to debug this?
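For reference, we haven't set a custom RocksDB config, so RocksDB runs with its default of unlimited open files per store. One thing we're considering is capping that via the rocksdb.config.setter property; a sketch (the class name and the cap of 300 are just illustrative values, not anything we've tuned) would look like:

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Illustrative sketch: bound the number of SST files RocksDB may keep open
// per state store, instead of the default -1 (unlimited).
public class BoundedOpenFilesConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        // With a positive cap, RocksDB closes and reopens SST files through
        // its table cache rather than holding every file descriptor forever.
        options.setMaxOpenFiles(300);
    }
}
```

It would be registered in the Streams properties with `props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedOpenFilesConfig.class);`. Would this be the right lever here, or does the ever-growing fd count point at something else?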
