Spring Boot Out of Memory Due to Kafka Common Metrics


We have a Spring Boot app that ran out of heap memory last week. Since the -XX:+HeapDumpOnOutOfMemoryError flag was enabled, we could see that in the 4GB heap, around 3.7GB was occupied by com.sun.jmx.mbeanserver.NamedObject instances.

All of these objects had key/value entries of the form client-id=producer-XXXX,type=producer-metrics.

We searched our logs and found that these producers had been closed a few weeks ago.
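For reference, we counted the leaked metrics MBeans from inside the JVM with a sketch like the following (assuming the producer metrics register under the default kafka.producer JMX domain; the object-name pattern is our guess based on the heap dump entries):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ProducerMBeanCount {
    public static void main(String[] args) throws Exception {
        // The platform MBeanServer holds strong references to every registered MBean
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Match every producer-metrics MBean, one per producer client-id
        ObjectName pattern =
            new ObjectName("kafka.producer:type=producer-metrics,client-id=*");
        Set<ObjectName> names = server.queryNames(pattern, null);

        System.out.println("producer-metrics MBeans still registered: " + names.size());
    }
}
```

In a healthy application this count should stay roughly equal to the number of live producers; in our heap dump it kept growing.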

Why are these objects not being garbage collected? Is this the default behavior of JMX beans? We haven't seen this in any other application. Can we disable JMX for Kafka producers?

We're using Spring Boot 2.1.5.RELEASE along with Spring Kafka 2.2.6.RELEASE.

Code:

ProducerRecord<String, String> record = new ProducerRecord<>("topic", message);
producer.send(record, (RecordMetadata metadata, Exception exception) -> {
    if (exception != null) {
        // Passing the exception as the last argument logs its stack trace
        logger.error("Exception in posting response to Kafka", exception);
    } else {
        logger.info("Request sent to Kafka: Offset: {}", metadata.offset());
    }
    // Close the producer once the send completes; a new producer is created per message
    producer.close();
});

public KafkaProducer<String, String> getProducer(String bootstrapServers) {
    Properties property = new Properties();
    property.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    property.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    property.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new KafkaProducer<>(property);
}