Heap dump file generated when the pod hits a heap OutOfMemoryError (OOM) is very small


We have a Kubernetes pod going out of memory very frequently, but the heap dump file generated during the OOM is only 200 MB, while Xmx and Xms are both set to 2400 MB. So it looks like GC is able to clean up objects and bring the heap down by the time the OOM is signalled. If that is the case, I am not able to understand why the JVM still goes ahead and kills the pod even though heap usage went down after GC (as per the heap dump).

Here are the jvm parameters:

-XX:HeapDumpPath=xxx -Xmx2400m -Xms2400m -XX:+HeapDumpOnOutOfMemoryError -XX:+UseG1GC -XX:CICompilerCount=4 -XX:MaxGCPauseMillis=1000 -XX:+DisableExplicitGC -XX:ParallelGCThreads=4 -XX:OnOutOfMemoryError=kill -9 %p
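For reference, the flags and the container memory settings sit together roughly like this in the pod spec (the names, the dump path and the 3Gi figures below are illustrative, not our exact manifest):

```yaml
# Illustrative pod spec snippet (names, paths and the 3Gi figures are
# placeholders, not the exact manifest). The fixed 2400m heap has to leave
# headroom for metaspace, GC structures, thread stacks and other native
# memory below the container memory limit.
containers:
  - name: app                        # placeholder container name
    image: my-service:latest         # placeholder image
    env:
      - name: JAVA_TOOL_OPTIONS      # read automatically by the JVM at startup
        value: >-
          -Xms2400m -Xmx2400m -XX:+UseG1GC
          -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps
    resources:
      requests:
        memory: "3Gi"
      limits:
        memory: "3Gi"                # heap + non-heap must fit under this
```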

I expected the pod not to be killed, since GC is apparently able to clean up the heap objects.


There are 2 answers below.

Answer 1:

Just to double-check - what does your kubectl describe pod say?

Is the exit reason OOMKilled (k8s killed the pod for using more than its memory limit) or something else? For me, the source of Java memory issues in k8s is usually a misconfiguration of JVM flags and k8s resource limits, which results in OOMKilled. Usually the problem was somewhere other than too much heap being used.
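For illustration, if Kubernetes killed the container for exceeding its memory limit, the pod status (e.g. via kubectl get pod <name> -o yaml) would show something roughly like this; the exact fields vary by Kubernetes version and the container name here is a placeholder:

```yaml
# Rough sketch of the relevant pod status when the container was killed by
# the kernel for exceeding the k8s memory limit (values are illustrative):
status:
  containerStatuses:
    - name: app
      restartCount: 7              # climbs with every kill/restart
      lastState:
        terminated:
          reason: OOMKilled        # the memory-limit kill, not a JVM heap OOM
          exitCode: 137            # 128 + SIGKILL(9)
```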

If you're actually hitting an OutOfMemoryError in the JVM heap and it's your kill option that takes effect, then I'm sorry for the spam.

Answer 2:

I'm guessing you also configured resource limits (memory) on the pod, right?

OOM usually happens when the JVM (actually any process in the container) allocates more memory than is allowed.

Older but related thread https://stackoverflow.com/a/53278161/4459920

  • if you can, fiddle just with MaxRAMPercentage and the resource limits (a minimal sketch follows below)
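A minimal sketch of that approach, assuming you drop the fixed -Xms/-Xmx and let the JVM size the heap from the container limit instead; the 75% and 3Gi numbers are just example values, not a recommendation:

```yaml
# Sketch: size the heap as a fraction of the container memory limit instead
# of a fixed -Xmx/-Xms, so heap plus non-heap memory stays under the k8s limit.
# MaxRAMPercentage=75.0 and the 3Gi limit are example numbers.
containers:
  - name: app
    env:
      - name: JAVA_TOOL_OPTIONS
        value: >-
          -XX:MaxRAMPercentage=75.0
          -XX:+HeapDumpOnOutOfMemoryError
    resources:
      limits:
        memory: "3Gi"              # the JVM derives its max heap from this
```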