What are the possible reasons behind java.lang.OutOfMemoryError: Java heap space in Elasticsearch?


I have seen lots of java.lang.OutOfMemoryError: Java heap space errors in Elasticsearch, but I couldn't find any help page that describes the possible reasons behind them. I am getting errors such as:

    [2015-04-09 13:56:47,527][DEBUG][action.index] [Emil Blonsky] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
    Caused by: java.lang.OutOfMemoryError: Java heap space

2 Answers

BEST ANSWER

Possible reasons (some of them):

  1. putting too much data into the heap, especially because of fielddata (used mostly for sorting and aggregations); the diagnostic sketch after this list shows one way to check this
  2. a configuration mistake, where you thought you set the heap size, but the setting was wrong or in the wrong place, so your node starts with the defaults (256MB min, 1GB max), which are not enough (a way to set and verify the heap explicitly is sketched at the end of this answer)
  3. putting too much data in at once because of very heavy indexing, for example a bulk size that's way too large
  4. querying with a very large "size" parameter (how large depends on how much memory you have, but a size of 2 billion will surely bring the cluster down)
  5. especially for master nodes (master-eligible nodes) that don't have enough memory, the cluster state is a likely culprit. The cluster state can get very large if there are a lot of aliases defined for each index.
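
A rough diagnostic sketch for several of the points above, assuming an Elasticsearch 1.x node listening on localhost:9200 (adjust host and port to your setup):

    # Points 1 and 2: configured heap size and current heap usage per node.
    curl -s 'localhost:9200/_nodes/stats/jvm?pretty&human'

    # Point 1: how much heap fielddata is holding, broken down per field.
    curl -s 'localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty&human'

    # Point 5: rough size of the serialized cluster state (many aliases inflate it).
    curl -s 'localhost:9200/_cluster/state?pretty' | wc -c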

An OOMed node needs to be restarted, btw.
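
For point 2, a minimal sketch of setting the heap explicitly, assuming Elasticsearch 1.x started from the standard shell script (on this version the heap is controlled by the ES_HEAP_SIZE environment variable, which sets both the min and max heap):

    # 4g here is just a placeholder value; the usual guidance is roughly
    # half the machine's RAM, and no more than about 30GB.
    ES_HEAP_SIZE=4g ./bin/elasticsearch

    # After startup, verify the setting actually took effect.
    curl -s 'localhost:9200/_nodes/stats/jvm?pretty&human' | grep heap_max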

ANOTHER ANSWER

I can't speak to your question directly, but there are a couple of approaches to this type of problem that I've found useful in the past:

  1. Use JVisualVM to inspect the contents of the heap. JVisualVM is a free tool that ships with the JDK. It lets you inspect details of running JVMs, including a full dump of the heap (a sketch for capturing such a dump follows this list).

  2. If you suspect the error is simply due to the JVM not having enough memory available, you can increase the heap manually via the JVM heap parameters (for Elasticsearch 1.x, the ES_HEAP_SIZE environment variable; see the heap sizing documentation).
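
For the first approach, a sketch of capturing a heap dump to open in JVisualVM, assuming a standard JDK install (jps and jmap ship with the JDK) and that you run them as the same user as the Elasticsearch process:

    # Find the Elasticsearch process ID.
    jps -l | grep -i elasticsearch

    # Dump the live heap to a file (replace <pid> with the ID found above),
    # then open heap.hprof in JVisualVM. Note this triggers a full GC.
    jmap -dump:live,format=b,file=heap.hprof <pid>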