Hi,
My cluster just crashed and the error was showing full GC Allocation failure. What can be the reason??
You haven't given us much to work with here.
What Elasticsearch version are you on? Which Java version? What other info can you give us?
Currently I am working on Elasticsearch 1.7, with Lucene 4.10.4. My hosts have 64 GB of RAM, 30 GB of which is allocated to the heap.
Why does this usually happen?
Would it be possible for you to provide more detail? An allocation failure is an expected occurrence in a memory-managed language: it means the heap is exhausted, which causes the memory manager to initiate a garbage collection cycle to free up space for the allocation request. So this is normal behavior, and it should not be the reported cause of a crash. If, however, the allocation failure triggers a garbage collection cycle and the memory manager still cannot free enough heap for the requested allocation, then the JVM will fail with an OutOfMemoryError: Java heap space. So it would help to understand where you saw this message (it is normal to see it in the garbage collection logs) and what message you actually saw at the time of the crash.
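For reference, a harmless allocation-failure entry in the GC log looks something like the following. This is an illustrative sketch of the HotSpot parallel-collector log format; the generation names, sizes, and timing are made-up example values, and the exact shape depends on your JVM version and collector:

```
[GC (Allocation Failure) [PSYoungGen: 65536K->10720K(76288K)] 65536K->15432K(251392K), 0.0123456 secs]
```

A genuine heap-exhaustion crash, by contrast, would show a line like "java.lang.OutOfMemoryError: Java heap space" in the Elasticsearch logs, so the two are easy to tell apart.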