My Elasticsearch fatal error in thread

This is the log:
[2018-07-19T10:08:09,453][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [elasticsearch[cvhieo8][management][T#1848]], exiting
java.lang.OutOfMemoryError: Java heap space

Heap Total: 276.86 MB
Heap Used: 259.31 MB

and it is not on my PC; I can't query port 9200
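For reference, once the node is reachable again on port 9200, the heap figures above can be checked directly with the cat nodes API (a sketch; adjust host and columns to your setup):

GET _cat/nodes?v&h=name,heap.current,heap.max,heap.percent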

The heap memory is too small.

It has been running for five months. Why is it happening now?

I think the data has grown, so the JVM heap is too small: only about 300 MB.
As far as I know, about 31 GB per node is better.

Thank you, but what should I do? I don't know much about the JVM.
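In case it helps as a starting point, the heap size is normally set in the jvm.options file shipped with Elasticsearch (the exact path depends on how it was installed, e.g. config/jvm.options or /etc/elasticsearch/jvm.options). A minimal sketch, assuming you want to raise the heap from ~300 MB to 4 GB and then restart the node:

# config/jvm.options (path depends on the installation)
# set minimum and maximum heap to the same value
-Xms4g
-Xmx4g

Keep -Xms and -Xmx equal, and leave the heap well below the machine's total RAM so the OS page cache still has room.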

Not really. It depends on the data you have and what you are doing.
I've seen lots of clusters with 4 GB of heap.

I'm running a demo locally with a 1 GB heap for 1 million documents.

So it depends...

Probably too many indices or too much data?

What is the output of:

GET _cat/nodes?v
GET _cat/indices?v
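If you don't have Kibana Dev Tools available, the same calls can be made with curl from the machine running Elasticsearch (assuming the default localhost:9200 binding):

curl -s 'http://localhost:9200/_cat/nodes?v'
curl -s 'http://localhost:9200/_cat/indices?v'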

Yes, the JVM heap depends on the data.
But in my environment, data comes into Elasticsearch at about 1,000,000/s, so I use a 31 GB JVM heap.

When I stop Logstash, Elasticsearch starts working again.
It tells me the cluster goes from red to yellow.

ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1 59 92 12 0.12 0.45 0.61 mdi * cvhieo8
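To see why the cluster goes red and then yellow, the cluster health and cat shards APIs show the overall status and which shards are unassigned (same console style as above; a generic sketch, not specific to this cluster):

GET _cluster/health?pretty
GET _cat/shards?v&h=index,shard,prirep,state,unassigned.reason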

Apparently you opened another thread about this at "My kibana has nothing but i think is elasticsearch problem".

Please keep the discussion in one single place instead.

That's probably perfect for your use case, but you cannot recommend it as general advice. That could mislead other users, IMO.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.