That is the log:
[2018-07-19T10:08:09,453][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [] fatal error in thread [elasticsearch[cvhieo8][management][T#1848]], exiting
java.lang.OutOfMemoryError: Java heap space
Heap Total: 276.86 MB
Heap Used: 259.31 MB
And it is not on my PC, so I can't query it on port 9200.
It had been running for five months; why is this happening now?
zqc0512 (andy_zhou) - July 19, 2018, 3:12am - #4
I think the data has grown, so the JVM heap is too small: it is only about 300 MB.
As far as I know, about 31 GB per node is better.
Thank you, but what should I do? I don't know about the JVM.
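For reference, a general sketch (not from this thread): on Elasticsearch 5.x and later the heap size is set in config/jvm.options, and the node must be restarted afterwards. The 4g value below is only an illustration; the usual guidance is roughly half the machine's RAM, and no more than about 31 GB:

# config/jvm.options
# Set minimum and maximum heap to the same value to avoid resize pauses.
-Xms4g
-Xmx4g

The same can be done for a single run via the ES_JAVA_OPTS environment variable, e.g. ES_JAVA_OPTS="-Xms4g -Xmx4g" ./bin/elasticsearch.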
dadoonet (David Pilato) - July 19, 2018, 7:07am - #6
Not really. It depends on the data you have and what you are doing.
I've seen lots of clusters with 4 GB of heap.
I'm running a local demo with 1 GB of heap for 1 million documents.
So it depends...
Probably too many indices or too much data? What is the output of:
GET _cat/nodes?v
GET _cat/indices?v
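(If Kibana's Dev Tools console isn't available, the same endpoints can be queried with curl, assuming the default host and port:

curl -s 'http://localhost:9200/_cat/nodes?v'
curl -s 'http://localhost:9200/_cat/indices?v'
)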
zqc0512 (andy_zhou) - July 19, 2018, 8:34am - #7
Yes, the JVM heap depends on the data.
But in my environment about 1,000,000 events/s go into Elasticsearch, so I use a 31 GB JVM heap.
When I stop Logstash, Elasticsearch starts working again and tells me the cluster went from red to yellow (the health check sketched below shows this transition).
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1 59 92 12 0.12 0.45 0.61 mdi * cvhieo8
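As a side note, the red-to-yellow transition mentioned above can be watched with the standard cluster health API (localhost:9200 assumed):

curl -s 'http://localhost:9200/_cluster/health?pretty'

The status field will be red (some primary shards unassigned), yellow (all primaries assigned but some replicas unassigned), or green (all shards assigned).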
dadoonet (David Pilato) - July 19, 2018, 9:18pm - #10
Apparently you opened another thread about this at "My kibana has nothing but i think is elasticsearch problem".
Please keep the discussion in one single place instead.
dadoonet (David Pilato) - July 19, 2018, 9:20pm - #11
That's probably perfect for your use case, but you cannot recommend it as general advice. That could mislead other users, IMO.
system (system) - Closed August 16, 2018, 9:22pm - #12
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.