I have an Elasticsearch 6.6.2 cluster composed of 3 master nodes, 4 data nodes, and 2 coordinating nodes.
Two Kibana instances connect to the 2 coordinating nodes.
The coordinating nodes have 16 GB of RAM and a 7 GB heap (min heap = max heap).
On the coordinating nodes only, I get out-of-memory errors and the OS kills the Elasticsearch JVM process.
I've enabled bootstrap memory lock and followed the documentation.
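For reference, a minimal sketch of the relevant settings, assuming the default RPM/DEB layout under /etc/elasticsearch (adjust paths for your install):

# /etc/elasticsearch/jvm.options
-Xms7g
-Xmx7g

# /etc/elasticsearch/elasticsearch.yml
bootstrap.memory_lock: true

# systemd override (via "systemctl edit elasticsearch") so mlockall is permitted
[Service]
LimitMEMLOCK=infinity

You can verify the lock took effect with GET _nodes?filter_path=**.mlockall; it should report true on every node.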
First: what kind of queries are you sending to those nodes? Are you querying a lot of shards in a single query, or paginating deeply?
Second: How exactly is the Elasticsearch process being killed? Can you share log file output, and is the kernel OOM killer doing this or is there an exception in the logs?
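If the kernel OOM killer is responsible, it normally shows up in the kernel log rather than in the Elasticsearch log. A quick check (exact commands vary by distro) would be something like:

dmesg -T | grep -i -E 'out of memory|killed process'
# or, on systems with journald:
journalctl -k | grep -i 'killed process'

A line such as "Out of memory: Kill process ... (java)" means the kernel killed the JVM from the outside (the process exits with SIGKILL, i.e. status=9), whereas a Java-level OutOfMemoryError would appear in the Elasticsearch logs instead.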
Yes, the queries hit several shards; Kibana usually queries different indices.
Most of the work goes through Kibana.
From systemd:
Jan 21 15:08:40 elk.novalocal systemd[1]: elasticsearch.service: main process exited, code=killed, status=9/KILL
Jan 21 15:08:40 elk.novalocal systemd[1]: Unit elasticsearch.service entered failed state.
Jan 21 15:08:40 elk.novalocal systemd[1]: elasticsearch.service failed.