I have an ELK instance on Ubuntu 16.04 (2 GB of RAM, 30 GB of HDD). I can set up visualisations, dashboards and all.
BUT the Elasticsearch instance keeps dying after 20 minutes or so (around 6000 rows of input).
I've tried adding the memlock settings to /etc/security/limits.conf:
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
Still no luck.
Can anyone help me with where to start debugging? I've checked /var/log/elasticsearch/elasticsearch.log, but there isn't much about the "dying" part; I can only see that it started.
Hi, how much of that 2 GB of RAM is allocated in /etc/elasticsearch/jvm.options? You say the Elasticsearch instance keeps dying: what exactly are you experiencing on the operating system itself?
By "dying", do you mean the process is killed or unresponsive?
Does it only fail when you are feeding it new docs?
Are you using any unusual plugins? (e.g. I remember reading Zookeeper can call System.exit when unhappy).
I've checked /etc/elasticsearch/jvm.options; it shows:
-Xms1g
-Xmx1g
Is that enough?
The Elasticsearch process just stops running after a few minutes (15-20 minutes). The other processes (Kibana, Nginx, Logstash) are still running fine.
Since you're running Elasticsearch with a 1 GB heap on a machine with 2 GB of RAM, I suspect that your instance is being killed by the OS OOM killer. Check your kernel logs.
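On Ubuntu 16.04 that would look roughly like this (the exact log file and service name can vary, so treat it as a sketch):
sudo grep -i "killed process" /var/log/kern.log
sudo dmesg | grep -i oom
sudo journalctl -u elasticsearch --since "1 hour ago"
If the OOM killer is the culprit, you should see a "Killed process ... (java)" line around the time Elasticsearch stopped.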
The immediate problem is running Elasticsearch, Logstash, Kibana, and nginx on a machine with 2 GB of RAM. Even if you drop the heap by half you're likely to still have trouble, and then you're more likely to run into heap space issues in Elasticsearch. I think you need to either get some of those other processes off this host, or get more RAM.
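If you do want to try a smaller heap first, halving it would mean something like the following in /etc/elasticsearch/jvm.options (just a sketch; keep Xms and Xmx equal and restart the service afterwards with sudo systemctl restart elasticsearch):
-Xms512m
-Xmx512m
That said, moving Logstash or Kibana off this host, or adding RAM, is the more robust fix.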
Hi, this very much depends on what you expect to use this server for.
If this is a server purpose-built for testing 5.1.1, then the resources you have may well suffice. It all goes hand in hand with what you are looking to achieve here.
In fact, I was thinking of making this a production ELK server. How do you normally judge the server requirements for ELK? Based on the number of docs coming in, or something else?
Running all three on a single machine with only 4 GB of RAM might be too much, especially combined with an nginx server (it really depends on your use case though). Elasticsearch loves the filesystem cache, but if all the memory not dedicated to the Elasticsearch heap is going to other processes, there is not going to be any room left over for the filesystem cache.
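A quick way to see how much memory is actually left for the filesystem cache is free -m; the "available" and "buff/cache" columns show what the page cache still has to work with:
free -m
On a 2 GB box running a 1 GB Elasticsearch heap plus Logstash, Kibana and nginx, that figure tends to be very small.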
I've just tried reducing the Xms to 750 MB and fortunately (fingers crossed) the server has been running fine for a few days now. I am feeding it around 40k hits every 15 minutes or so.
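For reference, that change corresponds to the heap lines in /etc/elasticsearch/jvm.options; Elastic recommends keeping Xms and Xmx equal, so both are shown at 750m here as an assumption:
-Xms750m
-Xmx750m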