I've been working on setting up ELK in AWS and have been running into an issue with constantly increasing memory consumption. I started out with 4GB and went up to 10GB, but the memory still gets exhausted, causing the Logstash server to crash. Restarting the server fixes the problem temporarily, but it just won't go away.
I have a basic input configuration using the CloudWatch Logs input plugin, with 9 log groups as inputs, and the Elasticsearch output plugin. We use Elasticsearch 1.5 hosted on AWS.
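For context, the pipeline looks roughly like this (a sketch with placeholder names; option names can differ slightly depending on the plugin and Logstash version):

```
input {
  # logstash-input-cloudwatch_logs; log group names here are placeholders
  cloudwatch_logs {
    log_group => ["/my/app/group-1", "/my/app/group-2"]   # ...9 groups in total
    region    => "us-east-1"
  }
}

output {
  # Endpoint is a placeholder for the AWS-hosted Elasticsearch domain;
  # older 1.x-era Logstash used `host =>` instead of `hosts =>`
  elasticsearch {
    hosts => ["https://my-es-domain.us-east-1.es.amazonaws.com:443"]
  }
}
```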
Hope you guys can give a hint on how to fix the problem. Thanks!
Yes, it is exceeding the heap settings that I configured on the Logstash server. I started with 2GB, then 8GB, and right now it's set to 10GB, which gets consumed in a span of just 30 minutes.
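For reference, here is roughly how I'm setting the heap (a sketch; the exact file depends on the Logstash version and how it was installed):

```
# Logstash 1.x/2.x: heap is set via an environment variable,
# typically in /etc/default/logstash or /etc/sysconfig/logstash
LS_HEAP_SIZE="10g"

# Logstash 5.0+: set it in config/jvm.options instead
-Xms10g
-Xmx10g
```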
That's not JVM heap though; that's just the OS using all the available memory it can to be as performant as possible.
Limiting that is outside the scope of these forums.
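If you want to verify which one it is, something like this should show it (`<pid>` being a placeholder for the Logstash Java process ID):

```
# "buff/cache" (or "buffers"/"cached" on older procps) is the OS page
# cache; the kernel reclaims it under pressure, so high values here are
# normal and not a leak.
free -m

# Actual JVM heap usage of the Logstash process (requires JDK tools);
# samples GC/heap utilization every 5 seconds.
jstat -gcutil <pid> 5000

# Resident set size of the Logstash process itself, in KB.
ps -o pid,rss,vsz,cmd -p <pid>
```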
But restarting the Logstash server sets the memory back to a normal value. I'm sure it has something to do with the Logstash server itself. Also, Logstash is the only service running on that particular server.