Hi there,
Just want to confirm: is there any JVM heap limit for Logstash? If the JVM limit for Elasticsearch is 30-32 GB, does it also apply to Logstash?
Thanks
Hi,
You can check out the Logstash docs here: JVM settings | Logstash Reference [8.11] | Elastic
- The recommended heap size for typical ingestion scenarios should be no less than 4GB and no more than 8GB.
Best regards
Wolfram
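For reference, the heap is set in Logstash's config/jvm.options file. A minimal sketch matching the 4 GB starting point from the docs (the values are illustrative, adjust to your own measurements) might look like:

```
## config/jvm.options (Logstash)
## Set initial and maximum heap to the same value so the heap
## does not resize at runtime.
-Xms4g
-Xmx4g
```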
OK, thank you, I'll check it.
But do you know the ideal value for the JVM heap? For example, in Elasticsearch the ideal value is about 30 GB at most, to make sure the node uses compressed OOPs; setting it higher than that can cause problems.
Does Logstash have the same constraint?
And if I follow this guideline, does it mean that on a Logstash server with 128 GB of memory I can configure the JVM heap to 64-96 GB?
Do not increase the heap size past the amount of physical memory. Some memory must be left to run the OS and other processes. As a general guideline for most installations, don’t exceed 50-75% of physical memory. The more memory you have, the higher percentage you can use.
Thanks
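As a sanity check on the arithmetic in the question above, here is a small sketch that mechanically applies the 50-75% guideline together with the ~32 GB compressed-oops cutoff (the 31 GB cap and the helper function are my own illustration, not Elastic guidance):

```python
# Hypothetical helper: apply the "50-75% of physical memory" guideline,
# then cap the result so the JVM can still use compressed oops.
COMPRESSED_OOPS_CUTOFF_GB = 31  # stay safely below the ~32 GB threshold

def heap_range_gb(physical_gb):
    """Return (low, high) heap-size suggestions in GB for a machine
    with `physical_gb` of RAM, capped at the compressed-oops cutoff."""
    low = physical_gb * 0.50
    high = physical_gb * 0.75
    return (min(low, COMPRESSED_OOPS_CUTOFF_GB),
            min(high, COMPRESSED_OOPS_CUTOFF_GB))

# A 128 GB machine: 50-75% would be 64-96 GB, but the compressed-oops
# cap pulls both bounds down to 31 GB.
print(heap_range_gb(128))  # (31, 31)
```

So even by this mechanical reading, 64-96 GB would not be a useful heap size; the compressed-oops cutoff alone caps it far lower.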
I don't think this is advisable as the documentation states:
- The recommended heap size for typical ingestion scenarios should be no less than 4GB and no more than 8GB.
So you should check your memory usage carefully and increase it in small steps if necessary. But this depends on your requirements, so I cannot really give recommendations here (we are currently using 4 GB in production).
Logstash is more CPU bound than memory bound, the memory is used for the queue in memory (per default), inflight events and some other things like translate filter dictionaries.
You should follow the recommendation in the documentation: start with 4 GB and increase if needed, up to 8 GB.
I doubt that more than 8 GB will make any difference, but if for some reason you start having issues at 8 GB, it would be better to scale out with more nodes than to scale up a single big node.