I am running Logstash 7.9.1 with a 24 GB maximum heap size on a Linux machine with 30 GB of memory. I have configured a pipeline that runs a Python program via the Logstash exec plugin, and I see the error message below in the logs.
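For context, a minimal exec input configuration along these lines (the script path and interval here are hypothetical, since the original pipeline was not posted) would look like:

```
input {
  exec {
    # Hypothetical command and schedule; substitute your actual script
    command => "/usr/bin/python3 /opt/scripts/collect.py"
    interval => 60
  }
}
```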
If you use an exec plugin, the Logstash JVM has to fork. A fork creates a copy of the JVM's memory (excluding most shared memory segments), so the process momentarily needs roughly twice its committed memory: with a 24 GB heap that is about 48 GB, well over your 30 GB of RAM. On a 30 GB server you will struggle to run the JVM with more than about 14 GB, so I would suggest a heap size of 12 GB at most. If you still get ENOMEM at 12 GB, keep reducing the heap size.
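Assuming the heap is set in Logstash's standard jvm.options file (adjust the path for your install, e.g. config/jvm.options or /etc/logstash/jvm.options), reducing it would be a sketch like:

```
# jvm.options — lower initial and maximum heap from 24g to 12g
# so that fork (2 x heap) fits within the 30 GB of RAM
-Xms12g
-Xmx12g
```

Keeping -Xms and -Xmx equal is the usual recommendation for Logstash, since it avoids heap resizing at runtime.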