Is there any way to specify the number of filter workers from the configuration file?
Right now I am using the "service logstash start" command to start Logstash. Can we somehow pass the number of worker threads to this command?
Thanks
Harsha
Is there any way to specify the number of filter workers from the configuration file?
No, sorry.
Right now I am using the "service logstash start" command to start Logstash. Can we somehow pass the number of worker threads to this command?
Change /etc/default/logstash or /etc/sysconfig/logstash or wherever Logstash's startup arguments are configured on your system.
Thanks for the quick response.
I didn't see any configuration specific to worker threads in /etc/default/logstash. Can you please tell me the line I need to add to specify the filter worker threads, or point me to a resource that has this information?
There's no separate variable for the number of workers, but you can use LS_OPTS to pass extra arguments to Logstash (-w or --filterworkers in this case).
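For example, a sketch of what the relevant line in /etc/default/logstash (or /etc/sysconfig/logstash on RPM-based systems) might look like; the worker count of 4 is an arbitrary illustration, not a recommendation:

```
# Extra command-line options passed to Logstash at startup.
# -w / --filterworkers sets the number of filter worker threads.
LS_OPTS="-w 4"
```

After editing the file, restart Logstash (e.g. "service logstash restart") for the change to take effect.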
Thanks, Magnus, for the immediate response.
What effect does changing the number of workers in Logstash have on its performance?
Increasing the number of pipeline workers is likely to increase the throughput at the cost of higher CPU utilization, but it depends on the specifics.
But suppose I decrease the number of workers; will it cause any issues such as memory leaks or an inability to parse multiline logs?
Your Logstash instance will most likely have less capacity (i.e. lower throughput) but there won't be any problems with memory leaks or parsing of multiline logs. Why would it?
The option is now called --pipeline-workers. I'm sure Magnus knows this, but this page keeps coming up at the top of search results for the question, so I thought I'd point it out for searchers like myself.
The default number is derived from the number of CPU cores in the host, which is usually too small if any networking (other than through localhost) is involved. If your Logstash installation feeds a remote destination -- be it Elasticsearch, or Graphite, or anything running on a separate machine -- you should start with the number of workers at twice the number of processors.
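In newer Logstash versions this can also be set in logstash.yml rather than on the command line. A minimal sketch, assuming an 8-core host and following the twice-the-cores starting point suggested above (the value 16 is illustrative, not a measured recommendation):

```
# logstash.yml
# Number of pipeline worker threads; defaults to the host's CPU core count.
pipeline.workers: 16
```

Setting it in logstash.yml keeps the configuration in one place instead of spreading it across init-script defaults files.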