How to check and fix Logstash performance

Hi All,

I have several Windows servers running Filebeat to collect log files and ship events to Logstash.
Logstash and Elasticsearch run on the same Linux machine.

I have set up a scheduled monitoring script on the Windows servers that writes a line to a monitored file every 5 minutes. Filebeat reads that file and sends the data to Logstash, which inserts it into a specific index.
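The heartbeat script itself is not shown in the thread; a minimal sketch of the idea in Python follows (the real script on the Windows servers is presumably a batch or PowerShell task, and the file path and line format here are illustrative assumptions):

```python
# Sketch of a heartbeat writer: appends one timestamped line to a file
# that Filebeat is tailing. Path and line format are assumptions.
from datetime import datetime, timezone

def append_heartbeat(path):
    """Append one timestamped heartbeat line for Filebeat to pick up."""
    line = "HEARTBEAT %s\n" % datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(line)
    return line

if __name__ == "__main__":
    # Run this every 5 minutes from the Windows task scheduler.
    append_heartbeat("heartbeat.log")
```

Comparing the timestamp inside the line with the `@timestamp` Logstash assigns gives you the end-to-end delay you are observing.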

Sometimes that heartbeat data is only processed by Logstash 2 or 3 minutes after it was written to the monitored log file, which may be caused by an overloaded Logstash process.

Questions:

  1. How can I monitor the performance of Logstash and of indexing in Elasticsearch?
  2. What are best practices for cases where the number of inputs is huge?
    5 Servers of type I
    16 Servers of type II
    10 Servers of type III
    3 Servers of type IV

(*) Servers of type I have many prospectors (15) in the Filebeat config.
I would like best practices for such a config.
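For what it's worth, one common way to keep a prospector-heavy Filebeat config manageable is to group files that share a format under one prospector with a glob, and tag each prospector so Logstash can route on it. A hypothetical fragment (paths and `document_type` values are made up for illustration):

```yaml
filebeat:
  prospectors:
    # One prospector per log format, not per file.
    - paths:
        - C:\logs\app\*.log
      document_type: app_log
    - paths:
        - C:\logs\monitor\heartbeat.log
      document_type: heartbeat
```

Fewer prospectors with broader globs generally means less per-file bookkeeping on the Filebeat side and simpler conditionals on the Logstash side.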

Thanks,

Ori

  1. How can I monitor the performance of Logstash and of indexing in Elasticsearch?

Use e.g. Marvel to monitor ES. For Logstash you can perhaps use the metrics plugin and a time-series database like Graphite. Logstash's new metrics APIs could also be of help.
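As a sketch of the metrics-plugin approach mentioned above, the pipeline fragment below meters event throughput and emits the rates as their own events (the meter name `events` and the stdout output are illustrative; in practice you might send the rates to Graphite instead):

```
filter {
  metrics {
    meter   => "events"
    add_tag => "metric"
  }
}
output {
  if "metric" in [tags] {
    stdout {
      # Prints the 1-minute sliding-window rate every flush interval.
      codec => line { format => "rate_1m: %{[events][rate_1m]}" }
    }
  }
}
```

The metrics APIs mentioned above are the node stats endpoint that newer Logstash versions expose over HTTP (by default on port 9600), e.g. `GET http://localhost:9600/_node/stats`.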

  1. What are best practices for cases where the number of inputs is huge?

Define "huge".

@magnusbaecki Hi, could you please explain how the 'metrics' filter plugin indicates the output rate (msg/sec)? I have seen the doc that describes '"[thing][rate_1m]" - the per-second event rate in a 1-minute sliding window'. When I get rate_1m with a value of 3458.8, does that mean the output performance is 3458.8 msg/second, or is it 3458.8 msg/min? Thank you very much!

Does it mean the output performance is 3458.8 msg/second?

Yes, the average rate per second over the last minute.

Thank you very much!