Logstash JMX sizing

I've been using the Logstash JMX input successfully for some time to monitor up to 10 JMX-enabled Java processes with a single Logstash process and nb_thread set to 4 (the default).

However, I've recently configured a Logstash process to monitor a total of 88 JMX-enabled Java processes.
I immediately ran into the following error:
"The time taken to retrieve metrics is more important than the retrieve_interval time set. you must adapt nb_thread, retrieve_interval to the number of jvm/metrics you want to retrieve."

I've increased nb_thread step by step up to 50, but the error remains.
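For what it's worth, the error can be reasoned about with simple arithmetic: each worker thread polls its share of JVMs sequentially, so the thread count has to cover all JVMs before the retrieve interval elapses. A rough sketch of that calculation (the interval and per-JVM poll time below are hypothetical figures for illustration, not measurements from this setup):

```python
import math

# Hypothetical figures for illustration only.
jvm_count = 88
retrieve_interval = 60      # seconds between polling cycles (assumed)
poll_time_per_jvm = 10      # seconds one thread spends polling one JVM (assumed)

# Each thread can poll this many JVMs within one interval.
jvms_per_thread = retrieve_interval // poll_time_per_jvm

# Minimum threads needed so every JVM is polled before the next cycle starts.
min_threads = math.ceil(jvm_count / jvms_per_thread)
print(min_threads)
```

With these assumed numbers, 15 threads would be the floor; if the real per-JVM poll time is higher (slow remote JMX connections, for instance), even 50 threads may not be enough, which would explain why raising nb_thread alone doesn't clear the error.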

I'm wondering:

  • Is 88 processes simply too many for one Logstash JMX pipeline (JMX input + some filters + ES output), and should I instead run several Logstash instances, each handling fewer processes?
  • Is this a Logstash limit, or could it be an ES limit? How could I detect whether ES is too slow picking up the data in this case?
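For context, a minimal sketch of the kind of pipeline described above (the path, host, and values are illustrative placeholders, not the original configuration):

```
input {
  jmx {
    # Directory containing one JSON file per JVM to poll (illustrative path)
    path => "/etc/logstash/jmx"
    polling_frequency => 60
    nb_thread => 4
    type => "jmx"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```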

Thanks in advance for any help or suggestions.


Hi, did you ever get an answer for this? I ran into the same problem today.


No, I never got any further feedback on this. In the meantime I abandoned my plan to monitor those Java processes with JMX, for organisational reasons.

I was running 2.4.1, where Logstash tracing was limited. I've just switched to 5.1.1, which now exposes some interesting metrics in Kibana.


Any luck on this? I'm getting the same issue on my end using 5.2.

I never went back to JMX monitoring for large volumes of Java processes, and my initial issues were never suitably resolved.

Thanks for the response. I started a new thread on this here: https://discuss.elastic.co/t/logstash-jmx-input-slow/81531 — hopefully I can find some new info as well.