Maybe a new input_type (like http) in Filebeat where it polls for all or filtered JMX data, or a separate Beat?
Logstash itself is too heavy to install as an agent on the servers where the applications run, and the jmx input plugin in Logstash might not scale that well when central Logstash instances have to grab JMX data from hundreds of Java processes. IMHO, the data should be fetched locally and then fed into ES (via Logstash for filtering etc.)
There are no specific plans yet for JMX data. An interesting option here could be that, instead of pulling data over HTTP, the beat exposes an HTTP endpoint that data can be pushed to. That way the beat would not have to know all the details of how to fetch the data; it would be pushed by a tool. I'm not sure if something like that is supported by Jolokia. The other option could be to push the data to stdin, from which Filebeat is reading.
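For the stdin idea, a minimal sketch of what that could look like with Filebeat's stdin input (the config layout and the Jolokia URL/port are assumptions for illustration, not a tested setup):

```yaml
# filebeat-stdin.yml (sketch): read events from stdin instead of files
filebeat:
  prospectors:
    - input_type: stdin

output:
  elasticsearch:
    hosts: ["localhost:9200"]
```

A pushing tool could then be as simple as a loop that polls Jolokia and pipes the JSON into Filebeat, e.g. `while true; do curl -s http://localhost:8778/jolokia/read/java.lang:type=Memory; sleep 10; done | filebeat -c filebeat-stdin.yml -e` (assuming a Jolokia agent on port 8778).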
I was busy testing RC2 and the nightly builds recently, and can now come back to this question...
I was inspired by the nginx beat, which uses the same approach to pull the information. Maybe it's more of an architecture decision. IMHO, well-defined monitoring interfaces for pulling information, like HTTP or JMX, should be handled by a beat implementation itself. That way you avoid an additional daemon running on the system that feeds the data into your proposed generic beat with an HTTP input.
@owulff Thanks for getting back and sharing your thoughts. I agree we should have as few "processes" running on the host as possible. Having a beat for this would be quite nice. If this is a common need, perhaps someone from the community will pick it up.
I was also looking for a way to monitor our server farms with a beat. As there was no beat available, I created the new general-purpose Httpbeat. Httpbeat can call any HTTP server and send the result to the various beat outputs, e.g. Logstash or Elasticsearch.
I'm currently using it to call Jolokia and Apache Stats.
The beat is available at https://github.com/christiangalsterer/httpbeat.
The current release is 1.0.0-beta1 and normal GET/POST requests work. There are no ready-made packages available yet (I'm currently working on this), but it can easily be compiled with Go.
In case of problems, questions, or ideas, please feel free to open an issue on GitHub.