Hi,
I am using the following setup:
ES, Logstash, Kibana - 6.2.2
I am using the Metricbeat Jolokia module to forward the JMX details, and I installed Jolokia as a web archive (WAR) on the application servers. It returns values when tested manually through the browser. Previously, Jolokia was pushing the values directly to ES, which gave errors. After a little R&D I read that using Logstash might help resolve the issue, so I reconfigured the setup with Logstash, and Jolokia is now pushing the data to Logstash. I installed the logstash-input-jmx plugin for Logstash, but it is not working. I am getting the following error in the logs (my jmx input configuration is sketched after the log below):
/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-jmx-3.0.4/lib/logstash/inputs/jmx.rb:326:in `run'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:516:in `inputworker'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:509:in `block in start_input'
[2018-03-28T11:28:42,958][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x66b439d7@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}
[2018-03-28T11:29:08,730][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-03-28T11:29:08,738][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-03-28T11:29:09,331][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.2"}
[2018-03-28T11:29:09,488][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-03-28T11:29:11,212][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[//10.10.114.149:9200], sniffing=>true, manage_template=>false, index=>"%{[@metadata][beat]}-%{+YYYY.MM.dd}", document_type=>"%{[@metadata][type]}", id=>"fec3d397b7a1d84dc83e836bfb1d650afc728e1ec5f8adaacd169af87ee0eec9", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_bee11109-358f-4e51-a8a1-a6782993ef47", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-03-28T11:29:11,316][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-03-28T11:29:11,949][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.10.114.149:9200/]}}
[2018-03-28T11:29:11,954][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.10.114.149:9200/, :path=>"/"}
[2018-03-28T11:29:12,071][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.10.114.149:9200/"}
[2018-03-28T11:29:12,123][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-03-28T11:29:12,123][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-03-28T11:29:12,136][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.10.114.149:9200"]}
[2018-03-28T11:29:12,193][INFO ][logstash.inputs.jmx ] Create queue dispatching JMX requests to threads
[2018-03-28T11:29:12,194][INFO ][logstash.inputs.jmx ] Compile regexp for group alias object replacement
[2018-03-28T11:29:12,200][INFO ][logstash.inputs.jmx ] Initialize 4 threads for JMX metrics collection
[2018-03-28T11:29:12,206][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7da6b600@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}
[2018-03-28T11:29:12,222][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-03-28T11:29:12,237][INFO ][logstash.inputs.jmx ] Loading configuration files in path {:path=>"/etc/logstash/jmxconf"}
[2018-03-28T11:29:12,238][ERROR][logstash.inputs.jmx ] No such file or directory - No such directory: /etc/logstash/jmxconf
[2018-03-28T11:29:12,239][ERROR][logstash.inputs.jmx ] org/jruby/RubyDir.java:146:in `initialize'
org/jruby/RubyDir.java:383:in `foreach'
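For reference, my understanding is that the logstash-input-jmx plugin scans the directory given in its path setting for JMX connection definition files, and the error above suggests that directory (/etc/logstash/jmxconf) does not exist on my Logstash server. The input section of my pipeline looks roughly like this (a sketch; the path and thread count come from the log above, the polling frequency is an assumed value):

```
input {
  jmx {
    # Directory the plugin scans for JMX connection definition files
    # (the same path that appears in the error above)
    path              => "/etc/logstash/jmxconf"
    nb_thread         => 4      # matches "Initialize 4 threads" in the log
    polling_frequency => 15     # assumed value
    type              => "jmx"
  }
}
```

and, as far as I can tell from the plugin documentation, each file in that directory should be a JSON connection definition along these lines (the host, port, alias and MBean names below are placeholders, not my real values):

```
{
  "host": "127.0.0.1",
  "port": 1099,
  "alias": "app01",
  "queries": [
    { "object_name": "java.lang:type=Memory",    "object_alias": "Memory" },
    { "object_name": "java.lang:type=Threading", "object_alias": "Threading" }
  ]
}
```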
Is there any JMX module for Logstash that I am missing?
Please advise.
Thanks in advance
Vishnu