Reg: Issues monitoring Logstash with X-Pack

I am trying to alert when Logstash fails on any pipeline or sends data to the dead letter queue.

Queries:

  1. How can I tell from X-Pack monitoring that a pipeline is not running, given that Logstash logs no detail when a pipeline's inputs/outputs are misconfigured?
  2. How can I tell that a pipeline is sending data to the dead letter queue? This data is not captured in the .monitoring indices. (See the DLQ check sketch after this list.)
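
If the dead letter queue is enabled, one way I can think of for question 2 is to read the per-pipeline section of the Logstash node stats API, which exposes the DLQ size, or to watch the DLQ directory on disk. A minimal sketch, assuming a pipeline id of "main" and the default data path (both are assumptions, adjust for your setup):

# per-pipeline node stats include a dead_letter_queue section when the DLQ is enabled
curl -s -XGET 'localhost:9600/_node/stats/pipelines/main?pretty'
# the DLQ segments are plain files on disk, so their combined size can also be checked directly
# (assumed default path.dead_letter_queue under path.data; adjust for your install)
du -sb /usr/share/logstash/data/dead_letter_queue/main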

Environment Background:

Logstash 6.1.1 + X-Pack
ES 6.1.1 + X-Pack
Kibana 6.1.1 + X-Pack
Logstash pipeline setup (only pipeline): Filebeat -- {Redis input -- grok -- date -- ES output}

  • the default .monitoring pipeline that Logstash starts internally to send monitoring data

Scenario and Observations

Scenario: force the ES output in the pipeline to fail by providing a wrong DNS name/IP, then start Logstash.

The only type of document that arrived in the .monitoring-logstash-6... index was "logstash_state", which did not include any pipeline or vertex status; no error or failure information is captured there.
The following error is observed in the Logstash log:
[2018-02-24T00:24:56,304][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<NoMethodError: undefined method `<' for nil:NilClass>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:213:in `get_event_type'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:165:in `event_action_params'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:39:in `event_action_tuple'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:34:in `block in multi_receive'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.0.2-java/lib/logstash/outputs/elasticsearch/common.rb:34:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:13:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:50:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:487:in `block in output_batch'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:486:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:438:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:393:in `block in start_workers'"]}

Scenario: force both the ES output and the Redis input to fail, i.e. make Redis unreachable by giving a wrong key.

The error referenced above stopped, and the following was observed in the Logstash log:
[2018-02-24T01:21:36,301][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Pipeline_id:xxxxxxxxxxxxxxxxxxx
Plugin: <LogStash::Inputs::Redis host=>"xxxxxxxxxxxxx", password=>, port=>6381, data_type=>"list", key=>"xxxxxxxxxxxx", threads=>8, id=>"f7c67d2e217cfb37fc070f3aa51df4d2abfe0ab1fa82d566b7cf1d2220d21b0d", enable_metric=>true, codec=><LogStash::Codecs::JSON id=>"json_9d171ba3-499f-42fe-a184-287e003767c0", enable_metric=>true, charset=>"UTF-8">, db=>0, timeout=>5, batch_count=>125>
Error: closed stream
Exception: IOError
Stack: org/jruby/RubyIO.java:1364:in `write_nonblock' ...... /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-redis-3.1.6/lib/logstash/inputs/redis.rb:189:in `list_batch_listener'
org/jruby/RubyMethod.java:119:in `call' /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-redis-3.1.6/lib/logstash/inputs/redis.rb:175:in `list_runner'
org/jruby/RubyMethod.java:115:in `call' /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-redis-3.1.6/lib/logstash/inputs/redis.rb:94:in `run'
/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:524:in `inputworker' /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:517:in `block in start_input'

Regardless, the issues are:

  • No information about the failure is captured in the monitoring indices.
  • The failures value is "0" in every logstash_stats document captured in the index, and that counter only covers pipeline reloads anyway.
  • The node stats API returns the error below, so it cannot be used to check what happened to the pipeline (see the workaround sketch after this list):
    curl -XGET 'localhost:9600/_node/stats/pipelines'
    {"status":500,"request_method":"GET","path_info":"/_node/stats/pipelines","query_string":"","http_version":"HTTP/1.1","http_accept":"/","error":"Unexpected Internal Error","class":"LogStash::Instrument::MetricStore::MetricNotFound","message":"For path: events. Map keys: [:pipelines, :reloads]","backtrace":["/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:225:in block in get_recursively'","org/jruby/RubyArray.java:1734:ineach'","/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:224:in get_recursively'","/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:235:inblock in get_recursively'","org/jruby/RubyArray.java:1734:in each'","/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:224:inget_recursively'","/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:95:in block in get'","org/jruby/ext/thread/Mutex.java:148:insynchronize'","/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:94:in get'","/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:108:inget_shallow'","/usr/share/logstash/logstash-core/lib/log

Sometimes the same /_node/stats/pipelines call fails as below, with Logstash automatically restarting and bringing up the internal .monitoring... pipeline to send monitoring data.
curl -XGET 'localhost:9600/_node/stats/pipelines'
curl: (7) Failed connect to localhost:9600; Connection refused
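
Since even the API port can go away while Logstash restarts, a minimal external liveness check could poll the API and alert when it stops responding. This is only a sketch; the notification command at the end is a placeholder for whatever alerting mechanism is available:

# poll the Logstash API; alert if it does not answer within 5 seconds
if ! curl -sf -m 5 'localhost:9600/_node/stats?pretty' > /dev/null; then
  echo "Logstash API on localhost:9600 is not responding" | mail -s "Logstash down" ops@example.com
fi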

Thanks for any guidance.

Rechecking as I didn't get any response. It would be helpful if someone could guide me on how to detect a Logstash pipeline failure. For now, I'm planning to monitor the target ES index for the last arrived data and alert on failure (sketch below), but that only works if the data in the index comes from a single Logstash instance.
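
A minimal sketch of that check: count the documents written to the target index in the last five minutes and alert when the count is zero. The index pattern, time field, and five-minute threshold are assumptions for illustration:

# count documents indexed in the last 5 minutes; 0 hits means the pipeline has stalled
curl -s -u elastic:changeme -XGET 'localhost:9200/filebeat-*/_count?pretty' -H 'Content-Type: application/json' -d '{
  "query": { "range": { "@timestamp": { "gte": "now-5m" } } }
}'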

I would suggest asking this question in the Logstash forum.
