Logstash again is failing to create monitoring event


(Sergey Ivanov) #1

Hi,
I have Elastic 5.4.0 (Elasticsearch, Logstash, and Kibana), all with X-Pack, and I am trying to get the last piece working: getting some events out of Logstash.
Starting with --log.level debug, I see the following in the Logstash log:

[2017-05-09T13:43:31,826][DEBUG][logstash.pipeline ] Pushing flush onto pipeline
[2017-05-09T13:43:32,092][DEBUG][logstash.inputs.metrics ] Metrics input: received a new snapshot {:created_at=>2017-05-09 13:43:32 -0400, :snapshot=>#<LogStash::Instrument::Snapshot:0xed3eaa7 @metric_store=#<LogStash::Instrument::MetricStore:0x6941017 @store=#<Concurrent::Map:0x6934186d @default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x111ec328>, @fast_lookup=#<Concurrent::Map:0x2bcfd3a1 @default_proc=nil>>, @created_at=2017-05-09 13:43:32 -0400>}
[2017-05-09T13:43:32,094][ERROR][logstash.inputs.metrics ] Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound", :backtrace=>[
"/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:225:in `get_recursively'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:224:in `get_recursively'",
"/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:235:in `get_recursively'",
"org/jruby/RubyArray.java:1613:in `each'",
"/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:224:in `get_recursively'",
"/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:95:in `get'",
"org/jruby/ext/thread/Mutex.java:149:in `synchronize'",
"/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:94:in `get'",
"/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:108:in `get_shallow'",
"/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:157:in `extract_metrics'",
"org/jruby/RubyArray.java:1613:in `each'",
"org/jruby/RubyEnumerable.java:852:in `inject'",
"/usr/share/logstash/logstash-core/lib/logstash/instrument/metric_store.rb:133:in `extract_metrics'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.4.0-java/lib/monitoring/inputs/metrics.rb:191:in `format_global_event_count'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.4.0-java/lib/monitoring/inputs/metrics.rb:80:in `build_event'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.4.0-java/lib/monitoring/inputs/metrics.rb:60:in `update'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/x-pack-5.4.0-java/lib/monitoring/inputs/metrics.rb:35:in `configure_snapshot_poller'",
"org/jruby/RubyProc.java:281:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-1.0.0-java/lib/concurrent/executor/safe_task_executor.rb:24:in `execute'",
"com/concurrent_ruby/ext/SynchronizationLibrary.java:174:in `synchronize'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-1.0.0-java/lib/concurrent/executor/safe_task_executor.rb:19:in `execute'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-1.0.0-java/lib/concurrent/timer_task.rb:307:in `execute_task'",
"org/jruby/RubyProc.java:281:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-1.0.0-java/lib/concurrent/executor/safe_task_executor.rb:24:in `execute'",
"com/concurrent_ruby/ext/SynchronizationLibrary.java:174:in `synchronize'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-1.0.0-java/lib/concurrent/executor/safe_task_executor.rb:19:in `execute'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-1.0.0-java/lib/concurrent/ivar.rb:170:in `safe_execute'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-1.0.0-java/lib/concurrent/scheduled_task.rb:285:in `process_task'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-1.0.0-java/lib/concurrent/executor/timer_set.rb:157:in `process_tasks'",
"org/jruby/RubyProc.java:281:in `call'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/concurrent-ruby-1.0.0-java/lib/concurrent/executor/java_executor_service.rb:94:in `run'",
"Concurrent$$JavaExecutorService$$Job_754842726.gen:13:in `run'"]}
[2017-05-09T13:43:36,825][DEBUG][logstash.pipeline ] Pushing flush onto pipeline

configs:

cat /etc/logstash/logstash.yml


path.data: "/var/lib/logstash"
path.config: "/etc/logstash/conf.d"
path.logs: "/var/log/logstash"
xpack:
  monitoring:
    elasticsearch:
      username: logstash_system
      password: SecretPassword
pipeline:
  batch:
    size: 25
    delay: 5

cat /etc/logstash/conf.d/basic_ls_config

input {
  udp {
    port => 2055
    codec => netflow
  }
}
output {
  elasticsearch {
    hosts => 127.0.0.1
    user => logstash_internal
    password => AnotherSecretPassword
  }
}

I can authenticate as both Logstash users, internal and system:

curl -u logstash_system:SecretPassword 'http://localhost:9200/_xpack/security/_authenticate?pretty=true'

{
  "username" : "logstash_system",
  "roles" : [
    "logstash_system"
  ],
  "full_name" : null,
  "email" : null,
  "metadata" : {
    "_reserved" : true
  },
  "enabled" : true
}

curl -u logstash_internal:AnotherSecretPassword localhost:9200/_xpack/security/_authenticate?pretty=true

{
  "username" : "logstash_internal",
  "roles" : [
    "logstash_writer"
  ],
  "full_name" : null,
  "email" : null,
  "metadata" : { },
  "enabled" : true
}

But I cannot GET localhost:9200/_xpack/monitoring:

curl -u elastic 'http://localhost:9200/_xpack/monitoring/?pretty=true'

Enter host password for user 'elastic':
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "No endpoint or operation is available at [monitoring]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "No endpoint or operation is available at [monitoring]"
  },
  "status" : 400
}

Please suggest how I can debug and fix this problem.

Regards,
Sergey


(Sergey Ivanov) #2

I have changed the output to 'stdout { codec => rubydebug }' and am getting the same errors about input metrics. What is wrong with Logstash here? Is it netflow? I will now try some other input.


(Jordan Sissel) #3

This looks like a bug in logstash x-pack.


(Andrew Cholakian) #4

@seriv, to build on what Jordan is saying: this is a bug in X-Pack's monitoring code. It shouldn't affect your own pipeline's code.

Are there any other errors before or after this one in the log?


(Sergey Ivanov) #5

Sorry, I was wrong: I missed the difference between the command-line arguments --path.settings and --path.config to Logstash. I have now tried all 4 combinations:

  1. input { stdin {} } output {stdout {codec => rubydebug}}
  2. input { udp { port => 2055, codec => netflow } output {stdout {codec => rubydebug}}
  3. input { stdin {} output { elasticsearch { hosts => 127.0.0.1, user => logstash_internal ,password => Secret }
  4. input { udp { port => 2055, codec => netflow } output { elasticsearch { hosts => 127.0.0.1, user => logstash_internal ,password => Secret }
The first two worked fine, while 3 and 4 did not. So codec netflow and input udp are cleared; the problem is in my configuration for the connection to Elasticsearch. Strangely, the easiest way to see the problem is to try curling the URL 'http://127.0.0.1:9200/_xpack/monitoring/?pretty=true'; even with superuser privileges I got an error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "No endpoint or operation is available at [monitoring]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "No endpoint or operation is available at [monitoring]"
  },
  "status" : 400
}

(Jordan Sissel) #6

Is x-pack monitoring enabled on your Elasticsearch cluster?
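As an aside, there is no `_xpack/monitoring` REST endpoint, which is why the curl above returns "No endpoint or operation is available at [monitoring]". If I recall the 5.x API correctly, the X-Pack info endpoint is the place to check whether the monitoring feature is available and enabled (this is a suggestion, not something from your logs):

```
curl -u elastic 'http://localhost:9200/_xpack?pretty'
```

The response should contain a "features" section with a "monitoring" entry showing "available" and "enabled" flags.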


(Andrew Cholakian) #7

No, it appears that there's something about that exact config that kills xpack.

Running bin/logstash -e 'input { udp { port => 2055, codec => netflow } output { elasticsearch { hosts => 127.0.0.1, user => logstash_internal ,password => Secret } ' with xpack will repro.


(Andrew Cholakian) #8

The problem is that there are two errors. @seriv's last config is broken. The first log line is:

[2017-05-09T19:18:15,431][ERROR][logstash.agent ] Cannot create pipeline {:reason=>"Expected one of #, {, ., } at line 1, column 27 (byte 27) after input { udp { port => 2055"}

However, the metrics pipeline still starts, despite the main pipeline not starting, and then logs

Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}

This is just confusing UX.

I propose that Logstash should exhibit the following behaviors to fix this.

  1. Logstash should just die if the config given is invalid and config reloading is not enabled.
  2. The metrics pipeline should not be dependent on other pipelines existing or not.

@jordansissel WDYT?


(Sergey Ivanov) #9

Thanks!
The problem is fixed by enclosing 127.0.0.1 in single quotes!
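For anyone else hitting this, the working output block with the quoted host would look like this (a sketch of the fix described above, using the credentials from the earlier config):

```
output {
  elasticsearch {
    hosts => '127.0.0.1'
    user => logstash_internal
    password => AnotherSecretPassword
  }
}
```

With the bare 127.0.0.1, the Logstash config parser stops at the dots, which produced the "Expected one of #, {, ., }" error quoted earlier in the thread.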


(Sergey Ivanov) #10

Although there was another small issue. The logstash_internal user belongs to the logstash_writer role, as per the examples in the documentation, and thus has the right to create indices with the logstash-* prefix, but in the logs:
[WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.05.10", :_type=>"logs", :_routing=>nil}, 2017-05-10T01:44:52.491Z 128.8.127.153 %{message}], :response=>{"index"=>{"_index"=>"logstash-2017.05.10", "_type"=>"logs", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index and [action.auto_create_index] ([.security,.monitoring*,.watches,.triggered_watches,.watcher-history*]) doesn't match", "index_uuid"=>"na", "index"=>"logstash-2017.05.10"}}}
Apparently, a line in elasticsearch.yml that explicitly permits auto-creation of certain indices, like:
'action.auto_create_index' => '.security,.monitoring*,.watches,.triggered_watches,.watcher-history*',
actually prohibits any other index from being auto-created. Strange, isn't it?
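That behavior is by design: once action.auto_create_index is set to a pattern list, it acts as a whitelist, and anything not matching a pattern is rejected. A sketch of a fix, assuming the default logstash-* index naming, is to extend the list in elasticsearch.yml rather than replace it:

```
# elasticsearch.yml: keep the X-Pack system indices and also
# allow Logstash's daily indices to be auto-created
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,logstash-*
```

Alternatively, creating the index (or an index template) ahead of time avoids relying on auto-creation at all.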

Sergey.


(Ravi Gude) #11

I am having the same issue. Is there a solution for this?

[2017-05-18T11:29:49,809][ERROR][logstash.inputs.metrics ] Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}

path.data: /var/lib/logstash
xpack.monitoring.elasticsearch.url: "url:9200"
xpack.monitoring.elasticsearch.username: "elastic"
##tried default logstash_writer
xpack.monitoring.elasticsearch.password: "changeme"

example conf file

input {
  beats {
    port => 5000
  }

  heartbeat {
    interval => 30
    type => 'heartbeat'
    enable_metric => false
  }
}

output {
  elasticsearch {
    hosts => ["url:9200"]
    index => "logstash-health-%{+YYYY.MM.dd}"
    user => logstash_internal
    password => whatever
  }
}


(Crafty Technologies, Inc) #12

I'm in a similar situation. I'm using the config above for testing purposes and I'm getting the same result.

[ERROR][logstash.inputs.metrics ] Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}

Does anyone have ELK 5.4 working with X-Pack? It seems everyone is having the same issue.


#13

The same for me:

[2017-05-26T09:26:41,001][ERROR][logstash.inputs.metrics ] Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}


(krishna_gaddipati) #14

+1 Having the same issue. Was anyone successful in fixing this?


(system) #15

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.