Failed to create monitoring event

After configuring X-Pack, Logstash is not processing the logs.

Configuration of Logstash:

input {
  tcp {
    port => 5044
    # charset is a codec option, not a tcp input option
    codec => plain { charset => "ISO-8859-1" }
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch { hosts => ["localhost:9200"] }
}

and the error which Logstash is giving:

[2017-02-13T17:32:17,649][ERROR][logstash.inputs.metrics ] Failed to create monitoring event {:message=>"For path: events", :error=>"LogStash::Instrument::MetricStore::MetricNotFound"}

Before X-Pack everything was working fine.

Did you add this property to the elasticsearch.yml file?

action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*

Thanks for your response, Krishna.

Yes, I've added it. My elasticsearch.yml file is :slight_smile: :

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: localhost
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*
But still the same error is coming.

Did you see any message at Logstash startup? If you got something like "[2017-02-...][ERROR][logstash.agent ] fetched an invalid config { ... }" and then you got this message, it could mean you ran into the same error as discussed here: Logstash Failed to create monitoring event

If so, you should fix your config, and try again. This message should disappear.

This is my Kibana monitoring dashboard,

and this is the Logstash service status message; it is running.

I think the Logstash configuration is fine, because I didn't make any changes to it,

and the error message it is showing is:

"[2017-02-14T15:00:10,264][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"logstash-2017.02.14", :_type=>"syslog", :_routing=>nil}, 2017-02-14T09:30:35.000Z shubhrant Feb 14 15:00:35 shubhrant kernel: [19773.952596] [UFW BLOCK] IN=enp0s25 OUT= MAC=01:00:5e:00:00:fc:6c:c2:17:ee:78:21:08:00 SRC= DST= LEN=50 TOS=0x00 PREC=0x00 TTL=1 ID=26941 PROTO=UDP SPT=54248 DPT=5355 LEN=30 ], :response=>{"index"=>{"_index"=>"logstash-2017.02.14", "_type"=>"syslog", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index", "resource.type"=>"index_expression", "resource.id"=>"logstash-2017.02.14", "index_uuid"=>"na", "index"=>"logstash-2017.02.14"}}}}"

I remember solving this by appending logstash* to the action.auto_create_index parameter. Maybe that could solve it.
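Assuming the same setting as shown earlier in the thread, the appended value would look like this (the trailing logstash* entry is the only change):

```yaml
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,logstash*
```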

Thanks, Krishna. But after appending logstash*, the Elasticsearch service is stopping; it is not starting.
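One common cause when Elasticsearch refuses to start right after a hand edit is a YAML syntax slip on that line (a missing space after the colon, or a stray character before logstash*). This is only a guess without seeing the Elasticsearch log, but quoting the whole value is a safe way to write it:

```yaml
# quoted form of the same setting (a sketch, not verified against this cluster)
action.auto_create_index: ".security,.monitoring*,.watches,.triggered_watches,.watcher-history*,logstash*"
```

The Elasticsearch log after a failed start should show the exact parse error if the line is malformed.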

Is there anybody who can help with this?

Please :frowning:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.