Kibana complaining that "metricbeat-*" is not indexed

Hi - although I have manually created an index called metricbeat, when I try to bring up any visualizations associated with Metricbeat or run a Discover query on metricbeat-*, I receive the following error in Kibana (4.4.1):

[index_not_found_exception] no such index, with: {"index":"[metricbeat-*]"}

I have Metricbeat logging to Logstash, which in turn pipes the data over to Elasticsearch. Has anyone run into this issue, and do you have a resolution? I have also successfully uploaded the beats-dashboards-5.0.0/ Metricbeat JSON.

An index called metricbeat will not be matched by the pattern metricbeat-*.
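The pattern behaves like a shell glob: `metricbeat-*` requires a literal `metricbeat-` prefix, so a bare `metricbeat` index can never match it. A quick sketch of the matching rule (the `matches` helper is just for illustration):

```shell
# Kibana index patterns match like shell globs.
matches() {
  case "$1" in
    metricbeat-*) echo "yes" ;;
    *)            echo "no"  ;;
  esac
}
matches "metricbeat"            # an index named exactly "metricbeat" -> no
matches "metricbeat-2016.11.22" # a daily index created by the template -> yes
```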

I recommend following the Getting Started guide for Metricbeat. There is a section in the docs that describes how to use Metricbeat with Logstash and what the LS config should be. When using Metricbeat with LS you also need to manually install the Metricbeat index template (this is covered in the Getting Started guide).
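For Metricbeat 5.0 the template load is a one-liner along these lines (the localhost URL and the location of metricbeat.template.json are assumptions; adjust for your Elasticsearch host and install directory):

```
curl -XPUT 'http://localhost:9200/_template/metricbeat' -d@metricbeat.template.json
```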

Looks like it did create the index metricbeat-*. I can see it in Kibana, but when I try to view or build a visualization against it, it throws back the error I listed in the thread. If I execute http://:9200/_cat/indices?pretty it shows "metricbeat", but the doc count is zero. I was able to run the curl -XPUT command and load the template, and I successfully uploaded the Beats/Metricbeat dashboards, index patterns, searches, and visualizations via the import script. It's just not populating any information into Logstash that I can tell, and then, of course, the error I listed pops up saying the index is not found. It seems like up until that point I had pressed all the right buttons, but it's not populating the index with anything and/or Kibana just won't recognize it even though everything is listed correctly.

Please share your Metricbeat and Logstash configs. And can you post what's in the Metricbeat log output?

Thanks, Andrew. Here is a snippet of the Metricbeat output while running:
2016/11/22 18:24:31.011025 beat.go:264: INFO Home path: [/home/ag03655/metricbeat-5.0.0-linux-x86_64] Config path: [/home/ag03655/metricbeat-5.0.0-linux-x86_64] Data path: [/home/ag03655/metricbeat-5.0.0-linux-x86_64/data] Logs path: [/home/ag03655/metricbeat-5.0.0-linux-x86_64/logs]
2016/11/22 18:24:31.011086 beat.go:174: INFO Setup Beat: metricbeat; Version: 5.0.0
2016/11/22 18:24:31.011145 logp.go:219: INFO Metrics logging every 30s
2016/11/22 18:24:31.011178 logstash.go:90: INFO Max Retries set to: 3
2016/11/22 18:24:31.011274 outputs.go:106: INFO Activated logstash as output plugin.
2016/11/22 18:24:31.011364 publish.go:291: INFO Publisher name: ag03sdcla00801.dcsouth.tenn
2016/11/22 18:24:31.011636 async.go:63: INFO Flush Interval set to: 1s
2016/11/22 18:24:31.011650 async.go:64: INFO Max Bulk Size set to: 2048
2016/11/22 18:24:31.011735 metricbeat.go:25: INFO Register [ModuleFactory:[system], MetricSetFactory:[apache/status, haproxy/info, haproxy/stat, mongodb/status, mysql/status, nginx/stubstatus, postgresql/activity, postgresql/bgwriter, postgresql/database, redis/info, redis/keyspace, system/core, system/cpu, system/diskio, system/filesystem, system/fsstat, system/load, system/memory, system/network, system/process, zookeeper/mntr]]
2016/11/22 18:24:31.113507 beat.go:204: INFO metricbeat start running.
2016/11/22 18:25:01.011417 logp.go:230: INFO Non-zero metrics in the last 30s: fetches.system-memory.success=3 fetches.system-load.events=3 fetches.system-memory.events=3 fetches.system-process.events=821 fetches.system-cpu.events=3 fetches.system-filesystem.success=3 libbeat.publisher.published_events=965 fetches.system-filesystem.events=126 fetches.system-network.events=9 fetches.system-process.success=3 libbeat.logstash.publish.write_bytes=74459 libbeat.logstash.published_and_acked_events=965 fetches.system-load.success=3 libbeat.publisher.messages_in_worker_queues=965 fetches.system-cpu.success=3 libbeat.logstash.call_count.PublishEvents=3 libbeat.logstash.publish.read_bytes=384 fetches.system-network.success=3
2016/11/22 18:25:31.011411 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.publisher.published_events=966 fetches.system-process.success=3 libbeat.logstash.publish.write_bytes=71640 libbeat.logstash.publish.read_bytes=150 fetches.system-cpu.events=3 fetches.system-load.events=3 fetches.system-load.success=3 fetches.system-filesystem.events=126 libbeat.logstash.call_count.PublishEvents=3 fetches.system-filesystem.success=3 fetches.system-network.success=3 fetches.system-network.events=9 fetches.system-process.events=822 libbeat.logstash.published_and_acked_events=966 fetches.system-cpu.success=3 fetches.system-memory.events=3 fetches.system-memory.success=3 libbeat.publisher.messages_in_worker_queues=966

It looks like it's sending data, but I could be mistaken.
My Logstash config file is:

input {
  file {
    type => "syslog"
    path => ["/var/log/secure", "/var/log/messages"]
    exclude => ["*.gz"]
  }

  udp {
    port => 25826
    buffer_size => 1452
    codec => collectd { }
  }

  beats {
    port => 5044
  }

  file {
    type => "jboss"
    path => ["/var/log/jbossas/eportalhost/servers/*/server.log"]
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
  file {
    type => "jboss_host"
    path => ["/var/log/jbossas/eportalhost/host-controller.log"]
    start_position => "beginning"
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => "previous"
    }
  }
  file {
    type => "jboss_process"
    path => ["/var/log/jbossas/eportalhost/process-controller.log"]
    start_position => "beginning"
  }
  file {
    type => "logstash_error"
    path => ["/var/log/logstash/logstash.err"]
    start_position => "beginning"
  }
  file {
    type => "jon_agent"
    path => ["/var/log/jboss-on/agent/agent.log"]
    start_position => "beginning"
  }
}
output {
  stdout { }
  redis {
    host => [ ":6379", ":6379" ]
    shuffle_hosts => true
    data_type => "list"
    key => "logstash"
  }
}

So, as you can see, I may have left out the fact that Logstash is pushing to two Redis servers, and from the Redis servers there is a central standalone Logstash server pulling everything from Redis and pushing to Elasticsearch. That can be a little complicated to visualize, I know. This was in place when I came on board and I don't really understand the importance of Redis in the mix, but that is a different topic of discussion, I suppose.

It looks like the data is being ack'ed by that Logstash. What's the config look like on the Logstash that pulls from Redis? (if you surround the config with three backticks at the start and end it will format nicely).

Cool - you're going to love the simplicity of the centralized Logstash standalone server that pulls from the Redis servers:

input {
  redis {
        host => "<ServerIp>"
        type => "redis-input"
        port => "6379"
        data_type => "list"
        key => "logstash"
  }
   redis {
        host => "<ServerIp>"
        type => "redis-input"
        port => "6379"
        data_type => "list"
        key => "logstash"
  }
#  syslog {
#       type => syslog
#       port => 5514
#  }
}
output {
  stdout { }
  elasticsearch {
        hosts => ["<ServerIp>:9200"]
        user =>  "<username>"
        password =>  "<password>"
  }
}

So it is basically pulling all the info from Redis and finally piping it over to our Elasticsearch server.

I'm not sure I know what you mean by "(if you surround the config with three backticks at the start and end it will format nicely)", but it sounds promising.

scratch that

syslog {
type => syslog
port => 5514
}

That is actually commented out, so the formatting of this messaging system thinks I meant to make it bold because of the pound sign.

Based on the elasticsearch output config you posted, the events are not being routed to the proper index. The default index is "logstash-%{+YYYY.MM.dd}".
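In other words, the posted output is implicitly doing something like this (the index line shown here is the plugin's default, written out for illustration; it is not in the actual config):

```
output {
  elasticsearch {
    hosts    => ["<ServerIp>:9200"]
    user     => "<username>"
    password => "<password>"
    index    => "logstash-%{+YYYY.MM.dd}"  # default; Metricbeat events land here too
  }
}
```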

Can you print out one of the Metricbeat events coming from redis with:

output {
  stdout { 
    codec => rubydebug {
      metadata => true
    }
  }
}

Then I can advise you on a conditional output configuration.

Awesome. Would I add that entry into the centralized Logstash config, or where would be the best place to have that execute? I'm not 100% sure whether that would replace the current "output" entry in the shipper.conf file for Logstash, and/or what server it needs to execute on.

Just briefly modify the config on the Logstash instance that pulls from Redis. And post one of the Metricbeat events that's coming through Redis.

output {
  stdout { codec => rubydebug { metadata => true } }
  elasticsearch {
        hosts => ["<ServerIp>:9200"]
        user =>  "<username>"
        password =>  "<password>"
  }
}

I think this should work. It basically routes all Metricbeat events to the metricbeat-YYYY.MM.dd index based on the fact that the type is metricsets. I don't think the events in Redis will have @metadata so we can't use that to route on (that's why I was asking for an event to be output).

output {
  stdout { }

  if [type] == "metricsets" {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      manage_template => false
      index => "metricbeat-%{+YYYY.MM.dd}"
      document_type => "%{[type]}"
      user =>  "<username>"
      password =>  "<password>"
    }
  } else {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      user =>  "<username>"
      password =>  "<password>"
    }
  }
}

FYI: Metricbeat can write directly to Redis if you want to bypass that first round of Logstash servers. See Redis output.
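For reference, a minimal sketch of what that might look like in metricbeat.yml (the host placeholder is an assumption; check the Redis output docs for your Beats version):

```
output.redis:
  hosts: ["<RedisServerIp>:6379"]
  key: "logstash"     # should match the list key the downstream Logstash reads
  datatype: "list"
```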

OK, it's in there now and I restarted Logstash. How can I view the event that comes through? It did, however, give me a different error when I tried to view the metricbeat-* index:
Courier Fetch: [index_not_found_exception] no such index, with: {"index":"[metricbeat-*]"}

Also, I will try your suggested config change with the if condition and let you know.

I'll have to revisit this in the morning, Andrew; I locked out the Logstash system ID with the new config settings (I think). I may have already locked it out before and not known it from an earlier run. I'll touch base tomorrow. Thank you for your help so far on this. W

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.