Kibana no longer finds @timestamp in logstash-*

I have a dashboard that states "Could not locate that index-pattern-field (id: @timestamp)" for my logstash-* index pattern. I have tried the following troubleshooting steps:

  1. I had a problematic index that was creating massive numbers of fields (a faulty kv filter), so I -XDELETEd all indices from that data source
  2. I cleared the fielddata cache on logstash-* (roughly the commands I used for this and for step 4 are sketched after this list)
  3. Checked field_capabilities for logstash-* (the @timestamp entry looks identical to that of other indices)
  4. Checked the mappings of all my indices under logstash-* for the field @timestamp (using curl on "http://localhost:5601/elasticsearch/logstash-*/_mapping/field/*?_=1488831330559&ignore_unavailable=false&allow_no_indices=false&include_defaults=true") (all of them showed _timestamp/@timestamp as a date field)
  5. Verified via the Firefox network inspector that my /elasticsearch/logstash-*/_mapping/field/*?_=1488836970871&ignore_unavailable=false&allow_no_indices=false&include_defaults=true request (a large 9.4 MB response) and my field_capabilities request are coming back AND that they contain a @timestamp field
  6. Deleted the index pattern logstash-* (and now I can't recreate it because @timestamp can no longer be found)
  7. Restarted Kibana, Elasticsearch on the server I'm searching against, and nginx (the reverse proxy used for authentication)
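
For reference, roughly the commands behind steps 2 and 4 (a sketch from memory, hitting Elasticsearch directly on localhost:9200 rather than going through the Kibana proxy; adjust host/port to your setup):

# step 2: drop the fielddata cache for everything matching logstash-*
curl -XPOST "localhost:9200/logstash-*/_cache/clear?fielddata=true&pretty"

# step 4: confirm @timestamp is still mapped as a date in every index behind logstash-*
curl "localhost:9200/logstash-*/_mapping/field/@timestamp?include_defaults=true&pretty"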

At this point I believe the problem is that across all my indices (roughly 17k) there are so many fields that @timestamp gets drowned out. Can someone enlighten me as to what I should do to fix this? I'm sure I have plenty of superfluous fields that I could eliminate. I have a very hierarchical logstash index pattern model based on the type and subtype of each data source, e.g.:

logstash-*
logstash-a-*
logstash-a-z-*
logstash-a-y-*
logstash-a-x-*
logstash-b-s-*
logstash-b-t-*
logstash-b-x-*
etc...

and I would like to be able to either run extremely fine-grained queries or search against the whole set.
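
For example (hypothetical queries, just to illustrate the two extremes), the hierarchy lets me aim the same search at a single subtype or at everything at once:

# count errors for one subtype...
curl "localhost:9200/logstash-a-z-*/_count?q=message:error&pretty"
# ...or across every data source
curl "localhost:9200/logstash-*/_count?q=message:error&pretty"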

What could have caused my @timestamp field to no longer be found in logstash-* when it still appears to be a valid date field across all my indices?

I am running Kibana 5.1.1 with Logstash 5.1.1 (I think) and Elasticsearch 5.1.1.

I am not sure why your @timestamp field cannot be found, but I do have some questions about your indexing strategy. Why do you have 17k indices? How many shards does that correspond to? What is the average shard size?

Christian,

Thanks for your reply. I was wrong about the index count: I have 3,442 indices. This ELK stack is serving as our enterprise log search resource, and I am throwing about 20-30 different logging sources at it. I have run the following commands, with the following results:

curl localhost:9200/_cat/indices 2>/dev/null | wc -l
3442

curl localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "elk-stack",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 10,
  "number_of_data_nodes" : 10,
  "active_primary_shards" : 17206,
  "active_shards" : 34412,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

How would I determine the average size of shards?
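
I'm guessing something like the _cat APIs would show it (a guess on my part; column names taken from the 5.x _cat documentation):

# store size per index (divide pri.store.size by pri for an average primary shard size)
curl "localhost:9200/_cat/indices?v&h=index,pri,rep,docs.count,store.size,pri.store.size"

# or list every shard with its size directly
curl "localhost:9200/_cat/shards?v&h=index,shard,prirep,store"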

That is way too many shards for a cluster that size. I would recommend consolidating your indices and/or reducing the shard count per index to dramatically reduce the total number of shards. It looks like you have the default 5 shards per index, so for smaller indices it may make sense to reduce this to 1. You may also be able to use the shrink index API to reduce the number of primary shards per existing index to 1.
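
Something along these lines (a sketch only, with made-up index and node names; check the 5.x shrink index documentation for the full prerequisites): first relocate a copy of every shard onto one node and block writes, then shrink into a new single-shard index.

# 1. move one copy of every shard to a single node and make the source index read-only
#    (index name "logstash-a-z-2017.01.01" and node name "shrink-node-1" are examples)
curl -XPUT "localhost:9200/logstash-a-z-2017.01.01/_settings" -d '{
  "settings": {
    "index.routing.allocation.require._name": "shrink-node-1",
    "index.blocks.write": true
  }
}'

# 2. shrink the 5-shard index into a new index with a single primary shard
curl -XPOST "localhost:9200/logstash-a-z-2017.01.01/_shrink/logstash-a-z-2017.01.01-shrunk" -d '{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}'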

Christian,

I don't disagree with you about the need to shrink the number of shards per index. That being said, I don't see how this will affect Kibana. Like I said earlier, I used the network inspector in Firefox and also verified with curl... I'm getting back the @timestamp field from both of these queries in Kibana:

/elasticsearch/logstash-*/_mapping/field/*?_=1488836970871&ignore_unavailable=false&allow_no_indices=false&include_defaults=true
/field_capabilities

But Kibana is not showing any date/time fields in the pulldown menu for creating the index pattern.

This may be a clue to what's going on: a while ago I had to switch from Firefox to Chrome to create index patterns because of an error that appeared when I tried to do it in Firefox (this doesn't happen in Chrome).

Christian,

An update: I think you were on the right track about the problem in Kibana being related to shard utilization. When I deleted very old indices and my shard count decreased to ~7k, my problems disappeared overnight.

I'm now investigating an efficient way to do a three-tier rollover: the first 7 days of logs across all indices stay on a 5-shard model, logs up to 30 days old move to a 2-shard model, and anything older moves to 1 shard. Whether or not I have X-Pack licensing, are there any good configuration examples of shard management out there?
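
In the meantime, one piece that already seems straightforward (a sketch with a made-up template name, assuming the pattern logstash-b-* covers my smaller sources) is overriding the default 5 shards for new indices from the smaller data sources via an index template, along the lines of what you suggested earlier:

# template with a higher order than the stock logstash template so its settings win
curl -XPUT "localhost:9200/_template/logstash-small-sources" -d '{
  "template": "logstash-b-*",
  "order": 1,
  "settings": {
    "index.number_of_shards": 1
  }
}'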
