I have a dashboard that shows "Could not locate that index-pattern-field (id: @timestamp)" for my logstash-* index pattern. I have tried the following troubleshooting (rough reconstructions of the commands I used follow the list):
- I had a problematic index that was creating a massive number of fields (a faulty kv filter), so I -XDELETEd all indices from that data source
- Cleared the fielddata cache on logstash-*
- Checked the field_capabilities for logstash-* (the @timestamp entry looks identical to that of my other indices)
- Checked the mappings of all my indices under logstash-* for the @timestamp field, using curl on `http://localhost:5601/elasticsearch/logstash-*/_mapping/field/*?_=1488831330559&ignore_unavailable=false&allow_no_indices=false&include_defaults=true` (all of them showed _timestamp/@timestamp as a date field)
- Verified via the Firefox network inspector that my `/elasticsearch/logstash-*/_mapping/field/*?_=1488836970871&ignore_unavailable=false&allow_no_indices=false&include_defaults=true` request (a large 9.4 MB response) and the field_capabilities queries are both coming back AND that they contain a @timestamp field
- Deleted the index pattern logstash-* (and now I can't recreate it, because @timestamp can no longer be found)
- Restarted Kibana and Elasticsearch on the server I'm searching against, as well as nginx (my reverse proxy for authentication)
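For reference, these are approximately the commands I ran. I'm reconstructing them from memory, the deleted index name is a placeholder, and I'm hitting Elasticsearch directly on 9200 here rather than going through the Kibana proxy:

```
# Delete every index belonging to the faulty data source (placeholder name)
curl -XDELETE 'http://localhost:9200/logstash-badsource-*'

# Clear the fielddata cache across everything under logstash-*
curl -XPOST 'http://localhost:9200/logstash-*/_cache/clear?fielddata=true'

# Spot-check the @timestamp mapping across all logstash-* indices
curl -XGET 'http://localhost:9200/logstash-*/_mapping/field/@timestamp?include_defaults=true'

# Look at what Kibana 5.x has stored for its index patterns (kept in the .kibana index)
curl -XGET 'http://localhost:9200/.kibana/index-pattern/_search?pretty'
```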
At this point I believe the problem is that, across all my indices (roughly 17k of them), there are so many fields that @timestamp gets drowned out. Can someone enlighten me as to what I should do to fix this? I'm sure I have plenty of superfluous fields I could eliminate. I have a very hierarchical Logstash index pattern model based on the type of data source and the subtype that source matches (see the sketch below), and I would like to be able either to run extremely fine-grained queries against a subset or to search across the whole set.
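To give a sense of the hierarchy, the patterns look something like this (illustrative names, not my real ones):

```
logstash-*                       # everything
logstash-network-*               # one data source type
logstash-network-firewall-*      # one subtype within that type
```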
What could have caused my @timestamp field to no longer be locatable in logstash-* when it still appears to be a valid date field across all my indices?
I am running Kibana 5.1.1 with Logstash 5.1.1 (I think) and Elasticsearch 5.1.1.