Kibana not showing recent Elasticsearch data

Hello,

I just upgraded my ELK stack, but now I am unable to see all of my data in Kibana. I see data from a couple of hours ago but nothing from the last 15 or 30 minutes; it's like it just stopped. After the upgrade I ran into some Elasticsearch parsing exceptions, but I think I have those fixed because the errors went away and a new Elasticsearch index was created. Using the Elastic HQ plugin I can see the Elasticsearch index is increasing in size and in doc count, so I am pretty sure the data is getting to Elasticsearch. It's just not displaying correctly in Kibana. I tried removing the index pattern in Kibana and adding it back, but that didn't seem to work. I even did a refresh, and the index fields repopulated after the refresh/add. I am not sure what else to do. Any ideas or suggestions? Thanks in advance for the help!

Environment
syslog-->logstash-->redis-->logstash-->elasticsearch

  • elasticsearch-2.2.0-1
  • logstash-2.2.2-1
  • redis-2.8.19-2.el7.x86_64

What versions of Kibana and ES are you running?

Does the total count on the Discover tab (top right corner) match the count you get when hitting Elasticsearch directly? If not, try opening the developer tools in your browser and look at the requests Kibana is sending to Elasticsearch. On the Discover tab you should see a couple of _msearch requests. Are they querying the indexes you'd expect?
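
If it helps, a quick way to get the count straight from Elasticsearch is something like this (just a sketch, assuming the default logstash-* index names and Elasticsearch on localhost:9200):

curl 'localhost:9200/logstash-*/_count?pretty' -d '{
  "query": { "range": { "@timestamp": { "gte": "now-30m" } } }
}'

That counts documents from the last 30 minutes across all logstash-* indices, which you can compare against what Discover shows for the same time range.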

Thanks for the reply Bargs.

Sorry about that. Meant to include the Kibana version.

Kibana 4.4.1
ES 2.2.0-1

If I am following your question correctly, the Kibana count and the Elasticsearch count are different. Kibana shows 0.

Here's what I get when I query the ES index (only copied the first part.)

{
  "took" : 15,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 2619460,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "logstash-2016.03.11",
      "_type" : "cisco-asa",
      "_id" : "AVNmb2fDzJwVbTGfD3xE",
      "_score" : 1.0,
      "_source" : {

I'm not really familiar with using the dev tools, but I think this is what you're asking about:

{"index":[".kibana-devnull"],"ignore_unavailable":true}
{"size":500,"sort":[{"@timestamp":{"order":"desc","unmapped_type":"boolean"}}],"query":{"filtered":{"query":{"query_string":{"analyze_wildcard":true,"query":""}},"filter":{"bool":{"must":[{"range":{"@timestamp":{"gte":1457721534039,"lte":1457735934040,"format":"epoch_millis"}}}],"must_not":[]}}}},"highlight":{"pre_tags":["@kibana-highlighted-field@"],"post_tags":["@/kibana-highlighted-field@"],"fields":{"":{}},"require_field_match":false,"fragment_size":2147483647},"aggs":{"2":{"date_histogram":{"field":"@timestamp","interval":"5m","time_zone":"America/Chicago","min_doc_count":0,"extended_bounds":{"min":1457721534039,"max":1457735934039}}}},"fields":["*","_source"],"script_fields":{},"fielddata_fields":["@timestamp"]}

Two requests above the _msearch is this one:
{"docs":[{"_index":".kibana","_type":"index-pattern","_id":"logstash-*"}]}

Any suggestions based on this?

What versions did you upgrade from?

That shouldn't be the case. What index pattern is Kibana showing as selected in the top left hand corner of the side bar?

@warkolm I think I was on the following versions

  • kibana-4.0.3
  • elasticsearch 1.7
  • logstash 1.5

@Bargs Kibana is showing "logstash-*"

Something strange to add to this: I checked this morning and I see data in Kibana from 18:17-19:09 last night, but it stops after that.

I noticed your timezone is set to America/Chicago. What timezone are you sending to Elasticsearch for your @timestamp date data? Elasticsearch will assume UTC if you don't provide a timezone, so this could be a source of trouble.

When you load the Discover tab you should also see a request in your devtools for a URL with _field_stats in the name. This sends Elasticsearch the min and max datetime you've set in the time picker, and Elasticsearch responds with a list of indices that contain data for that time frame. You might want to check that request and response and make sure it includes the indices you expect.
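
If you'd rather poke at it outside the browser, you can hit the field stats API directly. Roughly like this (just a sketch, reusing the logstash-* pattern and the epoch millis values from your _msearch above):

curl 'localhost:9200/logstash-*/_field_stats?level=indices&pretty' -d '{
  "fields": ["@timestamp"],
  "index_constraints": {
    "@timestamp": {
      "max_value": { "gte": 1457721534039, "format": "epoch_millis" },
      "min_value": { "lte": 1457735934040, "format": "epoch_millis" }
    }
  }
}'

The indices object in the response should list every index that has @timestamp values inside that window.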

If the correct indices are included in the _field_stats response, the next step I would take is to look at the _msearch request for the specific index you think the missing data should be in. It'll be the one where the request payload starts with {"index":["your-index-name"],"ignore_unavailable":true}. You'll see a date range filter in this request as well (in the form of millis since the epoch). Check that the data you expect to see would pass this filter, then try manually querying Elasticsearch with the same date range filter and see what the results are.
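
For that manual query, something along these lines should work (a sketch, using your logstash-2016.03.11 index and the epoch millis values from the _msearch you pasted):

curl 'localhost:9200/logstash-2016.03.11/_search?pretty' -d '{
  "size": 1,
  "query": {
    "range": {
      "@timestamp": {
        "gte": 1457721534039,
        "lte": 1457735934040,
        "format": "epoch_millis"
      }
    }
  }
}'

If hits.total comes back as 0 there too, the documents really do fall outside that window and the problem is upstream of Kibana.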

@Bargs I am pretty sure I am sending the America/Chicago timezone to Elasticsearch. How would I confirm that? Would that be in the output section of the Logstash config?

The min and max datetime in the _field_stats are correct (or at least match the filter I am setting in Kibana). I don't know how to confirm that the indices are there. How would I go about that? I see this in the Response tab (in the devtools):

_shards: Object
  total: 85
  successful: 85
  failed: 0
indices: Object (this has an arrow you can expand, but nothing is listed under it)

I'm not really sure how to query Elasticsearch with the same date range. I was able to query it with this and it pulled up some results:
localhost:9200/logstash-2016.03.11/_search?q=@timestamp:*&pretty=true

One thing I noticed was the "Z" at the end of the timestamp. Is that normal? Here's what Elasticsearch is showing:
"@timestamp" : "2016-03-11T15:57:27.000Z"

Thanks again for the help.

The empty indices object in your _field_stats response definitely indicates that no data matches the date/time range you've selected in Kibana. That means this is almost definitely a date/time issue.

The Z at the end of your @timestamp value indicates that the time is in UTC, which is the timezone elasticsearch automatically stores all dates in.

I'd take a look at your raw data and compare it to what's in elasticsearch. My guess is that you're sending dates to Elasticsearch that are in Chicago time, but don't actually contain timezone information so Elasticsearch assumes they're in UTC already. That would make it look like your events are lagging behind, just like you're seeing.

If you need some help with that comparison, feel free to post an example of a raw log line you've ingested and its matching document in Elasticsearch, and we should be able to track the problem down.
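
For reference, if the raw logs do turn out to carry local times with no offset, the usual fix is to tell the Logstash date filter what timezone they're in, something along these lines (just a sketch; the source field name and the pattern are placeholders that would need to match whatever your cisco-asa grok filter produces):

filter {
  date {
    # "syslog_timestamp" and this pattern are examples only
    match => [ "syslog_timestamp", "MMM dd HH:mm:ss" ]
    timezone => "America/Chicago"
    target => "@timestamp"
  }
}

With the timezone set, Logstash converts the parsed time to UTC before it reaches Elasticsearch, so Kibana's America/Chicago display lines back up with the actual event times.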

Sorry for the delay in my response; I've been doing a lot of research lately. It appears the logs are being graphed, but they're a day behind. After your last comment I really started looking at the timestamps in the Logstash logs and noticed they were a day behind. I can also confirm this by selecting yesterday in the time range option in Kibana and watching the logs grow as I refresh the page. The good news is that it's still processing the logs; it's just a day behind.

Now I just need to figure out what's causing the slowness. Is it Redis or Logstash? I have two Redis servers and two Logstash servers. The Redis servers are not load balanced, but I have one Cisco ASA dumping to one Redis server and another ASA dumping to the other. Both Logstash servers have both Redis servers as inputs in their config. I increased the pipeline worker count (https://www.elastic.co/guide/en/logstash/current/pipeline.html) on the two Logstash servers, hoping that would help, but it hasn't caught up yet.
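
In case it's useful to anyone else, the worker count I'm talking about is the -w/--pipeline-workers option when Logstash starts, roughly like this (the count and config path are just examples; with the RPM packages I believe the equivalent goes into LS_OPTS in /etc/sysconfig/logstash):

bin/logstash -w 4 -f /etc/logstash/conf.d/

Each worker pulls batches of events and runs them through the filter and output stages, so extra workers mostly help when the filters (grok, etc.) are the bottleneck rather than Elasticsearch indexing.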

I am debating starting up a Kafka server as a comparison to Redis, but that will take some time. If you have any suggestions or comments feel free to share; I'd love to hear them. Otherwise I'll probably end this thread and start a different one in the Logstash topic, since Kibana seems to be working fine.

Thanks again for all the help, appreciate it.

Is data backed up in redis?

I am not 100% sure. It kind of looks that way but I don't know how to tell if it's backed up in Redis or if Logstash is not processing the Redis input fast enough. Both Redis servers have a large (2-7GB) dump.rdb file in the /var/lib/redis folder. I am assuming that's the data that's backed up. Any suggestions?

I think the redis command is LLEN to see how much is in a list. I'd start there, or check the redis docs to find out what your lists are like.
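
Something like this, as a rough sketch (the key name depends on your Logstash redis output config; I believe the plugin default is "logstash"):

redis-cli llen logstash
redis-cli keys '*'

The first command shows how many events are sitting in the list Logstash reads from; the second lists the keys if you're not sure of the name (fine here, though KEYS scans everything, so use it sparingly on a busy instance). If the LLEN number keeps climbing while Logstash is running, Logstash is the bottleneck; if it stays near zero, the backlog is upstream.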