Kibana Discover panel not showing results for Logstash index with millions of records

EDITED: Overnight, the problem I describe in this post stopped happening, probably because the newest index created from the Filebeat data didn't have whatever problem the old one had. You can disregard this post unless you have a comment on what the cause might have been. Thanks!

I'm switching to a Kafka pipeline for logging events from Beats and ingesting them into my Elasticsearch 6.2.3 cluster. I have everything working, but for some reason the only documents shown in the Kibana Discover panel for the logstash-* index pattern are those sent by the older pipeline, which ships Filebeat data directly to Elasticsearch.

This is a ten-node cluster with three recently added dedicated master nodes, four hot nodes which are targeted for indexing by the Logstash configuration, and three warm nodes with big spinning disks for archival purposes.

Filebeat collects a variety of logs and sends them to a three-node Kafka cluster (Kafka 2.11-1.0.0), where they are buffered in a single topic named "filebeats". Six Logstash instances consume these messages and send them to Elasticsearch. I know the pipeline is working because I can issue a search query against that index in the Kibana dev console and I get hundreds of millions of documents, showing data that Filebeat gathered recently.
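For context, the Logstash side of this is roughly the following (the broker addresses, group id, and hostnames here are placeholders rather than my exact config):

input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092,kafka3:9092"
    topics            => ["filebeats"]
    group_id          => "logstash"
    codec             => "json"
  }
}
output {
  elasticsearch {
    hosts => ["hot-node-1:9200", "hot-node-2:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}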

Using the dev console, I replaced the logstash template with the current one exported from Filebeat, with settings added to require the hot nodes for new indices. There are no errors reported by any system in the pipeline. The Kibana server shows around 15.5K documents being indexed into Elasticsearch every second, and the indexing rates in the Kibana monitoring dashboards are consistent with the data being collected.
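To double-check the template, it can be pulled back out in the dev console (assuming the template is named "logstash" and the hot tier is tagged with a node attribute such as box_type; adjust for your own attribute name):

GET _template/logstash

# the hot-node requirement should appear under "settings", e.g.
# "index.routing.allocation.require.box_type": "hot"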

I've also captured the JSON documents using a file output on one of the Logstash instances. The documents are well formed and don't trigger any errors in my JSON linter.
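The capture was done with an output block along these lines (the path and codec here are illustrative, not my exact config):

output {
  file {
    path  => "/tmp/logstash-capture-%{+YYYY-MM-dd}.json"
    codec => "json_lines"
  }
}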

A manual search using this query:

GET logstash-2018.04.08/_search
{
  "query": { "match_all": {} }
}

returns:

{
  "took": 531,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 301909482,
    "max_score": 1,
    "hits": [
      {
        "_index": "logstash-2018.04.08",
        "_type": "doc",
        "_id": "Fq59pmIB0h-hgAKBGF_5",
        "_score": 1,
        "_source": {
          "source": "/var/log/httpd/access_log",
          "prospector": {
            "type": "log"
          },
          "@timestamp": "2018-04-08T17:02:35.596Z",
          "offset": 20903296,
          "tags": [ ... ],
          "message": """ - - [08/Apr/2018:11:29:34 +0000] "GET / HTTP/1.0" 200 561 "-" "-"""",
          "@version": "1",
          "beat": {
            "version": "6.2.2",
            "name": "HOSTNAME REDACTED",
            "hostname": "HOSTNAME REDACTED"
          }
        }
      },
      ...
    ]
  }
}

So, at this point there were 301909482 documents in that index for the day.

I've deleted and recreated the logstash-* index pattern in Kibana and still only see the old-format documents.
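One more sanity check worth running: Discover filters on the index pattern's time field, so @timestamp needs to be mapped consistently as a date across every index the pattern matches. The field capabilities API shows this at a glance:

GET logstash-*/_field_caps?fields=@timestamp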

I'm prepared to believe that I did something dumb to the logstash template that's preventing most of the documents from showing up, or perhaps I'm missing some key field or metadata that's getting altered in my Kafka pipeline, but I have no idea what it could be. Any advice would be appreciated.

@Bargs tagging you in case you know what's happening? I want to know too :slight_smile:


I notice the timestamp in the message field and the @timestamp field differ. That looks a bit suspicious to me, and could be why documents appear to be missing for a given time range and then suddenly show up. Just a guess. To dig deeper, you can always grab the request Discover is sending to Elasticsearch from the network tab of your browser's dev tools and then play around with it in Console.
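For example, Discover's request boils down to a range filter on the index pattern's time field, so something like this in Console should approximate what it sees (adjust the range to match your Discover time picker):

GET logstash-2018.04.08/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "now-24h",
        "lte": "now"
      }
    }
  }
}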

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.