Kibana Discover shows "Error loading data" but data shows up in Dev Tools

I have Heartbeat logging to a single index. When I go to Discover and set any time frame (I usually use Last 60 minutes), I get "Error loading data" on the screen.

When I click See Full Error, this is what I get:

construct@[native code]
Wrapper@http://10.200.100.100:5601/32141/bundles/commons.bundle.js:3:466074
construct@[native code]
http://10.200.100.100:5601/32141/bundles/commons.bundle.js:3:464861
HttpFetchError@http://10.200.100.100:5601/32141/bundles/commons.bundle.js:3:467835
_callee3$@http://10.200.100.100:5601/32141/bundles/commons.bundle.js:3:1292433
l@http://10.200.100.100:5601/32141/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:288:969221
http://10.200.100.100:5601/32141/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:288:968971
asyncGeneratorStep@http://10.200.100.100:5601/32141/bundles/commons.bundle.js:3:1285920
_next@http://10.200.100.100:5601/32141/bundles/commons.bundle.js:3:1286249
promiseReactionJob@[native code]

I'm not sure how to troubleshoot this. If I go into Kibana Dev Tools and run a query against the same index, I can see data. For example, if I run this query:

GET /heartbeat-*/_search
{
  "query": {
    "term": {
      "monitor.status": {
        "value": "down"
      }
    }
  }
}

I get this in return (small sample):

{
  "took" : 14,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 7.1974354,
    "hits" : [
      {
        "_index" : "heartbeat-7.8.1",
        "_type" : "_doc",
        "_id" : "5el-cHUBJiba8jT5l2Hs",
        "_score" : 7.1974354,
        "_source" : {
          "@timestamp" : "2020-10-28T18:35:30.000Z",
          "tags" : [
            "heartbeat-icmp"
          ],
          "event" : {
            "dataset" : "uptime"
          },
          "agent" : {
            "ephemeral_id" : "0332bfc3-c1a3-485e-9f1c-3e38b6be5b56",
            "id" : "f1a97784-0c2e-401d-80ba-c46e686ae269",
            "name" : "navstlelastic02.navvis.local",
            "type" : "heartbeat",
            "version" : "7.8.1",
            "hostname" : "navstlelastic02.navvis.local"
          },
          "ecs" : {
            "version" : "1.5.0"
          },
          "observer" : {
            "hostname" : "navstlelastic02.navvis.local",
            "ip" : [
              "10.200.100.102",
              "fe80::e095:3e84:ec95:b22"
            ],
            "mac" : [
              "52:82:aa:68:12:20"
            ]
          },
          "error" : {
            "type" : "io",
            "message" : "ping timeout"
          },
          "summary" : {
            "down" : 1,
            "up" : 0
          },
          "monitor" : {
            "type" : "icmp",
            "timespan" : {
              "gte" : "2020-10-28T18:35:30.000Z",
              "lt" : "2020-10-28T18:35:46.000Z"
            },
            "check_group" : "580747e5-194c-11eb-81d6-5282aa681220",
            "ip" : "10.200.100.100",
            "status" : "down",
            "duration" : {
              "us" : 16000284
            },
            "id" : "auto-icmp-0X4A28B0E7BFE51856-4ad1ecda21c53bb1",
            "name" : ""
          },
          "url" : {
            "full" : "icmp://10.200.100.100",
            "scheme" : "icmp",
            "domain" : "10.200.100.100"
          }
        }
      }
    ]
  }
}

Hi,

I'm afraid the error message itself is not very descriptive. Do you have access to the Kibana log? By default, Kibana writes its log to stdout, but you can change that in kibana.yml so that it's written to a file:
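For example, on Kibana 7.x this is the `logging.dest` setting (the path below is just an illustration):

```yaml
# kibana.yml — send Kibana's log to a file instead of stdout
logging.dest: /var/log/kibana/kibana.log
```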

(Please note Kibana will need write access to the path you specify and it will require a server restart)

Investigating error logs around the time of the exception is the best way to go about this.

Is the error present only on this index pattern or does the error occur in other index patterns as well?

So I changed my Kibana config to write everything out to a kibana.log file. Here is what I'm seeing:

{"type":"log","@timestamp":"2020-11-05T15:44:57Z","tags":["warning","plugins","usageCollection","collector-set"],"pid":750,"message":"{ Error: [illegal_argument_exception] Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [monitor.id] in order to load field data by uninverting the inverted index. Note that this can use significant memory.\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)\n at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)\n at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)\n at IncomingMessage.wrapper (/usr/share/kibana/node_modules/elasticsearch/node_modules/lodash/lodash.js:4929:19)\n at IncomingMessage.emit (events.js:203:15)\n at endReadableNT (_stream_readable.js:1145:12)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n status: 400,\n displayName: 'BadRequest',\n message:\n '[illegal_argument_exception] Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [monitor.id] in order to load field data by uninverting the inverted index. 
Note that this can use significant memory.',\n path: '/heartbeat-7*/_search',\n query: {},\n body:\n { error:\n { root_cause: [Array],\n type: 'search_phase_execution_exception',\n reason: 'all shards failed',\n phase: 'query',\n grouped: true,\n failed_shards: [Array],\n caused_by: [Object] },\n status: 400 },\n statusCode: 400,\n response:\n '{\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [monitor.id] in order to load field data by uninverting the inverted index. Note that this can use significant memory.\"}],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[{\"shard\":0,\"index\":\"heartbeat-7.8.1\",\"node\":\"ffEAPUQ6RAaehXIBsYde4A\",\"reason\":{\"type\":\"illegal_argument_exception\",\"reason\":\"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [monitor.id] in order to load field data by uninverting the inverted index. Note that this can use significant memory.\"}}],\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [monitor.id] in order to load field data by uninverting the inverted index. 
Note that this can use significant memory.\",\"caused_by\":{\"type\":\"illegal_argument_exception\",\"reason\":\"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [monitor.id] in order to load field data by uninverting the inverted index. Note that this can use significant memory.\"}}},\"status\":400}',\n toString: [Function],\n toJSON: [Function] }"}

Now the strange thing is, this had been working but then stopped.

I'm kind of at a loss for the fix. I tried dropping the heartbeat index altogether and restarted the Heartbeat service, thinking something was corrupt. The heartbeat index doesn't have a whole lot going on at the moment, so I have some flexibility. I'm also not sure whether this is a Kibana issue or an Elasticsearch issue. I don't have any scripted fields in the Kibana index patterns for this data.

Correction! I did have a scripted field for the heartbeat index. I removed that scripted field and it started working again and removed the error. Here is a copy of the scripted field:
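The scripted field itself isn't reproduced in the text, but for illustration (hypothetical — the actual script may have differed), any Painless scripted field that reads a text field such as monitor.id will force fielddata loading and trigger exactly the illegal_argument_exception from the log above:

```
// Hypothetical Painless scripted field: accessing a text field's doc
// values forces fielddata, which is disabled by default on text fields
doc['monitor.id'].value
```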

The error here originates from Elasticsearch; Kibana just logs it. As the error message says, you have two options: change the field type to keyword, or set fielddata to true. I would normally suggest turning this into a keyword, but I'm not very familiar with Heartbeat - is this mapping something you have control over?
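A rough sketch of the fielddata option in Dev Tools (index name assumed from the log; note that enabling fielddata can use significant heap, as the error message warns):

```
PUT /heartbeat-7.8.1/_mapping
{
  "properties": {
    "monitor": {
      "properties": {
        "id": {
          "type": "text",
          "fielddata": true
        }
      }
    }
  }
}
```

The keyword option can't be applied in place to an existing text field; it has to go into the index template and then take effect on a new (or reindexed) index.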

The root cause here is that Heartbeat's index template was somehow not applied, so the default dynamic mapping took effect. The typical cause is deleting Heartbeat indices while Heartbeat is still running. The best fix is to stop all Heartbeat instances, delete the affected indices, then restart Heartbeat; it will reinstall the index template on startup.
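For reference, the sequence looks roughly like this (assuming a systemd install and the default index pattern; adjust names to your setup):

```
# 1. On every Heartbeat host, stop the service
sudo systemctl stop heartbeat-elastic

# 2. In Kibana Dev Tools, delete the affected indices
DELETE /heartbeat-7*

# 3. Restart Heartbeat — it reinstalls its index template on startup
sudo systemctl start heartbeat-elastic
```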

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.