Upgrade to Kibana 7.11 results in 'Unexpected token < in JSON at position 0' for eaggs

Following an upgrade of Elasticsearch and Kibana to 7.11, many dashboard panels now show an error state.

Inspect shows the response for the request to be:

{
  "message": "Unexpected token < in JSON at position 0",
  "code": "STREAM"
}

When I use Inspect to investigate and copy the request into Dev Tools, it runs without any error condition.

I'm at a loss for a way to analyse and fix this.

An upgrade to 7.11.0-1 did not resolve the issue either.

Hmm, are there certain types of visualizations causing this error?

Could you add the following to your kibana.yml:

elasticsearch.logQueries: true
logging.verbose: true

Then the server logs should show the raw queries sent to Elasticsearch, which should make it easier to see which request is failing and exactly what is causing the error.
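If Kibana is installed from the RPM packages (so it logs to journald via systemd), something along these lines should let you follow those queries live; adjust the unit name, or tail the file configured in logging.dest if you log to a file instead:

# Restart Kibana so the new logging settings take effect, then follow the output.
# Assumes a systemd/RPM install where Kibana writes its log to journald.
sudo systemctl restart kibana
sudo journalctl -u kibana.service -f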

Thanks for that :slight_smile:

What is notable is that the failing panels almost always have a request starting with aggs, and the error reports eaggs. Some graphical visualisations work fine, while others such as pie charts throw the error. I found that copying the request into a curl request does not reproduce the error.

Noteworthy: one anomaly I overlooked is that I added the key below but have not configured anything else. Since other panels work fine, I assume this is unrelated.

kibana-keystore list
xpack.encryptedSavedObjects.encryptionKey
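For reference, the key was added roughly like this (paths assume the RPM install; kibana-keystore prompts for the value, which can be any string of 32 or more characters):

# Add the saved-objects encryption key to the Kibana keystore, then confirm it is listed.
/usr/share/kibana/bin/kibana-keystore add xpack.encryptedSavedObjects.encryptionKey
/usr/share/kibana/bin/kibana-keystore list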

I do see a lot of these

  "type": "log",
  "@timestamp": "2021-02-18T23:06:16+00:00",
  "tags": [
    "debug",
    "http",
    "server",
    "Kibana",
    "cookie-session-storage"
  ],
  "pid": 29521,
  "message": "Error: Unauthorized"
}

I don't get why this could or should happen after nothing but 'yum upgrade'.

Also, this message showed up

{"type":"log","@timestamp":"2021-02-15T21:11:08+00:00","tags":["warning","plugins","monitoring","monitoring","kibana-monitoring"],"pid":2283,"message":"Error: Cluster client cannot be used after it has been closed.\n at LegacyClusterClient.assertIsNotClosed (/usr/share/kibana/src/core/server/elasticsearch/legacy/cluster_client.js:195:13)\n at LegacyClusterClient.callAsInternalUser (/usr/share/kibana/src/core/server/elasticsearch/legacy/cluster_client.js:115:12)\n at sendBulkPayload (/usr/share/kibana/x-pack/plugins/monitoring/server/kibana_monitoring/lib/send_bulk_payload.js:22:18)\n at BulkUploader._onPayload (/usr/share/kibana/x-pack/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:209:43)\n at BulkUploader._fetchAndUpload (/usr/share/kibana/x-pack/plugins/monitoring/server/kibana_monitoring/bulk_uploader.js:195:20)\n at runMicrotasks (<anonymous>)\n at processTicksAndRejections (internal/process/task_queues.js:93:5)"}

This one is probably most relevant to the error reported:

{"type":"log","@timestamp":"2021-02-18T23:03:20+00:00","tags":["error","elasticsearch","data"],"pid":29521,"message":"409\nPUT /.kibana_task_manager/_create/task%3AActions-actions_telemetry?refresh=false\n{"task":{"taskType":"actions_telemetry","state":"{}","params":"{}","attempts":0,"scheduledAt":"2021-02-18T23:03:19.512Z","startedAt":null,"retryAt":null,"runAt":"2021-02-18T23:03:19.512Z","status":"idle"},"type":"task","references":,"migrationVersion":{"task":"7.6.0"},"updated_at":"2021-02-18T23:03:19.512Z"} [version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [13])"}

When copying the request from Inspect and running it, the response contains zero hits. I admit it may be erroneous to point it at said index, but I share it anyway.

Response

{
  "took" : 7,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "table" : {
      "buckets" : [ ]
    }
  }
}

Request

curl -X GET "localhost:9200/.kibana_2/_search?pretty" -H 'Content-Type: application/json' -d'{
  "size": 0,
  "aggs": {
    "table": {
      "composite": {
        "size": 10000,
        "sources": [
          {
            "stk1": {
              "terms": {
                "field": "client.ip"
              }
            }
          },
          {
            "stk2": {
              "terms": {
                "field": "server.ip"
              }
            }
          }
        ]
      }
    }
  },
  "query": {
    "bool": {
      "must": [
        {
          "range": {
            "@timestamp": {
              "gte": "2021-02-13T03:38:41.728Z",
              "lte": "2021-02-19T08:38:41.728Z",
              "format": "strict_date_optional_time"
            }
          }
        }
      ],
      "filter": [
        {
          "match_all": {}
        },
        {
          "match_phrase": {
            "event.dataset": {
              "query": "connection"
            }
          }
        }
      ],
      "should": [],
      "must_not": []
    }
  }
}'

Saving the obvious for last, sorry for the long way round. The browser debugger shows:

XHR failed loading: POST "https://myhost/internal/bsearch ".
bfetch.plugin.js:1 POST https://myhost/internal/bsearch 502 (Bad Gateway)
fetch_streaming_fetchStreaming @ bfetch.plugin.js:1
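That 502 would explain the original message: a gateway error page is HTML, so the first character the Kibana client tries to parse as JSON is '<'. One way to narrow down whether the 502 comes from a reverse proxy in front of Kibana or from Kibana itself is to compare status codes (sketch only; Kibana is assumed to listen on its default port 5601, and the request body is just a placeholder):

# Status code through the front end/proxy vs. against Kibana directly.
# A 502 only on the first call points at the proxy or gateway, not at Kibana.
curl -sk -o /dev/null -w '%{http_code}\n' -X POST "https://myhost/internal/bsearch" \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"batch":[]}'
curl -s -o /dev/null -w '%{http_code}\n' -X POST "http://localhost:5601/internal/bsearch" \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"batch":[]}'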

Trying not to feel stupid for posting here, but could this be a consequence of a breaking change?

https://myhost/app/fleet#/fleet

shows that superuser privileges are required, and the breaking changes document notes that /app/IngestManager is now /app/fleet.

Solved:

Downgrade Kibana from 7.11 back to 7.10.
Remove the indices suggested in the log, even if just one instance is running (.kibana_3, .kibana_2), then stop/start Kibana (see the sketch after this list).
Kibana may still complain that .kibana_1 also needs to be removed; however, removing that index destroys all dashboards.
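
A sketch of the index cleanup, assuming (as in my logs) the stray copies are .kibana_2 and .kibana_3; list what actually exists first and leave the index that holds the saved objects alone:

# List the .kibana* indices, then delete only the stray migration copies.
# Do NOT delete the index the dashboards live in (.kibana_1 in my case).
curl -s "localhost:9200/_cat/indices/.kibana*?v"
curl -X DELETE "localhost:9200/.kibana_2,.kibana_3"
sudo systemctl restart kibana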

Reimporting the dashboards into Kibana resolves any remaining issue.
