Could not retrieve Elasticsearch deprecation issues

I've got a test cluster with a separate monitoring cluster, and a production cluster with a separate monitoring cluster. "Review deprecated settings and resolve issues" in the Kibana Upgrade Assistant works on three of the four clusters, but on the production cluster the message "Could not retrieve Elasticsearch deprecation issues." consistently appears.

[Screenshot: es_deprecation_fail]

I get zero results for that phrase on a Google search, which is disconcerting.

Each attempt to view the deprecated settings results in a pair of error messages in the Kibana log:

{"type":"log","@timestamp":"2022-09-27T14:35:15+01:00","tags":["error","http"],"pid":1656,"message":"ConnectionError: connect EMFILE 10.70.12.10:9200 - Local (undefined:undefined)\n    at ClientRequest.onError (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Connection.js:123:16)\n    at ClientRequest.emit (node:events:390:28)\n    at TLSSocket.socketErrorListener (node:_http_client:447:9)\n    at TLSSocket.emit (node:events:390:28)\n    at emitErrorNT (node:internal/streams/destroy:157:8)\n    at emitErrorCloseNT (node:internal/streams/destroy:122:3)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21) {\n  meta: {\n    body: null,\n    statusCode: null,\n    headers: null,\n    meta: {\n      context: null,\n      request: [Object],\n      name: 'elasticsearch-js',\n      connection: [Object],\n      attempts: 0,\n      aborted: false\n    }\n  },\n  isBoom: true,\n  isServer: true,\n  data: null,\n  output: {\n    statusCode: 503,\n    payload: {\n      statusCode: 503,\n      error: 'Service Unavailable',\n      message: 'connect EMFILE 10.70.12.10:9200 - Local (undefined:undefined)'\n    },\n    headers: {}\n  },\n  [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/esUnavailable'\n}"}
{"type":"error","@timestamp":"2022-09-27T14:35:03+01:00","tags":[],"pid":1656,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n    at HapiResponseAdapter.toInternalError (/usr/share/kibana/src/core/server/http/router/response_adapter.js:61:19)\n    at Router.handle (/usr/share/kibana/src/core/server/http/router/router.js:172:34)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (node:internal/process/task_queues:96:5)\n    at handler (/usr/share/kibana/src/core/server/http/router/router.js:124:50)\n    at exports.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:60:28)\n    at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)\n    at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)\n    at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:371:32)\n    at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:281:9)"},"url":"https://foo.domain:5601/api/upgrade_assistant/es_deprecations","message":"Internal Server Error"}

The test and production cluster Kibana instances are accessed via a load balancer, and browser dev tools show that the request for deprecated Elasticsearch settings is made via a URL that doesn't contain the Kibana server hostname. E.g. the Kibana server hostname is foo.domain, but the Kibana instance is accessed via https://logs.domain/my_kibana/api/upgrade_assistant/es_deprecations, or for the test cluster https://logs-test.domain/my_kibana/api/upgrade_assistant/es_deprecations.

Viewing the logs-test URL in a web browser consistently returns JSON-formatted details of deprecation issues. Attempting to view the production URL consistently results in a delay of somewhere between 10 and 20 seconds followed by a 500 Internal Server Error response. Attempting to access the URL https://foo.domain:5601/api/upgrade_assistant/es_deprecations that's in the log above using curl on foo.domain results in the same behaviour.
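
For reference, the curl invocation looks roughly like this (the auth and TLS flags are illustrative and will vary with your setup); the -w output is where the delay and the 500 show up:

    # Time the Upgrade Assistant API call and keep the response body for inspection
    curl -sk -u "$KIBANA_USER:$KIBANA_PASS" \
      -o /tmp/es_deprecations_response.json \
      -w 'HTTP %{http_code} in %{time_total}s\n' \
      "https://foo.domain:5601/api/upgrade_assistant/es_deprecations"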

I can't find any differences in the setup between test and production, but the production cluster does contain a lot more data than any of the others. Could it be that something involved in the retrieval of deprecation issues cannot cope with there being 160 billion documents in 2400 open indices, plus another 2500 closed indices, most of which were created with Elasticsearch 6 and so, I believe, should appear in the deprecation issues?

I can't find any relevant errors in the Elasticsearch log on the Kibana server or on the master node. Setting Kibana logging to
logging.root.level: debug
doesn't produce any more error messages than the two above.
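
I haven't tried per-logger overrides yet; my understanding is that kibana.yml accepts something like the following to target just the queries Kibana sends to Elasticsearch and the HTTP responses it returns:

    # kibana.yml sketch: per-logger debug instead of a global debug root level
    logging:
      loggers:
        - name: elasticsearch.query     # requests Kibana sends to Elasticsearch
          level: debug
        - name: http.server.response    # HTTP responses Kibana returns to clients
          level: debug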

Can anyone give me some pointers on how to get some information about why that 500 Internal Server Error occurs, or offer an idea of why it's happening and/or what to do about it?

Hi @mikewillis!

Can you try calling the following Elasticsearch API and share the response from your production cluster:

GET /_migration/deprecations (docs)

Kibana is calling this API under the hood to display the Elasticsearch deprecation issues. This will help determine if it's a problem on the ES side or Kibana. Thanks!

The response from

$ curl https://$(hostname):9200/_migration/deprecations?pretty

is 1822068 characters of JSON-formatted data spread over 41060 lines. Dumped to a file it's 1822068 bytes. I'm not going to share it all for confidentiality reasons, but it is exactly the sort of data I expect. There are 4553 messages about indices which need to be re-indexed, e.g.

    "web-access-2022.03.29" : [
      {
        "level" : "critical",
        "message" : "Index created before 7.0",
        "url" : "https://ela.st/es-deprecation-7-reindex",
        "details" : "This index was created with version 6.8.22 and is not compatible with 8.0. Reindex or remove the index before upgrading.",
        "resolve_during_rolling_upgrade" : false
      }
    ],

There are also lots of messages about deprecated settings we have in elasticsearch.yml, e.g.

    {
      "level" : "critical",
      "message" : "Setting [node.ingest] is deprecated",
      "url" : "https://ela.st/es-deprecation-7-node-roles",
      "details" : "Remove the [node.ingest] setting. Set [node.roles] and include the [ingest] role. (nodes impacted: [big, list, of, comma, separated, node, names])",
      "resolve_during_rolling_upgrade" : false
    },

I'm assuming that Kibana is merely displaying the data that comes out of that endpoint in a nice, human-friendly way (i.e. it's not enriching it with information about what to do about problems; that information seems to be in the raw JSON), so we can just work with the raw JSON if Kibana can't be made to display it. The longer we wait to migrate to 8, the more of those indices that need re-indexing will be deleted by Curator. :smiley:
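
To make "work with the raw JSON" concrete, a rough jq sketch against the dumped response (assuming it's saved as deprecations.json and that the index-level issues sit under index_settings keyed by index name, as in the excerpt above):

    # Count the per-index deprecation messages
    jq '[.index_settings[] | .[]] | length' deprecations.json

    # List the indices with at least one critical issue, i.e. the ones to reindex or drop
    jq -r '.index_settings | to_entries[]
           | select(any(.value[]; .level == "critical"))
           | .key' deprecations.json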

Kibana is now displaying the Elasticsearch deprecation issues for the production cluster.


I have not actively changed anything to achieve this, but a lot of indices that would have needed re-indexing have been automatically deleted by Curator due to their age. Perhaps something in the code that makes the deprecation issues visible in Kibana cannot deal with there being "too many" issues.

I have the same issue, but with the Kibana deprecation issues. I can see the Elasticsearch ones.
Is there any way to check those?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.