Cluster Monitoring Error 7.12.0

Hi,

I'm in the process of a rolling upgrade from 7.10.0 to 7.12.0. My process is to drop an Elasticsearch server from the cluster, bring up a new blank one on the latest version, and let the cluster reassign the shards.
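
For reference, the node swap itself is done roughly like this; the node name and localhost endpoint are just placeholders for my setup:

# Move shards off the node that is about to be retired
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.exclude._name": "els-old-01"
  }
}'

# Watch the shards drain before shutting the old node down
curl -s "localhost:9200/_cat/shards" | grep els-old-01

# Once the new 7.12.0 node has joined and recovered, clear the exclusion
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.exclude._name": null
  }
}'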

During this process, which normally works absolutely fine, I'm having an issue where I cannot access the Cluster monitoring page after a period of time (it works fine to start with, but if I move off the page and come back) I get the following error:

Apr 21 10:59:13 KIBANA-01 kibana: {"type":"log","@timestamp":"2021-04-21T09:59:13Z","tags":["error","elasticsearch","data"],"pid":4481,"message":"[x_content_parse_exception]: [1:11492] [bool] failed to parse field [filter]"}
Apr 21 10:59:13 KIBANA--01 kibana: {"type":"log","@timestamp":"2021-04-21T09:59:13Z","tags":["error","http"],"pid":4481,"message":"{ ResponseError: [1:11492] [bool] failed to parse field [filter]: x_content_parse_exception\n    at IncomingMessage.response.on (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/Transport.js:272:25)\n    at IncomingMessage.emit (events.js:203:15)\n    at endReadableNT (_stream_readable.js:1145:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)\n  name: 'ResponseError',\n  meta:\n   { body: { error: [Object], status: 400 },\n     statusCode: 400,\n     headers:\n      { 'content-type': 'application/json; charset=UTF-8',\n        'content-length': '2467' },\n     meta:\n      { context: null,\n        request: [Object],\n        name: 'elasticsearch-js',\n        connection: [Object],\n        attempts: 0,\n        aborted: false } },\n  isBoom: true,\n  isServer: false,\n  data: null,\n  output:\n   { statusCode: 400,\n     payload:\n      { message:\n         '[1:11492] [bool] failed to parse field [filter]: x_content_parse_exception',\n        statusCode: 400,\n        error: 'Bad Request' },\n     headers: {} },\n  reformat: [Function],\n  [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/badRequest' }"}
Apr 21 10:59:13 KIBANA--01 kibana: {"type":"log","@timestamp":"2021-04-21T09:59:13Z","tags":["error","elasticsearch","data"],"pid":4481,"message":"[x_content_parse_exception]: [1:11479] [bool] failed to parse field [filter]"}
Apr 21 10:59:13 KIBANA--01 kibana: {"type":"error","@timestamp":"2021-04-21T09:59:12Z","tags":[],"pid":4481,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n    at HapiResponseAdapter.toInternalError (/usr/share/kibana/src/core/server/http/router/response_adapter.js:69:19)\n    at Router.handle (/usr/share/kibana/src/core/server/http/router/router.js:177:34)\n    at process._tickCallback (internal/process/next_tick.js:68:7)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/api/monitoring/v1/alert/6MRLZNXMTxm3aY4-RHPBcA/status","path":"/api/monitoring/v1/alert/6MRLZNXMTxm3aY4-RHPBcA/status","href":"/api/monitoring/v1/alert/6MRLZNXMTxm3aY4-RHPBcA/status"},"message":"Internal Server Error"}

Where can I start to look for the root cause? If I drop the server again, it works fine.
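
So far the only things I can think of checking are which nodes are on which version at the moment the error appears (the cluster is mixed 7.10.0/7.12.0 during the swap) and turning on verbose Kibana logging to capture the full request that returns the 400. The commands below assume the default localhost ports:

# Confirm the mixed-version state and which node is master
curl "localhost:9200/_cat/nodes?v&h=name,version,node.role,master"

# Overall cluster state while the monitoring page is failing
curl "localhost:9200/_cluster/health?pretty"

# In kibana.yml (then restart Kibana) to log the full failing query:
#   logging.verbose: true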

The other change on this node was moving to Metricbeat for cluster monitoring.
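
For context, the Metricbeat side of that change is just the stock elasticsearch-xpack module pointed at the local node, roughly:

# Enable the module that ships ES monitoring data instead of internal collection
metricbeat modules enable elasticsearch-xpack

# modules.d/elasticsearch-xpack.yml (hosts value is my local example):
#   - module: elasticsearch
#     xpack.enabled: true
#     period: 10s
#     hosts: ["http://localhost:9200"]

# Sanity-check the config before restarting the service
metricbeat test config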
