Stack Monitoring does not show cluster information and logs some errors

Hi,

My Elasticsearch cluster has not shown monitoring information for some time.
I tried to solve the problem by enabling Metricbeat collection, but when I start Kibana some errors appear, as described below.
I can't figure out what's causing this, but I noticed it started after I activated some features, such as setting up an Elastic Agent test.
Can anyone shed some light on what could be causing this?

{"type":"log","@timestamp":"2021-03-11T11:24:06-03:00","tags":["warning","plugins","securitySolution"],"pid":1962,"message":"Unable to verify endpoint policies in line with license change: failed to fetch package policies: missing authentication credentials for REST request [/.kibana/_search?size=100&from=0&rest_total_hits_as_int=true]: security_exception"}
{"type":"log","@timestamp":"2021-03-11T11:31:28-03:00","tags":["error","plugins","monitoring","monitoring"],"pid":1962,"message":"Error: Unable to find the cluster in the selected time range. UUID: p4RsPfDjSVqDb5tz-jSXDA\n    at getClustersFromRequest (/usr/share/kibana/x-pack/plugins/monitoring/server/lib/cluster/get_clusters_from_request.js:98:32)\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:93:5)\n    at Object.handler (/usr/share/kibana/x-pack/plugins/monitoring/server/routes/api/v1/cluster/cluster.js:59:20)\n    at handler (/usr/share/kibana/x-pack/plugins/monitoring/server/plugin.js:363:28)\n    at Router.handle (/usr/share/kibana/src/core/server/http/router/router.js:163:30)\n    at handler (/usr/share/kibana/src/core/server/http/router/router.js:124:50)\n    at module.exports.internals.Manager.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/toolkit.js:45:28)\n    at Object.internals.handler (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:46:20)\n    at exports.execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/handler.js:31:20)\n    at Request._lifecycle (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:312:32)\n    at Request._execute (/usr/share/kibana/node_modules/@hapi/hapi/lib/request.js:221:9) {\n  data: null,\n  isBoom: true,\n  isServer: false,\n  output: {\n    statusCode: 404,\n    payload: {\n      statusCode: 404,\n      error: 'Not Found',\n      message: 'Unable to find the cluster in the selected time range. UUID: p4RsPfDjSVqDb5tz-jSXDA'\n    },\n    headers: {}\n  }\n}"}
{"type":"log","@timestamp":"2021-03-11T11:36:06-03:00","tags":["warning","plugins","monitoring","monitoring","kibana-monitoring"],"pid":1962,"message":"Error: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [6129073082/5.7gb], which is larger than the limit of [6120328396/5.6gb], real usage: [6129070248/5.7gb], new bytes reserved: [2834/2.7kb], usages [request=0/0b, fielddata=12615/12.3kb, in_flight_requests=7462/7.2kb, model_inference=0/0b, accounting=104851908/99.9mb], with { bytes_wanted=6129073082 & bytes_limit=6120328396 & durability=\"PERMANENT\" }\n    at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)\n    at checkRespForFailure (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)\n    at HttpConnector.<anonymous> (/usr/share/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)\n    at IncomingMessage.wrapper (/usr/share/kibana/node_modules/lodash/lodash.js:4991:19)\n    at IncomingMessage.emit (events.js:327:22)\n    at endReadableNT (internal/streams/readable.js:1327:12)\n    at processTicksAndRejections (internal/process/task_queues.js:80:21)"}
{"type":"log","@timestamp":"2021-03-11T11:36:06-03:00","tags":["warning","plugins","monitoring","monitoring","kibana-monitoring"],"pid":1962,"message":"Unable to bulk upload the stats payload to the local cluster"}
{"type":"log","@timestamp":"2021-03-11T11:36:09-03:00","tags":["error","plugins","fleet"],"pid":1962,"message":"[parent] Data too large, data for [<http_request>] would be [6380729444/5.9gb], which is larger than the limit of [6120328396/5.6gb], real usage: [6380728488/5.9gb], new bytes reserved: [956/956b], usages [request=0/0b, fielddata=12615/12.3kb, in_flight_requests=5584/5.4kb, model_inference=0/0b, accounting=105031568/100.1mb]: circuit_breaking_exception"}

It looks like the Kibana plugins are trying to send requests to Elasticsearch, but the node's JVM heap is nearly full, so the parent circuit breaker is rejecting those requests (the `circuit_breaking_exception: Data too large` errors above). You should look into the health of your cluster, in particular the JVM heap pressure on your Elasticsearch node(s).
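As a first diagnostic step (a sketch, assuming you can reach the cluster from Kibana Dev Tools or curl with valid credentials), you could check overall health and the per-node heap and breaker statistics to confirm which node is tripping the parent breaker:

```
# Overall cluster health (status, unassigned shards, etc.)
GET _cluster/health

# Per-node JVM heap usage and parent circuit breaker state;
# compare jvm.mem.heap_used_percent against the breaker limit
GET _nodes/stats/jvm,breaker?filter_path=nodes.*.jvm.mem,nodes.*.breakers.parent
```

If `heap_used_percent` is consistently near the breaker limit (your logs show ~5.7gb used against a 5.6gb limit), the usual remedies are giving the node more heap, reducing shard/field cardinality, or spreading the load across more nodes.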

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.