Stack monitoring Error fetching alert status


I have an Elasticsearch cluster, now on version 7.10, that a few days ago started to display the message "Error fetching alert status" on the Stack Monitoring screen. At the time I was still on version 7.9.3, and even after upgrading to 7.10.0 the error persists.
Kibana now shows no monitoring information for the cluster, as it appears in the image below.

Has anyone had this problem and knows what it might be?
The only thing I found in the Elasticsearch logs was the error below:

[2020-12-01T10:28:28,827][WARN ][o.e.x.m.MonitoringService] [tio-master01] monitoring execution failed
org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulks
at org.elasticsearch.xpack.monitoring.exporter.ExportBulk$Compound.lambda$doFlush$0( [x-pack-monitoring-7.10.0.jar:7.10.0]
at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$doFlush$1( [x-pack-monitoring-7.10.0.jar:7.10.0]
Caused by: org.elasticsearch.xpack.monitoring.exporter.ExportException: failed to flush export bulk [default_local]

Hi @dedi27,

Welcome to the community!

Can you check the Kibana server log for any errors or warnings and paste them here? Can you also share your kibana.yml configuration file? Feel free to scrub any personal information from it.

Hi Chris,

When I access the Stack Monitoring screen, the error log below appears in the Kibana logs:

{"type":"log","@timestamp":"2020-12-01T18:37:27Z","tags":["error","http"],"pid":1048,"message":"{ Error: Saved object [task/928119f0-33df-11eb-82cd-b779270f0d80] not found\n    at Function.createGenericNotFoundError (/usr/share/kibana/src/core/server/saved_objects/service/lib/errors.js:136:37)\n    at SavedObjectsRepository.get (/usr/share/kibana/src/core/server/saved_objects/service/lib/repository.js:916:46)\n    at process._tickCallback (internal/process/next_tick.js:68:7)\n  data: null,\n  isBoom: true,\n  isServer: false,\n  output:\n   { statusCode: 404,\n     payload:\n      { statusCode: 404,\n        error: 'Not Found',\n        message:\n         'Saved object [task/928119f0-33df-11eb-82cd-b779270f0d80] not found' },\n     headers: {} },\n  reformat: [Function],\n  typeof: [Function: notFound],\n  [Symbol(SavedObjectsClientErrorCode)]: 'SavedObjectsClient/notFound' }"}
{"type":"error","@timestamp":"2020-12-01T18:37:27Z","tags":[],"pid":1048,"level":"error","error":{"message":"Internal Server Error","name":"Error","stack":"Error: Internal Server Error\n    at HapiResponseAdapter.toInternalError (/usr/share/kibana/src/core/server/http/router/response_adapter.js:69:19)\n    at Router.handle (/usr/share/kibana/src/core/server/http/router/router.js:177:34)\n    at process._tickCallback (internal/process/next_tick.js:68:7)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/api/monitoring/v1/alert/p4RsPfDjSVqDb5tz-jSXDA/status","path":"/api/monitoring/v1/alert/p4RsPfDjSVqDb5tz-jSXDA/status","href":"/api/monitoring/v1/alert/p4RsPfDjSVqDb5tz-jSXDA/status"},"message":"Internal Server Error"}
{"type":"response","@timestamp":"2020-12-01T18:37:27Z","tags":[],"pid":1048,"method":"post","statusCode":500,"req":{"url":"/api/monitoring/v1/alert/p4RsPfDjSVqDb5tz-jSXDA/status","method":"post","headers":{"host":"","connection":"keep-alive","content-length":"55","kbn-version":"7.10.0","user-agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36 Edg/87.0.664.41","content-type":"application/json","accept":"*/*","origin":"","sec-fetch-site":"same-origin","sec-fetch-mode":"cors","sec-fetch-dest":"empty","referer":"","accept-encoding":"gzip, deflate, br","accept-language":"pt-BR,pt;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6"},"remoteAddress":"","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36 Edg/87.0.664.41","referer":""},"res":{"statusCode":500,"responseTime":436,"contentLength":9},"message":"POST /api/monitoring/v1/alert/p4RsPfDjSVqDb5tz-jSXDA/status 500 436ms - 9.0B"}
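For reference, the missing saved object in the first log line can be checked directly in the task manager index. This is a hedged sketch, not an official troubleshooting step: it assumes the default 7.x task manager index name `.kibana_task_manager` and a local Elasticsearch on port 9200, and it only prints the request (a dry run) rather than sending it.

```shell
# Hedged sketch: verify whether the task document from the Kibana error
# ("Saved object [task/928119f0-...] not found") still exists.
# Assumptions: default index name ".kibana_task_manager" (no custom
# kibana.index setting) and Elasticsearch reachable on localhost:9200.
ES_URL="http://localhost:9200"
TASK_ID="task:928119f0-33df-11eb-82cd-b779270f0d80"   # id taken from the log above

# Dry run: build and print the request instead of executing it.
CHECK_CMD="curl -s ${ES_URL}/.kibana_task_manager/_doc/${TASK_ID}"
echo "$CHECK_CMD"
```

A 404 from that request would confirm the orphaned-task situation the Kibana log is complaining about.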

My kibana configuration is this:

It looks like you are running into a known issue, which has a fix up for review now.

I'm also running into this error. Until the patch is available, is there any way to cancel/delete the missing task to temporarily get past this error?

One thing to try is to go to Alert Management within Kibana and delete the alerts related to Stack Monitoring:

Then, visit the Stack Monitoring UI again (which will re-create them) and see if that fixes it.
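If you prefer to do this outside the UI, the same cleanup can be sketched against the 7.x Kibana alerting HTTP API. This is a hedged sketch, not a documented procedure: `KIBANA_URL` and `ALERT_ID` are placeholders, and the commands are only printed (a dry run) so you can review them before running anything.

```shell
# Hedged sketch: delete the Stack Monitoring alerts via the 7.x Kibana
# alerting API instead of the Alert Management UI.
# Placeholders (assumptions, not values from this thread):
KIBANA_URL="http://localhost:5601"
ALERT_ID="<id-from-the-_find-response>"

# 1) List alerts so you can spot the Stack Monitoring ones
#    (their alertTypeId values start with "monitoring_alert_").
FIND_CMD="curl -s -H 'kbn-xsrf: true' ${KIBANA_URL}/api/alerts/_find?per_page=100"

# 2) Delete one alert by id; repeat for each Stack Monitoring alert found.
DELETE_CMD="curl -s -X DELETE -H 'kbn-xsrf: true' ${KIBANA_URL}/api/alerts/alert/${ALERT_ID}"

# Dry run: print the requests for review instead of executing them.
echo "$FIND_CMD"
echo "$DELETE_CMD"
```

Revisiting the Stack Monitoring UI afterwards should re-create the alerts, just as with the UI-based procedure above.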


That worked perfectly, thank you!

It worked for me too, thanks!

I followed the procedure, removing the CPU Usage and Missing monitoring data alerts that were causing the problem, and the Stack Monitoring UI went back to working. But some time later, when Kibana recreates the alerts, the problem in the Stack Monitoring UI returns.
I then have to delete the alerts again to get it working.

We recently merged a fix, and it should be publicly available soon. It will ensure the Stack Monitoring UI still loads even if this error occurs.

This behavior is strange, so I also opened an issue to investigate it further.

This week I was finally able to upgrade my ELK stack to version 7.10.1, and Stack Monitoring is working again.
Thank you for your help.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.