Since I found no convenient or supported way to get Elastic Agents out of "stuck" or "erroneously displayed" states in Kibana → Fleet → Agents, I did it inconveniently and perhaps unsupportedly, this way (thanks @lesio for pointing me in this direction):
Disclaimer: don’t try this on your production ELK.. I guess..
- Get yourself some privileges on an internal, hidden system-index:
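A role with access to restricted system indices can be created via the security API; this is only a sketch - the role name `fleet_index_fixer` is made up, and `allow_restricted_indices: true` is the part that lifts the system-index restriction:

```
POST /_security/role/fleet_index_fixer
{
  "indices": [
    {
      "names": [ ".fleet-agents" ],
      "privileges": [ "read", "write" ],
      "allow_restricted_indices": true
    }
  ]
}
```

Assign that role to your user (or do all of this as a superuser, at your own risk).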
- Discover this index - e.g.:
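A plain search against the alias is enough to see what is in there - a sketch; sorting by `updated_at` assumes that field exists in your mapping (the concrete backing index may be versioned, e.g. `.fleet-agents-7`):

```
GET .fleet-agents/_search
{
  "size": 10,
  "sort": [ { "updated_at": "desc" } ]
}
```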
- Filter for specific Agents - e.g. via “local_metadata.host.hostname”:
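For example, a term query on the hostname - `my-stuck-host` is a placeholder; if the field is mapped as `text` rather than `keyword` in your cluster, a `match` query may be needed instead:

```
GET .fleet-agents/_search
{
  "query": {
    "term": { "local_metadata.host.hostname": "my-stuck-host" }
  }
}
```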
- Delete all ancient, antique, old or otherwise not-recent documents from the index (e.g. all docs except the most recent one)
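A `_delete_by_query` can do this in one go - but run the same query as a `_search` first and double-check the hits, since there is no undo. The hostname and the cutoff date here are placeholders, and the `updated_at` range assumes that field carries the document's last-update time:

```
POST .fleet-agents/_delete_by_query
{
  "query": {
    "bool": {
      "must": [
        { "term": { "local_metadata.host.hostname": "my-stuck-host" } },
        { "range": { "updated_at": { "lt": "2024-01-01T00:00:00Z" } } }
      ]
    }
  }
}
```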
- Fix the current document - painlessly (but highly discouraged..) until it is equivalent to the agent's real state (which you should verify locally - we often see agents that are already upgraded on the local system but displayed with a lower version in Kibana, resisting every upgrade attempt via Fleet..)
e.g.: clear the “audit_unenrolled_reason”:
```
POST .fleet-agents/_update_by_query
{
  "query": {
    "term": {
      "agent.id": "2x7x44f4-7954-4478-xxxx-2c07xx907f17"
    }
  },
  "script": {
    "source": "ctx._source.audit_unenrolled_reason = null;",
    "lang": "painless"
  }
}
```
I cleared the "orphaned" string last - after resetting all possibly incorrect date fields (using Painless like above..)
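The date-field reset looked roughly like this - note that the field names `unenrolled_at` and `audit_unenrolled_time` are assumptions here; use whatever stale date fields your own document actually contains, and the agent id is the same placeholder as above:

```
POST .fleet-agents/_update_by_query
{
  "query": {
    "term": {
      "agent.id": "2x7x44f4-7954-4478-xxxx-2c07xx907f17"
    }
  },
  "script": {
    "source": "ctx._source.unenrolled_at = null; ctx._source.audit_unenrolled_time = null;",
    "lang": "painless"
  }
}
```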
btw: I hope we get a convenient and supported way to fix this before the next major version comes along, given the warning every one of these requests returns:

```
#! this request accesses system indices: [.fleet-agents-7], but in a future major version, direct access to system indices will be prevented by default
```


