I have been reconfiguring my Elastic Cloud setup to try to improve performance. I created a frozen tier node, and a lot of indices have moved from warm to searchable snapshots. However, looking at the CPU of the frozen node I can see spikes, and these coincide with spikes in alert executions and field_caps events on that node - Elasticsearch.audit.action: indices:data/read/field_caps[index][s]
I have been using Winlogbeat with rollover and have about 150 winlogbeat indices in the searchable snapshots that the field_caps event runs against.
I am looking to see whether this is expected/normal. I am moving away from Winlogbeat to the Fleet integrations/data streams and hoping this will help. It seems a bit odd that old indices are still causing CPU usage for alerts.
Hoping that someone can help shed some light on this so that I can better tune/plan for the future.
I did use the ILM GUI: stored on hot for about 7 days, warm for 30 days, then frozen (searchable snapshots in Azure). The look-back on the alerts is at most 60 minutes.
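For context, the policy I built in the GUI should be roughly equivalent to the sketch below (the policy name, rollover thresholds, and snapshot repository name are placeholders I've made up, not my actual values):

```
PUT _ilm/policy/winlogbeat-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d", "max_primary_shard_size": "50gb" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {}
      },
      "frozen": {
        "min_age": "37d",
        "actions": {
          "searchable_snapshot": { "snapshot_repository": "azure-repo" }
        }
      }
    }
  }
}
```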
field_caps is not something I have run myself; it is just an event that I see on all of the nodes, but I am trying to correlate the SIEM alerts running, the events on the frozen node, and the CPU spikes. Within the event, the field Elasticsearch.audit.indices lists all of the winlogbeat-00000xx indices across hot, warm, and frozen.
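If I understand the audit events correctly, the shard-level action I'm seeing (indices:data/read/field_caps[index][s]) is the fan-out from a request along these lines, which would be issued internally when a rule resolves its index pattern:

```
GET winlogbeat-*/_field_caps?fields=*
```

Since the pattern matches every backing index regardless of tier, I assume that is why the frozen node sees these events even though the alert look-back is only 60 minutes.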