If we have a Fleet-managed agent installed and that agent then goes offline with no "line of sight" to the Elasticsearch/Fleet Server, how are events that are captured through integrations handled? Say you had an Ubuntu host with Elastic Defend enabled and the auditd integration enabled. How long would those events be cached, and where? Is this configurable somewhere? I have an environment where I would like to take advantage of the agents, but they would only be online during scheduled times. Thank you.
Elastic Agent only has a memory queue, and you can configure the queue size in the output settings (see the example below). Once the queue is full, new events will be dropped until the Agent is able to send logs again.
I'm not sure about Elastic Defend as I do not use it, but according to this answer on another question it has a disk buffer; the size is not specified, and I could not find anything in the documentation.
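For a Fleet-managed agent, here is a minimal sketch of where that queue size lives, assuming it goes into the output's "Advanced YAML configuration" box under Fleet > Settings > Outputs (these are the Agent's performance-tuning keys; the values are only illustrative):

```yaml
# Advanced YAML configuration for the output in Fleet > Settings > Outputs.
# queue.mem.events caps how many events the Agent holds in memory for this output.
queue.mem.events: 3200            # default-sized queue; raise to buffer more events
queue.mem.flush.min_events: 1600  # batch size the queue tries to hand to the sender
queue.mem.flush.timeout: 10s      # flush a partial batch after this long
```

For a standalone agent, the same keys should go under the output in elastic-agent.yml, if I'm not mistaken.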
Defend will buffer up to 500MB of events on disk. This is a hard-coded limit, not user-configurable. If there's an overflow, the oldest events are discarded to make space.
Thank you. Any idea about other integrations, such as auditd logs on Linux? Is it just a matter of how large the actual auditd log grows, with the agent keeping a breadcrumb of which event was last pushed to the server? Thank you!
Auditd, and any other integration that is not Elastic Defend, will use the memory queue.
The queue size is defined in number of events; the default is 3200, but you can change it (see the sizing sketch below).
If you have other integrations, like system or metrics, their events will use the same queue.
Once the queue is full, the Agent stops accepting new events and will only start accepting them again after the connection to the output is restored.
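If the hosts will only be online during scheduled windows, the only knob on the Agent side is making that in-memory queue larger, which trades RAM for buffering headroom. A hedged sketch using the same output settings as above; the numbers are assumptions for illustration, not recommendations, and actual memory use depends on your event sizes:

```yaml
# Illustrative only: a larger in-memory queue to cover a longer gap between check-ins.
# Every buffered event stays in RAM, so size this against the host's memory budget.
queue.mem.events: 50000           # up from the 3200 default
queue.mem.flush.min_events: 1600  # keep batches at a sensible size for the output
queue.mem.flush.timeout: 10s
```

Whatever does not fit in the queue while the output is unreachable is still lost, so this only stretches the window rather than removing the limit.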
Oh, interesting. So the Elastic Agent is really not a good solution at all for endpoints that might be air-gapped for an extended period of time? I thought there might be some sort of disk-based cache that could be configured.
Do you have a reference for where that queue size is controlled? Thank you very much.