File Storage being swallowed up

I have 1 agent running with the 'Endpoint Security' and 'System' integrations. This is running on a Windows file server and reporting into ELK, which is running on Ubuntu. This one server is consuming 8 GB of data a day, and I have circa 60 servers I would like to onboard. 8 GB seems like a lot of data. Is this right?

You can start by adjusting the metrics collection for the agent policy in Fleet — for example, by increasing the interval between metric collections so that metrics are reported less often.
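As a rough sketch of what that looks like — assuming the System integration's metric datasets are the ones you want to slow down — each metrics stream exposes a collection period you can raise. The excerpt below is illustrative, not your actual policy:

```yaml
# Hypothetical excerpt of an agent policy's System integration settings.
# Raising "period" means fewer documents per day; field names here are illustrative.
- name: system-1
  type: system/metrics
  streams:
    - data_stream:
        dataset: system.cpu
      period: 60s   # default is often 10s; a longer period = less data volume
    - data_stream:
        dataset: system.memory
      period: 60s
```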

Hi @insurin

One way to start tracking down what type of data is behind that 8 GB a day is to look at how much data is in each index — assuming 8 GB a day is large enough relative to the overall data in your Stack to be noticeable.

If you go to Stack Management -> Index Management -> Data Streams you should see a screen like the one below that gives the size of each index. Could you check those sizes across two days to see which index/indices are filling up? If you have another way to triage where that data volume is coming from, that works too. Then we can try to find a way to lower that 8 GB per day.
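If the UI is awkward for comparing across days, the `_cat` API reports the same numbers. For example, listing indices sorted by store size (the `s`, `bytes`, and `h` parameters are standard `_cat` options):

```
GET _cat/indices?v&s=store.size:desc&bytes=gb&h=index,docs.count,store.size
```

Running this once a day and diffing the `store.size` column shows which index is growing fastest.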


I have disabled the elastic module on the Windows server to see if the growth in storage slowed, and it pretty much has; see the screenshot below. Last week, to free up space, I also ran the commands below:

GET _cat/indices?v
DELETE /.ds-logs-elastic_agent.endpoint_security-default-2021.04.16-000001 (this index was 50 GB)

The settings for the Endpoint Security integration are below.

The largest index you have, and the one filling up so fast, is the Endpoint Security logs being ingested into Elasticsearch. It's certainly not good that logs about Endpoint itself take up so much space; we'll reduce the logging at the default level.

Did you happen to change the Agent log level in Fleet, or is it still at the default info level?
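One way to see what's actually filling those logs — a hedged sketch, assuming the index pattern matches the data stream you deleted earlier and that the logs use the standard ECS `log.level` field — is a quick terms aggregation:

```
GET logs-elastic_agent.endpoint_security-*/_search
{
  "size": 0,
  "aggs": {
    "levels": {
      "terms": { "field": "log.level" }
    }
  }
}
```

If the counts are dominated by info or debug entries, that points at the log level rather than genuine error volume.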


I don't recall changing it. I'll keep an eye on it over the next few days.


I will add that, for the time being, a workaround is to disable collecting Agent logs in the Fleet settings. That's not a good long-term solution, because collecting Agent logs into Elasticsearch is useful when you want to dig into issues. We'll decrease this logging from Endpoint so that the workaround is not needed long term.
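If you do keep collecting Agent logs, shorter retention via ILM can cap the growth in the meantime. A minimal sketch, assuming you attach this policy to the `logs-elastic_agent.endpoint_security-*` backing indices through their index template (the policy name and the 7-day age are illustrative choices, not recommendations):

```
PUT _ilm/policy/agent-logs-short-retention
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "7d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

This way old backing indices roll off automatically instead of needing manual DELETE calls like the one you ran last week.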