File Storage being swallowed up

I have one agent running with the 'Endpoint Security' and 'System' integrations. It is running on a Windows file server and reporting into ELK, which is running on Ubuntu. This one server is consuming 8GB of data a day, and I have circa 60 servers I would like to onboard. 8GB seems like a lot of data. Is this right?

One place to start is tuning the metric collection for the policy in Fleet. For example, you can increase the interval between metric collections, as sketched below.
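As a rough sketch (assuming the System integration's metrics datasets, which typically default to a 10-second period; the exact dataset names and defaults in your policy may differ), that means raising the Period field in each metrics dataset's settings, e.g.:

period: 5m    (instead of the default 10s, set per metrics dataset in the System integration policy)

Collecting a metric every 5 minutes instead of every 10 seconds cuts the number of metric documents by roughly a factor of 30.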

Hi @insurin

One way to start tracking down what type of data is causing that 8GB a day is to look at how much data each index holds; 8GB a day relative to the overall size of data in your Stack should be enough to be noticeable.

If you go to Stack Management -> Index Management -> Data Streams you should be able to see a screen like the one below that gives the size of each data stream. Could you check that size across two days to see which index/indices are filling up? Or, if you have an alternative way to triage where that data volume is coming from, that would work too. Then we can try to find a way to lower that 8GB per day volume.
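If you prefer the APIs over the UI, a couple of read-only requests from Dev Tools give the same information (the data stream pattern below is just an example; adjust it to match your stack):

GET _cat/indices?v&s=store.size:desc&h=index,docs.count,store.size
GET _data_stream/logs-*/_stats?human

The first lists indices sorted by on-disk size; the second reports the total store size per data stream.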


I have disabled the Elastic module on the Windows server to see if the increase in storage size reduced, and it pretty much has. See below for the screenshot. Last week, to free up space, I also ran the commands below:

GET _cat/indices?v
DELETE /.ds-logs-elastic_agent.endpoint_security-default-2021.04.16-000001
(this index was 50GB)

Settings below for the Endpoint Security integration:

The largest index, and the one filling up so fast, is Endpoint Security's own logs being ingested into Elasticsearch. It's not good that logs about Endpoint are taking up so much space; we'll reduce its logging at the default level.

Did you happen to change the Agent log level in Fleet or is it still at the default info level?


I don't recall changing it. Will keep an eye on it over the next few days.


[screenshot: agent]

I will add that, for the time being, a workaround is to disable collecting Agent logs in Fleet settings. That's not a good long-term solution, because collecting logs into Elasticsearch is useful when you want to dig into issues. We'll decrease this logging from Endpoint so that workaround is not needed long term.

If you prefer to keep streaming Agent and Beat logs to Elasticsearch and just reduce Endpoint's logs, there's another temporary workaround. You can change just Endpoint's log level using an Endpoint advanced policy option. If you do this, Endpoint will produce fewer logs on disk (c:\Program Files\Elastic\Endpoint\state\log\endpoint-*.log) as well as reduce the amount of logs stored in Elasticsearch.

To make that change, go to the Endpoint-specific Integration Policy for the host(s) generating too many logs, click "Show advanced settings" at the bottom of the page, enter error for the option windows.advanced.logging.file (and mac. and linux. variants as well if needed), then select "Save" to apply the change.
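For reference, the advanced options and value being set look like this (windows.advanced.logging.file is the setting named above, with mac and linux counterparts; only add the variants for the operating systems you actually run):

windows.advanced.logging.file: error
mac.advanced.logging.file: error
linux.advanced.logging.file: error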

This will override the Fleet log level setting for just Endpoint. I recommend undoing this change after upgrading to 7.13.0 (when it's released). 7.13.0 should have far lower log volume with the default setting, and default logs are useful to diagnose other issues, if you should have any, down the road.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.