I'm concerned about the storage usage in my Elastic/Kibana/Logstash instance. It has only been up for 4 months with 40 Fleet agents, and it has already used 500 GB. Is this normal behaviour?
Are you running all those integrations on all agents? That does not make much sense.
Do you have one single policy for all agents, or multiple policies? Normally you would have multiple policies.
But from what you shared, the amount of data seems to be fine: it works out to roughly 100 MB per host per day, which is reasonable when you are collecting metrics and running an XDR (Elastic Defend), as both can generate a lot of data.
Thanks for the reply.
I will configure index lifecycle policies to reduce the storage use on the VM.
If I configure the policy to move older indices to another VM, will I be able to import them back later if I need to check those logs in the future?
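For reference, an ILM policy can be created from Kibana Dev Tools with the `_ilm/policy` API. A minimal sketch (the policy name and the rollover/delete thresholds below are illustrative, not recommendations — tune them to your own retention needs):

```json
PUT _ilm/policy/my-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

With a policy like this attached to the data streams, indices roll over once they grow past the hot-phase thresholds and are deleted after the `min_age` in the delete phase, which caps total storage growth.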