Reducing the size of Filebeat indices (unused mapping fields)

Hi,

Generally, the default Filebeat mapping creates a lot of fields in my Elasticsearch environment that I never use. I'm wondering about this because the size of the logs is excessively large (around 5-6 GB per day). Can I somehow delete the unused mappings from an index? If this worked like a database, I wouldn't want a table where every record contains around a thousand nulls. How can I trim it down, or do the unused fields not take up any additional disk space?
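
For reference, these are the kinds of requests that could show where the space actually goes (a rough sketch in Kibana Dev Tools syntax; the `_disk_usage` API needs Elasticsearch 7.15 or later, and the index name in the last request is only a placeholder):

```
# List Filebeat indices with their document count and on-disk size
GET _cat/indices/filebeat-*?v&h=index,docs.count,store.size

# Show the full mapping, i.e. every field Filebeat defines (populated or not)
GET filebeat-*/_mapping

# Break down disk usage per field for one concrete index (replace the
# placeholder name with a real index from the _cat output above)
POST filebeat-placeholder-000001/_disk_usage?run_expensive_tasks=true
```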

Please correct me if I am wrong, but for comparison, the same logs from the same source take up roughly 8-10 times less space in Wazuh.

Thanks in advance for your reply, and have a good day :slight_smile:

There is a similar product called Fleet that uses a different model for setting up fields: it configures mappings only for the fields that you're actually using.
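
If you stay on classic Filebeat instead, one way to shrink the template is to point it at a trimmed fields file (a sketch only, assuming a 7.x `filebeat.yml`; `fields-minimal.yml` is a hypothetical reduced copy of the stock `fields.yml` that you would maintain yourself):

```yaml
# filebeat.yml -- load a reduced index template instead of the default one
setup.template.enabled: true
setup.template.overwrite: true               # replace the existing Filebeat template
setup.template.fields: "fields-minimal.yml"  # hypothetical trimmed copy of fields.yml
```

Note that this only changes the template, so it affects indices created after the change; existing indices keep the mapping they already have.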
