Our Elasticsearch audit logs take 20-25 GB per day. I'm using Elasticsearch 7.16.2 and the same version of Filebeat. I have enabled auditing by setting the audit keys in the elasticsearch.yml file.
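For reference, a typical 7.x audit setup along these lines is sketched below (this is an assumption, not my exact file; the event settings shown are the ones discussed later in this thread):

# Sketch of a typical 7.x audit configuration (assumed, exact keys may differ)
xpack.security.audit.enabled: true
# "_all" logs every auditable event type (very verbose):
xpack.security.audit.logfile.events.include: [_all]
# Logs the full request body of every audited request (adds a lot of volume):
xpack.security.audit.logfile.events.emit_request_body: true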
Below is my filebeat.yml configuration.
###################### Filebeat Configuration #########################
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# filestream is an input for collecting log messages from files.
- type: filestream
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\Elastic\logs\elasticsearch_audit-*.json
@Nitin08bisht This will definitely have a big effect. This setting logs the full request body of every write and read, which results in more than double the amount of data you're ingesting. When I enabled this, my Elastic cluster exploded lol.
Unfortunately this setting enables extensive auditing, which is sometimes needed for compliance etc. If only Elastic would give us the ability to enable this auditing setting only for some highly sensitive indices.
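For anyone reading later, turning that setting off would look like this (assuming the key under discussion is the request-body one, which matches the description above):

# Assumed: the setting described above is the request-body audit key
xpack.security.audit.logfile.events.emit_request_body: false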
Thanks for the solution, it really helped me a lot. Before applying the key in the elasticsearch.yml file, the Elasticsearch audit logs took far too much space; after applying it, their size has reduced considerably.
Yes, agreed, Elastic needs to provide better/easier audit logging.
Yup, totally true if you have the _all event type enabled.
One thing you can do is, in the Filebeat audit module config, drop events that do not contain the indices you are interested in, as sketched below... not great, but it can be done, either there or in an ingest pipeline.
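Rough sketch in filebeat.yml (the index name is a placeholder, and matching on the raw message field assumes the audit JSON line has not been parsed into fields yet):

processors:
  # Drop any audit event whose raw JSON line does not mention the index we care about.
  # "my-sensitive-index" is a placeholder; adjust the condition to your own indices.
  - drop_event:
      when:
        not:
          contains:
            message: "my-sensitive-index"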
@stephenb Thanks for the suggestion, but imho that's not an ideal solution, for multiple reasons mostly related to unnecessary load (big envs). Atm I'm filtering my audit logs in a Logstash pipeline (roughly sketched below), because the available filtering options in Filebeat are not granular enough tbh.
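Something along these lines (the [indices] field name assumes the audit JSON has already been parsed into event fields; the index name is a placeholder):

filter {
  # Keep only audit events that reference the indices we care about.
  # [indices] assumes the audit JSON line was parsed upstream;
  # "my-sensitive-index" is a placeholder.
  if "my-sensitive-index" not in [indices] {
    drop { }
  }
}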
Totally agree, not ideal... I hadn't mentioned Logstash; that is a good place to do the filtering... It will be better when Elasticsearch provides finer native granularity.
Audit logs are still taking 20 GB per day after setting the mentioned key to false. Requesting you to please look into this issue, as Nitin and I are on the same team and facing the same problem.
Ahhh, that is probably because your subscription level does not offer guided support, only break/fix... Nonetheless, this is still a community forum with no SLAs or promises.
I can refer you to the docs, but you will need to read them and figure it out.
But the bottom line is: if you have that high a rate of authentications and you want to audit each and every one, then the audit logs are going to take whatever space they take. There's no magic fix.
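That said, if you don't actually need every successful authentication audited, the include/exclude settings in the docs are where to look. For example (event name per the 7.x docs; verify against your version):

# Example: keep auditing everything except successful authentications,
# which are usually the highest-volume events.
xpack.security.audit.logfile.events.exclude: [authentication_success]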