Explanation of the principle of log aggregation and normalization


I would like to understand at what level the aggregation and normalization of logs happen in ELK: is it done in Logstash, or in Elasticsearch via an ingest pipeline for Beats? If possible, I would also like to see an example of the result of aggregating and normalizing logs.
Sometimes I read that both Logstash and Elasticsearch can aggregate and normalize logs.
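To make the question concrete, here is a minimal sketch of normalization done in Logstash. It assumes a syslog-style input line; the grok pattern and the target field names are illustrative, not a required schema:

```conf
filter {
  # Parse the raw line into named fields (normalization).
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:host} %{WORD:program}: %{GREEDYDATA:log_message}" }
  }
  # Normalize the parsed timestamp into the standard @timestamp field.
  date {
    match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
  # Rename fields toward a common schema so logs from different sources align.
  mutate {
    rename       => { "program" => "process_name" }
    remove_field => [ "timestamp" ]
  }
}
```

With this kind of filter, a raw line such as `Mar  4 10:15:01 web01 sshd: Accepted password for admin` would become a structured document with `@timestamp`, `host`, `process_name`, and `log_message` fields, which is what I understand by "normalized".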
I have to present ELK and show that it covers all the features of a SIEM, giving examples of:
• Collection
• Aggregation and normalization
• Storage and archiving
• Analysis and alerting
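For the Beats path without Logstash, my understanding is that the same normalization can be done in Elasticsearch itself with an ingest pipeline. A minimal sketch, using the real `grok`, `date`, and `remove` ingest processors (the pipeline name and field names are hypothetical):

```json
PUT _ingest/pipeline/beats-normalize
{
  "description": "Parse and normalize syslog-style events shipped by Beats",
  "processors": [
    { "grok":   { "field": "message",
                  "patterns": ["%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:host} %{WORD:process_name}: %{GREEDYDATA:log_message}"] } },
    { "date":   { "field": "timestamp",
                  "formats": ["MMM  d HH:mm:ss", "MMM dd HH:mm:ss"] } },
    { "remove": { "field": "timestamp" } }
  ]
}
```

Is it correct that Beats can then be pointed at this pipeline (e.g. via the output's `pipeline` setting), so that normalization happens in Elasticsearch instead of Logstash?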

Thank you in advance for your assistance
