In the past, most of the filebeat modules used elasticsearch ingest pipelines to enrich documents. This keeps the beats as lightweight as possible, since the work is done centrally, and it scales well because more ingest nodes can be deployed when needed. Logs are shipped in raw format, keeping the network load as small as possible. The introduction of elastic-agent and the ingest-management integration packages makes this even simpler, since most of the module configuration, such as pipelines, does not need to be shipped to the beats. Developing or fixing a module is also easy, since most of the work can be done as pipeline development in the Kibana Dev Tools.
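Just to illustrate what I mean by the central approach: the beat only ships the raw log line, and a pipeline roughly like the one below does the parsing on the ingest node. It can be developed and tested directly in the Kibana Dev Tools. (The pipeline name, grok pattern and fields here are made up, not from a real module.)

```
PUT _ingest/pipeline/filebeat-example-module
{
  "description": "Example: enrichment happens centrally on the ingest node",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{IPORHOST:source.ip} %{WORD:http.request.method} %{URIPATH:url.path}"]
      }
    },
    {
      "geoip": {
        "field": "source.ip",
        "target_field": "source.geo",
        "ignore_missing": true
      }
    }
  ]
}
```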
Recently a lot of new datasets and modules have been added to filebeat, which is great!
But it seems that now most of the enrichment is done with JavaScript processors in filebeat itself. This means higher resource requirements on the beats, more network load, and more complex pipeline development.
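For comparison, this is roughly what the edge-side style looks like with filebeat's script processor; the fields and parsing logic here are only illustrative, but the point is that the JavaScript runs on the beat itself instead of on an ingest node:

```yaml
# Sketch of edge-side enrichment in filebeat.yml (illustrative only)
processors:
  - script:
      lang: javascript
      source: >
        function process(event) {
          var msg = event.Get("message");
          if (msg) {
            var parts = msg.split(" ");
            event.Put("source.ip", parts[0]);
            event.Put("http.request.method", parts[1]);
          }
        }
```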
Is there a roadmap for how the modules will evolve in the future, i.e. will enrichment be done centrally or distributed on the beats?
Regards
Bernhard