[ Deployment: Filebeat as a Kubernetes daemonset running on dozens of nodes, sending data directly to ES (AWS ES), i.e. no intermediate Logstash involved. ] I have not configured or specified any pipelines in my Filebeat config either.
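For reference, my setup looks roughly like this (hosts, paths, and the module name are illustrative, not my exact config):

```yaml
# filebeat.yml (simplified sketch)
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log

filebeat.modules:
  - module: nginx    # example module whose parsing behavior I'm asking about

output.elasticsearch:
  hosts: ["https://my-aws-es-endpoint:443"]   # placeholder AWS ES endpoint

# Note: no "pipeline:" setting and no Logstash output anywhere.
```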
In terms of processing load, where does the data processing take place?
- Does the initial parsing happen on the Filebeat node?
- Are any processors executed on ES? (e.g. rename/drop field, etc.)
- Specifically, where does the grokking specified by the Filebeat modules take place?
As I understand it, some processors, such as geoip/user_agent for nginx, are executed on ES as ingest pipelines.
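For context, my assumption is that any module-installed pipelines (and their grok/geoip/user_agent processors) would be visible on the ES side with something like the following (the endpoint is a placeholder):

```shell
# List ingest pipelines that Filebeat modules loaded into Elasticsearch.
# If grok runs ES-side, the module's grok processors should appear here.
curl -s "https://my-aws-es-endpoint:443/_ingest/pipeline/filebeat-*?pretty"
```

Is checking for `filebeat-*` ingest pipelines like this the right way to confirm which processing runs on the cluster rather than on the Filebeat nodes?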