I have a seemingly simple yet tricky workflow I am trying to set up with just Filebeat and Kafka.
In particular, I am trying to pipe Apache logs to Kafka via Filebeat, but I would like to partition the data in Kafka by origin IP address. (Kafka preserves message ordering only within a partition, so this would preserve the ordering of events at least per IP address.)
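To spell out the reasoning above: Kafka routes a keyed message to a partition by hashing its key, so all events that share a key land on the same partition and stay ordered relative to each other. A minimal sketch of the idea (`partition_for` is a hypothetical helper, and md5 stands in for Kafka's actual murmur2 partitioner; this is an illustration, not the Kafka client API):

```python
import hashlib

def partition_for(key: str, num_partitions: int = 6) -> int:
    """Map a key to a partition, deterministically.

    Real Kafka clients use murmur2 for this; md5 is used here only
    to keep the sketch deterministic and dependency-free.
    """
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events keyed by the same IP map to the same partition,
# so their relative order is preserved.
assert partition_for("203.0.113.7") == partition_for("203.0.113.7")
```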
The Kafka output in Filebeat supports configuring a key using a format string. Is there a way to set it so that it extracts the IP address at the beginning of each Apache log entry (e.g. with a regexp)?
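For reference, this is roughly what the key setting on Filebeat's Kafka output looks like. The key is a format string over fields that already exist on the event; the field name `fields.src_ip` below is hypothetical, and the point of the question is that nothing in this config can populate it from the raw message:

```yaml
output.kafka:
  hosts: ["kafka1:9092"]        # placeholder broker address
  topic: "apache-logs"          # placeholder topic name
  # key is a format string referencing existing event fields;
  # there is no option here to run a regexp over the raw log line.
  key: '%{[fields.src_ip]}'
```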
I don't think that's possible as of today, as most of the processing happens later in the chain, in an Elasticsearch ingest node or in Logstash.
We are working on something that should help with this task, but it will take some time to land: https://github.com/elastic/beats/pull/6925.
Something you could do as of today is deploy Logstash, process the logs there, and then use its Kafka output once the IP field has been extracted.
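A sketch of what that Logstash pipeline could look like, assuming Apache logs in the classic combined format (broker address and topic name are placeholders):

```
input {
  beats { port => 5044 }
}

filter {
  # The COMBINEDAPACHELOG grok pattern extracts the client IP
  # (clientip) along with the other standard Apache fields.
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
}

output {
  kafka {
    bootstrap_servers => "kafka1:9092"
    topic_id          => "apache-logs"
    # Key messages by origin IP so per-IP ordering is preserved.
    message_key       => "%{clientip}"
  }
}
```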
I see. Yeah, we were trying to keep the chain super thin and avoid deploying Logstash with the whole Java runtime. Let's see if we can find a workaround.
Thanks a lot for your answer!
BTW, the PR you pointed me to was opened three days ago. Nice timing!
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.