I have a scenario where I need to push logs from edge nodes to a central ES cluster. I have to process multiline log messages, and the log messages need dissect/grok-style behaviour to extract parts of them into fields like date, process, etc. Filebeat handles the pushing and the multiline parsing, but does not offer grok. The other option is to use the LS forwarder; however, I am not sure whether the plan is to maintain it or whether it will be deprecated in favour of Filebeat. Also, there are multiple files to be pushed, and I am not sure whether Logstash with the multiline codec will work properly with multiple file sources.
logstash-forwarder was deprecated a long time ago and replaced by Filebeat, which has many more capabilities and is continuously being developed and maintained. I would recommend you use Filebeat to collect logs and perform any multiline processing before sending the data on to Logstash for further processing and enrichment.
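As a rough sketch of what that looks like on the Filebeat side, something like the following could handle multiple files and join multiline messages before forwarding to Logstash. The paths, the multiline pattern (which assumes each log entry starts with an ISO-style date), and the Logstash host are placeholders you would adapt to your own setup:

```yaml
# Hypothetical filebeat.yml fragment -- paths, pattern, and host are examples.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log          # multiple files per input are supported
    multiline:
      pattern: '^\d{4}-\d{2}-\d{2}'   # a new event starts with a date
      negate: true                    # lines NOT matching the pattern...
      match: after                    # ...are appended to the previous event

output.logstash:
  hosts: ["logstash.example.com:5044"]
```

With this, each multiline entry arrives at Logstash as a single event, so the grok/dissect work can be done centrally.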
@Christian_Dahlqvist thanks for the prompt response. I did not expect a response on a Sunday.
- Can I not write a custom processor for Filebeat to do the job?
- If I have to do centralized grokking, is it wise to have Logstash and ES on the same server, with multiple ES nodes talking to each other for replication? For example, on 10 machines I could run both LS and ES, where each machine's LS only talks to its local ES, and all the ES nodes form a cluster to provide redundancy.
Filebeat is designed to use as few resources as possible, which is why CPU-intensive processing such as grok is not supported. You can naturally fork Filebeat and add whatever logic you like, but you would end up with a custom solution that you would have to maintain going forward.
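For reference, the grok work Filebeat deliberately avoids is straightforward in a Logstash pipeline. This is only a sketch: the grok pattern assumes lines shaped like `2017-01-15 12:00:00 myproc[123] some message`, and the port and ES host are example values:

```conf
# Hypothetical Logstash pipeline -- adjust the pattern to your log format.
input {
  beats {
    port => 5044                     # receives events from Filebeat
  }
}
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:date} %{WORD:process}\[%{NUMBER:pid}\] %{GREEDYDATA:msg}"
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}
```

Keeping this extraction logic in Logstash means the edge nodes stay lightweight while the field parsing lives in one central place.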
It is always recommended to deploy Logstash and Elasticsearch on dedicated hosts as it reduces resource contention and makes it easier to troubleshoot.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.