FileBeat for parsing

I know that Filebeat was not built for parsing data, but rather for filtering and shipping it!

However, looking at the current list of available processors, I realized that if the data is not too complicated, we could parse it in Filebeat and index it into Elasticsearch directly.

The question I wanted to ask is: would a config with a few conditionals and the dissect processor perform better on Filebeat than on Logstash (considering that Filebeat is written in Go)?
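For context, here is a minimal sketch of the kind of Filebeat config I have in mind: a dissect processor splitting a simple space-delimited log line, plus one conditional that drops debug entries. The input id, paths, tokenizer pattern, and output host are placeholders, not a real deployment.

```yaml
# Hypothetical filebeat.yml sketch: dissect + one conditional processor.
filebeat.inputs:
  - type: filestream
    id: app-logs                # placeholder input id
    paths:
      - /var/log/app/*.log      # placeholder path

processors:
  # Split "TIMESTAMP LEVEL MESSAGE" into structured fields.
  - dissect:
      tokenizer: "%{timestamp} %{level} %{msg}"
      field: "message"
      target_prefix: "parsed"
  # Conditional: drop debug-level events before shipping.
  - drop_event:
      when:
        equals:
          parsed.level: "DEBUG"

output.elasticsearch:
  hosts: ["localhost:9200"]     # placeholder host
```

The performance question is essentially whether this pipeline, running in Filebeat's Go process, beats the equivalent dissect filter and conditional in Logstash's JVM.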

I know one answer to that question is to test it, which I plan to do in the coming week. But over the years I have observed Filebeat add many of these processors, which are in a sense a drop-in replacement for Logstash, so I just wanted to understand the reasoning behind this approach.

Logstash is definitely a resource hog and has issues running in Kubernetes, though it works surprisingly well in Docker Swarm. Beats, however, do not have these issues, and I was thinking of replacing Logstash with Beats wherever I could. :hotsprings:
