Tips and best practices for working with and debugging Logstash pipelines on Kubernetes

Hi everyone :wave:

I have set up a BLEK (Beats, Logstash, Elasticsearch, Kibana) stack on Kubernetes :stuck_out_tongue:

I would like to parse and enrich some log output from my services, collected through Filebeat.

Do you have any tips and best practices for working on and iterating over Logstash pipelines?

For example, how can I see the "raw" message Filebeat sends to Logstash, so that I can work with the Grok Debugger and define my filter rules?
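What I have in mind is a minimal debug pipeline like the sketch below, which just dumps every incoming event so I can copy the `message` field into the Grok Debugger (port 5044 is a placeholder from my setup):

```
# Minimal debug pipeline: print every incoming Filebeat event so the
# raw [message] field can be copied into Kibana's Grok Debugger.
input {
  beats {
    port => 5044   # standard Beats port; adjust to your Service/port
  }
}

output {
  # rubydebug pretty-prints the whole event to stdout; on Kubernetes
  # this is visible with `kubectl logs <logstash-pod>`.
  stdout { codec => rubydebug }
}
```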

In the same way, how can I deploy a new rule while being sure I do not lose any events?
Naively, I was thinking about duplicating the stream (see the sketch after the list):

  • one copy uses the previous flow
  • one copy uses my newly defined flow and goes somewhere I can validate it works as expected, before deleting the previous flow
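Something like the following is what I am imagining, assuming the classic `clone` filter behaviour (each copy gets its `type` field set to the clone name; ECS compatibility off) and placeholder host/index names:

```
input {
  beats { port => 5044 }
}

filter {
  # Duplicate every event once; the copy gets [type] == "candidate"
  # (classic clone-filter behaviour, ECS compatibility disabled).
  clone { clones => ["candidate"] }

  if [type] == "candidate" {
    # New rules under test go here, e.g. a new grok pattern.
    grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  }
  # Events without the "candidate" type keep flowing through the
  # previous (unchanged) filter rules.
}

output {
  if [type] == "candidate" {
    # New flow: write to a separate index for validation in Kibana.
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "filebeat-candidate-%{+YYYY.MM.dd}"
    }
  } else {
    # Previous flow: unchanged production index, so no events are lost.
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "filebeat-%{+YYYY.MM.dd}"
    }
  }
}
```

Once the candidate index looks right in Kibana, I would move the new rules into the main flow and drop the clone. Is that a reasonable pattern, or is there a more idiomatic way to do it?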

Thank you all for your tips and suggestions :slight_smile:
