I have set up a
BLEK (Beats, Logstash, Elasticsearch, Kibana) stack on Kubernetes.
I would like to parse and enrich some log output from my services, collected through Filebeat.
Do you have any tips and best practices for working on and iterating with
Logstash pipelines?
For example, I am wondering how to see the raw message that
Filebeat sends to
Logstash, so that I can work with the Grok Debugger and define my filter rules.
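One way to do this (a minimal sketch, assuming Filebeat ships to Logstash on port 5044) is a debug pipeline with no filters and a `stdout` output using the `rubydebug` codec, which prints every event with all its fields to the Logstash pod's logs:

```
# debug-pipeline.conf -- sketch only, port and setup are assumptions
input {
  beats {
    port => 5044
  }
}

# No filters yet: the goal is to see the event exactly as Filebeat sends it.

output {
  # Prints each event (including the "message" field) to Logstash's stdout,
  # readable via `kubectl logs <logstash-pod>`.
  stdout { codec => rubydebug }
}
```

You can then copy the `message` value from the output into Kibana's Grok Debugger (under Dev Tools) and iterate on your patterns there before adding a `grok` filter to the real pipeline.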
Along the same lines, how can I deploy a new rule while being sure I do not lose any events?
Naively, I was thinking about duplicating the stream:
- one copy goes through the previous flow
- one copy goes through my newly defined flow and lands somewhere I can validate that it works as expected, before deleting the previous flow
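This duplication idea maps onto Logstash's pipeline-to-pipeline communication (the "forked path" pattern): an intake pipeline clones each event to two downstream pipelines. A sketch, where the pipeline names, hosts, and index names are my own illustrative choices, not anything standard:

```
# pipelines.yml -- forked-path sketch; names and endpoints are assumptions
- pipeline.id: intake
  config.string: |
    input { beats { port => 5044 } }
    output {
      # sends a copy of every event to both downstream pipelines
      pipeline { send_to => ["old-flow", "new-flow"] }
    }

- pipeline.id: old-flow
  config.string: |
    input { pipeline { address => "old-flow" } }
    # existing filters stay untouched here
    output { elasticsearch { hosts => ["http://elasticsearch:9200"] index => "logs-current" } }

- pipeline.id: new-flow
  config.string: |
    input { pipeline { address => "new-flow" } }
    # the new rule under test writes to a separate index for validation
    filter { grok { match => { "message" => "%{COMBINEDAPACHELOG}" } } }
    output { elasticsearch { hosts => ["http://elasticsearch:9200"] index => "logs-candidate" } }
```

Once the candidate index looks correct, the new filter can be promoted into the main flow and the fork removed.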
Thank you all for your tips and suggestions!