Questions regarding auto-reload and encryption

Apologies beforehand for the dilettante question.

I have an up-and-running Logstash configuration. Now and then I edit my grok parser, and then I always have to run systemctl restart logstash so that the latest config is applied (I guess I'm losing logs until the pipeline is back online).

I already read this article.
However, for some reason this is still clear as mud to me.
So my questions are as follows:

  1. If I simply set --config.reload.automatic to true, does that mean that after the changes I make to the grok filter, the config will be auto-reloaded on its own (depending on the interval at which it checks for any new stuff)?
  2. And another question: while the pipeline is reloading (no matter whether it's automatic or me doing it manually), I'm losing logs, am I not? And if yes, what can I do? A persistent queue, maybe (but currently the logs are coming via UDP, although a friend told me to get them via TCP and encrypted)? To my understanding, if the logs are coming from syslog via UDP, a PQ is not a viable option?
  1. Yes.
  2. If it's UDP, then yes they are lost.
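
For reference, --config.reload.automatic is a command-line flag; a sketch of passing it when starting Logstash by hand (the config path here is an assumption, adjust it to your install):

```sh
# Start Logstash with auto-reload, checking the config files every 3 seconds
bin/logstash -f /etc/logstash/conf.d/ \
  --config.reload.automatic \
  --config.reload.interval 3s
```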

Thank you @warkolm
And how about kill -SIGHUP [logstashPID] (let's say for now I'll refrain from using the auto-reload option)?
If I do that instead of systemctl restart logstash, would that only restart the pipeline without the whole daemon, meaning that I will not lose any log entries?
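
For concreteness, a sketch of that signal-based reload, assuming Logstash runs under systemd as a unit named logstash:

```shell
# Ask systemd for the PID it tracks for the unit; falls back to 0
# when systemctl is unavailable or the unit is stopped.
LOGSTASH_PID=$(systemctl show -p MainPID --value logstash 2>/dev/null || echo 0)

if [ "${LOGSTASH_PID:-0}" -gt 0 ]; then
  # SIGHUP asks Logstash to reload its pipeline configuration in place
  kill -SIGHUP "$LOGSTASH_PID"
  echo "sent SIGHUP to $LOGSTASH_PID"
else
  echo "logstash is not running"
fi
```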

Also, I don't understand the part below:

Changes to grok pattern files are also reloaded, but only when a change in the config file triggers a reload (or the pipeline is restarted).

If you store grok patterns in a file using the patterns_dir option then if the grok filter is reloaded due to a change in a configuration file then the pattern file will be re-read. However, changing the pattern file does not trigger a reload.
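
For example, a minimal sketch of that setup (the path and the MYAPP_TS pattern name are assumptions):

```
# /etc/logstash/patterns/extra is assumed to contain, e.g.:
#   MYAPP_TS %{YEAR}-%{MONTHNUM}-%{MONTHDAY}
filter {
  grok {
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{MYAPP_TS:ts} %{GREEDYDATA:msg}" }
  }
}
```

Editing this filter block in the pipeline .conf file triggers a reload (which re-reads the pattern file); editing only the file under /etc/logstash/patterns does not.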


Indeed you are losing logs; you need a persistent queue solution independent of Logstash. Maybe you could try an architecture that involves Kafka.
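
A minimal sketch of the Logstash side of such an architecture, assuming the shippers publish to Kafka first (broker address and topic name are assumptions):

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["syslog"]
  }
}
```

Kafka then buffers events durably, so a Logstash pipeline restart only pauses consumption instead of dropping datagrams.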

https://elastic-stack.readthedocs.io/en/latest/e2e_kafkapractices.html


Just to add that if you are using systemd to start/restart/stop Logstash and want to reload a pipeline when the configuration is changed, you would need to add the following lines to logstash.yml.

config.reload.automatic: true
config.reload.interval: 30s

This will make Logstash look for changes in the pipeline config files every 30 seconds, and if a pipeline has changed, it will reload that pipeline.


If you store grok patterns in a file using the patterns_dir option then if the grok filter is reloaded due to a change in a configuration file then the pattern file will be re-read. However, changing the pattern file does not trigger a reload.

Thank you for elaborating @Badger,

Indeed you are losing logs; you need a persistent queue solution independent of Logstash. Maybe you could try an architecture that involves Kafka.

Much appreciated @Iker, is it possible that you educate me a bit about what a message broker is? Also, I thought Logstash has a PQ feature? I guess I was wrong?
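
Logstash does have a persistent queue feature; it is enabled in logstash.yml (the size below is an assumption), but note that the queue sits after the inputs, so it cannot save UDP datagrams that arrive while the UDP input itself is down during a reload:

```
# logstash.yml
queue.type: persisted   # default is "memory"
queue.max_bytes: 1gb    # disk space the queue may use
```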

Just to add that if you are using systemd to start/restart/stop Logstash and want to reload a pipeline when the configuration is changed, you would need to add the following lines to logstash.yml.

config.reload.automatic: true
config.reload.interval: 30s

This will make Logstash look for changes in the pipeline config files every 30 seconds, and if a pipeline has changed, it will reload that pipeline.

Many thanks @leandrojmp, so this won't kill the process but will just reload the pipeline, effectively doing the same as SIGHUP (I'm speaking out of my a*s a bit here lol)?