Reload Logstash config on shutdown

Hi everyone,

We have a multi-node Logstash cluster using a distributor pipeline that routes events depending on their type, e.g.:

input {
  beats {
    # ...
  }
}
output {
  if [type] == "foo" {
    pipeline {
      send_to => foo
    }
  } else if [type] == "bar" {
    pipeline {
      send_to => bar
    }
  } else if [type] == "baz" {
    pipeline {
      send_to => baz
    }
  }
  # ...
}

For this to work we need an entry in pipelines.yml for every pipeline. Imagine it looks like this:

- pipeline.id: foo
  path.config: "/etc/logstash/conf.d/foo/*.conf"
- pipeline.id: bar
  path.config: "/etc/logstash/conf.d/bar/*.conf"

Here comes the problem I face. I somehow managed to add this new type "baz" to my input without specifying a pipeline for it in pipelines.yml. This leads to some internal Logstash queues filling up and, in the end, blocking all events. In the logs I see:

Attempted to send event to 'baz' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry.

Everything's fine so far. I can edit pipelines.yml and add my baz pipeline while Logstash is running, and the events get processed again.
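For reference, the fix was just adding the missing entry to pipelines.yml (the path here is an assumption, following the same pattern as the existing entries):

```yaml
- pipeline.id: baz
  path.config: "/etc/logstash/conf.d/baz/*.conf"
```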

But: on my first Logstash node I was too quick typing a systemctl stop logstash. Now it hangs during shutdown because it tries to send the in-queue events to pipeline baz, and it will not reload its config now, even after I added the missing pipeline to pipelines.yml.

Is there anything else I can do besides killing the process and losing events?
Thank you so much for your answers!

LS can reload the .conf files of pipelines when config.reload.automatic: true is set, but it cannot reload its own parameters from the .yml files. To reload a .yml you have to restart LS.
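For completeness, a minimal sketch of enabling automatic config reload in logstash.yml (the interval value is just an example; both are standard Logstash settings):

```yaml
config.reload.automatic: true
config.reload.interval: 3s
```

This only makes Logstash watch and reload the pipeline .conf files; changes to pipelines.yml or logstash.yml itself still require a restart, as noted above.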

Set ensure_delivery to false; see the documentation. That prevents the code from looping forever.
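As a sketch, ensure_delivery is an option of the pipeline output in the distributor config (shown here for the baz branch as an example):

```
pipeline {
  send_to => baz
  ensure_delivery => false
}
```

Be aware of the trade-off: with ensure_delivery => false, events addressed to a downstream pipeline that is unavailable are dropped instead of retried, so the sender no longer blocks on a missing pipeline.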

