Logstash multiple output blocking: outputs block each other at startup despite the isolator pattern

Hello everyone,
we are having trouble with a pipeline in Logstash (7.17) that uses multiple outputs and the isolator pattern.

We have adopted the solution suggested in:

In our case, input is Winlogbeat and the two outputs are: Elasticsearch and RabbitMQ.

The problem appears ONLY when we start Logstash while one output is not reachable (e.g. RabbitMQ is down at Logstash startup time).

In this case, nothing is sent to Elasticsearch, the persistent queue of the RabbitMQ pipeline does not fill up, and no page files are generated ("obviously", the Elasticsearch persistent queue is not filled either).

BUT, if Logstash starts with both outputs running and we then stop RabbitMQ, all data is still sent to Elasticsearch and the RabbitMQ pipeline's persistent queue fills up.

SO, my two questions are:

  1. Why does the problem appear ONLY at Logstash startup, AND ONLY if one (or more) outputs are unavailable?

  2. Why, instead, does everything work regularly if an output becomes unavailable after startup?

Moreover: we know that when a queue is full everything stops, but in our case, during the Logstash startup phase, the queue does not even start to fill. It remains stuck in error on the rabbitmq output (because the remote RabbitMQ service is down at startup time) without proceeding further: no data is sent to Elasticsearch and the queues remain empty.

Thanks :slight_smile:

Our configuration is like this:


# conf.d/pipelines.yml

- pipeline.id: winlogbeat
  path.config: "/etc/logstash/conf.d/winlogbeat.conf"

- pipeline.id: elasticoutput
  path.config: "/etc/logstash/conf.d/elasticoutput.conf"
  queue.type: persisted

- pipeline.id: rabbitoutput
  path.config: "/etc/logstash/conf.d/rabbitoutput.conf"
  queue.type: persisted

# conf.d/winlogbeat.conf

input {
    beats {
        port => 5044
    }
}

output {
    pipeline {
        send_to => [outputElastic]
    }

    pipeline {
        send_to => [outputRabbit]
    }
}

# Note: we also tried this, but with the same results :( :
output {
    pipeline {
        send_to => [outputElastic, outputRabbit]
    }
}

# conf.d/elasticoutput.conf

input {
    pipeline {
        address => outputElastic
    }
}

output {
    elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        ....
    }
}

# conf.d/rabbitoutput.conf

input {
    pipeline {
        address => outputRabbit
    }
}

output {
    rabbitmq {
        host => "192.168.0.88"
        port => 5671
        exchange => "myexch"
        key => "mykey"
        exchange_type => "direct"
        codec => "json"
        ...
    }
}

The rabbitmq output calls the mixin to connect to RabbitMQ. The mixin should be logging an error once per second saying it is retrying the connection.

Since the output never gets out of the register function, the pipeline cannot start, so Logstash does not process any data.

If it loses the connection to RabbitMQ once the pipeline is running then it will reconnect.
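
The startup stall described above can be reproduced outside Logstash with a toy model. This is only a minimal sketch: `EagerOutput`, `Pipeline`, and every method name here are invented for illustration, not the real plugin or Logstash API. The point it demonstrates is that a pipeline only begins consuming events after every output's register returns, so a register that sits in a connection-retry loop stalls the whole pipeline:

```ruby
# Toy model: an output that (like the rabbitmq output) tries to connect
# inside register, retrying while the broker is unreachable.
class EagerOutput
  def initialize(broker_up)
    @broker_up = broker_up   # callable returning true when the broker is reachable
  end

  # The real mixin retries forever, logging an error each second; the demo
  # caps the attempts so the example terminates.
  def register(max_attempts: 3)
    attempts = 0
    until @broker_up.call
      attempts += 1
      raise "broker unreachable, register never returns" if attempts >= max_attempts
      sleep 0.01
    end
  end
end

# Toy pipeline: it starts pulling events only after ALL outputs registered.
class Pipeline
  attr_reader :started

  def initialize(outputs)
    @outputs = outputs
    @started = false
  end

  def start
    @outputs.each(&:register)   # blocks until every register returns
    @started = true
  end
end

pipeline = Pipeline.new([EagerOutput.new(-> { false })])  # broker is down
begin
  pipeline.start
rescue => e
  puts "pipeline started? #{pipeline.started} (#{e.message})"
end
```

With the broker down, `start` never gets past `register`, which matches the observed behaviour: the persistent queue sits in front of a pipeline that never began running, so it never fills.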


This appears to be a known, unsolved issue :frowning:

It is interesting to note that the rabbitmq input plugin does not make the connection in register(); it connects in run().

Since the publish() function of the output reconnects, it might be possible to modify register() so that it does not connect and instead relies on publish() to do so. However, publish() has code that verifies the initial connection was made.
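
The shape of that change can be sketched with the same kind of toy model. Again, every name here (`LazyOutput`, `connect!`, the `broker_up` callable) is invented for illustration; the real plugin would go through the connection mixin mentioned above, and would have to deal with the connection-verification code in publish(). The idea is simply that register records settings and returns immediately, while the connection is established (and re-established) lazily when events are published:

```ruby
# Sketch of the proposed fix: no connection attempt in register, so
# pipeline startup can never block on an unreachable broker.
class LazyOutput
  attr_reader :sent

  def initialize(broker_up)
    @broker_up = broker_up   # callable returning true when the broker is reachable
    @connected = false
    @sent = []
  end

  def register
    # Intentionally empty: nothing here can block pipeline startup.
  end

  def publish(event)
    connect! unless @connected
    if @connected
      @sent << event
      true
    else
      # In Logstash the event would stay in the persistent queue and be
      # retried; the demo just reports the failure.
      false
    end
  end

  private

  def connect!
    @connected = @broker_up.call
  end
end

broker_state = [false]
out = LazyOutput.new(-> { broker_state[0] })
out.register                 # returns immediately even though the broker is down
out.publish("event-1")       # fails: no connection available yet
broker_state[0] = true       # broker comes back up
out.publish("event-2")       # connects lazily and succeeds
puts out.sent.inspect        # => ["event-2"]
```

With this shape, a broker that is down at startup only delays delivery on that one output: the pipeline starts, the persistent queue absorbs events, and the other output keeps working.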


I put my hands into the code and solved the bug (YES, it is a BUG, and it is not so hard to fix: very few changes are needed). At the moment I need to run some more tests, but after that I will try to submit the code on GitHub :slight_smile:

Thanks for your suggestion, you opened my eyes :slight_smile:
