Since moving my logging infrastructure to Elastic 6.3, I have been evaluating splitting and simplifying our existing long chain of 'if' conditionals into multiple pipeline-to-pipeline configuration files, ending up with something like 20 different pipelines.
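For context, this is roughly the shape of the distributor setup I mean (a minimal sketch, not my actual config; the pipeline IDs and the `[type]` value are illustrative):

```
# pipelines.yml
- pipeline.id: distributor
  config.string: |
    input { beats { port => 5044 } }
    output {
      if [type] == "apache" {
        pipeline { send_to => ["apache-processing"] }
      } else {
        pipeline { send_to => ["fallback-processing"] }
      }
    }
- pipeline.id: apache-processing
  path.config: "/etc/logstash/conf.d/apache.conf"
```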
I have noticed that, from time to time, one of the Logstash nodes fails to start or to reach one of those pipelines, with a message like:
"Attempted to send event to 'namedPipeline' but that address was unavailable. Maybe the destination pipeline is down or stopping? Will Retry."
The same configuration then loads fine on other nodes (or after a restart), and when the error occurs it affects a different pipeline each time.
Is that a known issue? Am I pushing pipeline-to-pipeline communication and the distributor pattern too far with so many pipelines?
I have 1 input pipeline (receiving events from Filebeat agents), up to 24 processing pipelines, and 2 output pipelines (one for Elasticsearch and another for syslog, which is used to deliver some data to a Splunk forwarder).
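Each processing pipeline follows the same skeleton: it consumes events from its virtual address and fans out to the two shared output pipelines (again a sketch; the addresses `apache-processing`, `es-output` and `syslog-output` are illustrative names):

```
# apache.conf: one of the ~24 processing pipelines
input { pipeline { address => "apache-processing" } }
filter {
  # parsing/enrichment specific to this log type
}
output {
  pipeline { send_to => ["es-output", "syslog-output"] }
}
```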