Logstash-to-Logstash setup not indexing logs when the second input is enabled

Hi, we are facing an issue with a Logstash-to-Logstash configuration (see Logstash-to-Logstash Communication | Logstash Reference [7.16] | Elastic). Our architecture is the following, using two different inputs in Filebeat:

Filebeat > logstash_1 |> logstash_2 > Humio
                      |> Elasticsearch

We have tested this configuration in a dev environment and successfully sent data to Elasticsearch and Humio without dropping any events.
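For reference, the output section on logstash_1 looks roughly like the sketch below (hosts, ports, index name, and certificate paths are placeholders, not our real values):

output {
  elasticsearch {
    hosts => ["https://es-node:9200"]                   # placeholder Elasticsearch host
    index => "filebeat-%{+YYYY.MM.dd}"                  # placeholder index pattern
  }
  lumberjack {
    hosts => ["logstash_2.example.com"]                 # placeholder host for logstash_2
    port  => 5044
    ssl_certificate => "/etc/logstash/lumberjack.cert"  # placeholder certificate path
  }
}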

Once we moved it to production and enabled the second ingestion file in Filebeat, we received the following output in logstash_1:

[2021-12-21T06:23:36,969][ERROR][logstash.outputs.lumberjack][main][7fcf7e2f8dd1636c7ebe357fbf031a10d6b8fe8eff71ed92cd85030c6131a5e3] Client write error, trying connect {:e=>#<IOError: Connection reset by peer>, :backtrace=>["org/jruby/ext/openssl/SSLSocket.java:950:in `syswrite'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-lumberjack-0.0.26/lib/lumberjack/client.rb:107:in `send_window_size'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-lumberjack-0.0.26/lib/lumberjack/client.rb:127:in `write_sync'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-lumberjack-0.0.26/lib/lumberjack/client.rb:42:in `write'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-lumberjack-3.1.7/lib/logstash/outputs/lumberjack.rb:65:in `flush'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/buffer.rb:219:in `block in buffer_flush'", "org/jruby/RubyHash.java:1415:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/buffer.rb:216:in `buffer_flush'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/stud-0.0.23/lib/stud/buffer.rb:159:in `buffer_receive'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-lumberjack-3.1.7/lib/logstash/outputs/lumberjack.rb:52:in `block in register'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-codec-json-3.0.5/lib/logstash/codecs/json.rb:42:in `encode'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:48:in `block in encode'", "org/logstash/instrument/metrics/AbstractSimpleMetricExt.java:65:in `time'", "org/logstash/instrument/metrics/AbstractNamespacedMetricExt.java:64:in `time'", "/usr/share/logstash/logstash-core/lib/logstash/codecs/delegator.rb:47:in `encode'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-lumberjack-3.1.7/lib/logstash/outputs/lumberjack.rb:59:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:105:in `block in multi_receive'", "org/jruby/RubyArray.java:1809:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:105:in `multi_receive'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:143:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:295:in `block in start_workers'"]}

After that, the error doesn't appear again and we receive logs in Humio, but we cannot see any events in our Elasticsearch. The Logstash logs only show some events being dropped because a field exceeds its length limit, but those drops also appear when the second Logstash is disconnected.

The volume of logs that Elasticsearch usually receives is around 1M events every 5 minutes.

Any ideas? Does Logstash have a limit on the input it can receive from Filebeat, such that it cannot handle that volume of events?

Another behavior we found is that if one of the outputs fails to connect, the pipeline never reaches the next output and seems to be stuck on the same log event sent from Filebeat. Is this normal behavior? Why doesn't Logstash use a separate thread for each output? I've tried tuning pipeline.workers, but that doesn't fix it.

Let me know if you need more info. I hope you can help.

Logstash has an at-least-once delivery model. If any output in a pipeline is unable to accept events, back-pressure will stop that entire pipeline from processing anything, including its other outputs.

Take a look at pipeline-to-pipeline communication. Read the section on delivery guarantees, and the section about the output isolator pattern.
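A minimal sketch of the output isolator pattern for this setup, in pipelines.yml (pipeline ids, hosts, ports, and certificate paths are placeholders; queue settings would need tuning for your ~1M events per 5 minutes):

- pipeline.id: intake
  config.string: |
    input { beats { port => 5044 } }
    output { pipeline { send_to => ["es", "humio"] } }
- pipeline.id: es-out
  queue.type: persisted
  config.string: |
    input { pipeline { address => "es" } }
    output { elasticsearch { hosts => ["https://es-node:9200"] } }
- pipeline.id: humio-out
  queue.type: persisted
  config.string: |
    input { pipeline { address => "humio" } }
    output { lumberjack { hosts => ["logstash_2.example.com"] port => 5044 ssl_certificate => "/etc/logstash/lumberjack.cert" } }

With persisted queues on the two downstream pipelines, a blocked elasticsearch or lumberjack output only stalls its own pipeline once its queue fills, instead of back-pressuring the intake pipeline and starving the other output.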
