Logstash-output-elasticsearch Connection reset while trying to send bulk request to elasticsearch

We are encountering an issue in our Logstash cluster and would appreciate your assistance in investigating and resolving it.

Upon starting the Logstash cluster, we are consistently observing the following warning message:

org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71

Subsequently, we notice that Elasticsearch starts returning the following error:

Failed to perform request {:message=>"Connection reset by peer", :exception=>Manticore::SocketException, :cause=>#<Java::JavaNet::SocketException: Connection reset by peer>}

To further analyze this, we captured a TCP dump and observed that:

  • A _bulk API call is initiated from Logstash to Elasticsearch.
  • Several packets are transferred successfully.
  • The connection is then reset by the Elasticsearch node during the bulk data transfer.

We have verified that there are no resource constraints (memory, CPU, or load issues) on either the Logstash or Elasticsearch nodes at the time of the incident.

We would appreciate your guidance on:

  1. Possible causes for receiving an invalid Beats protocol version (71), and how to trace the origin of such connections.
  2. Reasons why Elasticsearch might reset the connection during a _bulk API operation, despite no apparent system resource or cluster health issues.
  3. Any recommended diagnostic steps or configuration validations on both Logstash and Elasticsearch sides to help isolate and resolve this issue.

If required, we can share the TCP dump files, relevant Logstash and Elasticsearch logs, and configuration details.

Thank you in advance for your assistance.

@Advay

Welcome to the community.

Seems like this is perhaps a duplicate of the other topic.

In short, for any of us to help we need the following:

  • The beats output configuration
  • The logstash input configuration
  • The logstash elasticsearch output configuration
  • Versions of all of the above
  • How you installed elasticsearch
  • Can you curl the elasticsearch endpoint? If so, please share that command

This looks like it is a duplicate of the other thread, so we will close that one...

+ output.logstash or all of the configuration from filebeat.yml


The protocol version is (according to the original lumberjack protocol, which is not the current protocol) just the first byte across the wire. So if someone sends you a message that isn't lumberjack then it will provoke this error message.

If I try to connect to a beats input using http then it logs this message for protocol versions 71 and 69. If I connect using https then it is logged for versions 22 and 3.
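
For illustration, a rough sketch of where those numbers plausibly come from, assuming (as described above) that the input treats the first byte on the wire as the protocol version; this is an inference, not something confirmed in this thread:

// A plain-HTTP client starts its request with "GET ...", so the first bytes
// the beats decoder sees are 'G' and 'E'.
console.log("GET / HTTP/1.1".charCodeAt(0)); // 71 ('G')
console.log("GET / HTTP/1.1".charCodeAt(1)); // 69 ('E')
// A TLS client starts with a handshake record: content type 0x16, then version major 0x03.
console.log(0x16); // 22
console.log(0x03); // 3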


Dear @stephenb, @Rios,

Thanks for your response. We have a licensed distribution. Please find the required configuration details below.

Filebeat:

filebeat.inputs:
- type: filestream
  id: input-1
  enabled: true
  paths:
    - /path/to/logfile.log
  processors:
    - dissect:
        tokenizer: "%{} %{} %{} %{} %{} %{} %{} %{parsed_field}"
        field: "message"
        target_prefix: ""
    - decode_json_fields:
        fields: ['parsed_field']
        process_array: false
        max_depth: 1
        target: ""
        overwrite_keys: false
        add_error_key: true
    - drop_fields:
        fields: ['parsed_field', 'event', 'message']
    - add_fields:
        target: ''
        fields:
          app_id: app_logs
          environment: production
          component: access_log
    - script:
        lang: javascript
        source: >
          function process(event){
            var component = event.Get("component");
            event.Put("@metadata.beat", "filebeat-" + component);
          }

- type: filestream
  id: input-2
  enabled: true
  paths:
    - /path/to/outgoing/access.log
  processors:
    - dissect:
        tokenizer: "%{timestamp->} %{elapsed_time} %{client_ip} %{result_code}/%{status_code} %{size} %{method} %{uri} %{user} %{target_status}/%{target_ip} %{content_type}"
        field: "message"
        target_prefix: "outgoing"
    - add_fields:
        target: ''
        fields:
          app_id: outgoing_app
          environment: production
          component: outgoing
    - script:
        lang: javascript
        source: >
          function process(event){
            var component = event.Get("component");
            event.Put("@metadata.beat", "filebeat-" + component);
            var uri = event.Get("outgoing.uri");
            if(uri != null && uri.length > 0){
               var parts = uri.split(":");
               if(parts.length > 1){
                   event.Put("outgoing.uri_host", parts[0]);
                   event.Put("outgoing.uri_port", parts[1]);
               }
            }
          }

- type: filestream
  id: default-input
  enabled: false
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.logstash:
  hosts: ["<logstash-host>:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat-info
  keepfiles: 10
  permissions: 0640

Input Logstash:

input {
  beats {
    port => 5044
  }
}

filter {
  mutate {
    remove_field => [ "xxxxx" ]
  }
  if [host][ip] == "xx.xx.xx.xx" {
    drop { }
  }
}

Logstash output:

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts    => [ "http://es-host1:9200", "http://es-host2:9200", "http://es-host3:9200" ]
      index    => "%{[@metadata][beat]}"
      pipeline => "%{[@metadata][pipeline]}"
      user     => "username"
      password => "password"
      action   => "create"
      timeout  => 90
    }
  } else {
    elasticsearch {
      hosts    => [ "http://es-host1:9200", "http://es-host2:9200", "http://es-host3:9200" ]
      index    => "%{[@metadata][beat]}"
      user     => "username"
      password => "password"
      action   => "create"
      timeout  => 90
    }
  }
}

Versions:

     filebeat: 8.6.2
     metricbeat: 8.6.2
     logstash: 8.6.2

As per the logs, our system is sending bulk insert requests to the following Elasticsearch endpoint:

http://<elasticsearch-host>:9200/_bulk

Kindly review the current configuration and advise if any adjustments or optimizations are needed to address the issue. Your guidance will be greatly appreciated.

Thank you in advance for your support.

Regarding reasons why Elasticsearch might reset the connection during a _bulk operation: there are none. At the very least it would try to tear the connection down gracefully with a FIN first, but I don't think it'd even do that without sending some kind of response. If you're seeing an abrupt RST then it's not coming from Elasticsearch; it'll be something else on the network path.

Can you:

  1. Comment out lines 19-20 (the drop_fields processor). First you parse the fields, then you drop them; not sure why:

    # - drop_fields:
    #     fields: ['parsed_field', 'event', 'message']

  2. Disable input-2 temporarily:

- type: filestream
  id: input-2
  enabled: false

  3. Use the console output instead of sending directly to LS:

# output.logstash:
#   hosts: ["<logstash-host>:5044"]

output.console:
  pretty: true
  enabled: true

If this is working as you expected, you can uncomment output.logstash, comment output.console, and send data to LS.

If input-1 is working as you expected, disable input-1 and enable input-2. Test it again.
Be aware that you will have to delete the datapath\registry\filebeat\data.json file.

On LS side:

  1. Add debugging in the output:

output {
  stdout { codec => rubydebug {} } # do not use in production, it slows down performance

  if [@metadata][pipeline] { ...
}

Track the event activity and check whether those are the events you are looking for.

  2. You can remove/comment action => "create"; there is no need to set it, as it is the default. Less code, less confusion.

Not sure why you are sending events to the same index ([@metadata][beat]); you could use [fields][app_id] or component to separate the data. I assume you want to ingest/transform messages in some cases, as sketched below.
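
For example, a minimal sketch of separating the data by the app_id field set in filebeat.yml (the index name pattern here is only an illustrative assumption, not your current setup):

elasticsearch {
  hosts    => [ "http://es-host1:9200", "http://es-host2:9200", "http://es-host3:9200" ]
  index    => "%{[app_id]}-%{+YYYY.MM.dd}"
  user     => "username"
  password => "password"
}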

Thank you for your response. This is a production environment issue, and I am currently working on implementing the suggested changes.

We’ve observed that the system functions normally for the first 1–2 hours without any connection issues. During this period, Metricbeat continues sending data to Elasticsearch successfully. However, after this interval, Logstash begins writing data to its queue, and once the queue size exceeds 5–10 messages/files, the pipeline stops processing entirely.

Considering this behavior, could an invalid protocol error or a possible misconfiguration within the Logstash pipeline be contributing to the issue we’re encountering?

Your guidance and recommendations on this would be greatly appreciated.

could an invalid protocol error or a possible misconfiguration within the Logstash pipeline be contributing to the issue we’re encountering?

Not sure; however, I share the opinions of David and Badger that this is not related to FB/LS software issues, since only HTTP is in use from FB through to ES.

Whether it is production or not, you can add output.console or output.file in filebeat.yml, or temporarily add stdout { codec => rubydebug {} } in your .conf.
Since you mentioned that this is a production environment, do not delete data.json, just rename it. That registry file keeps records of the processed files.

Please set log.level: debug in logstash.yml and provide us with logstash-plain.log
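
For reference, a minimal sketch of those two changes (the file output path and filename below are only illustrative assumptions):

# filebeat.yml - temporarily write events to a local file instead of Logstash
# output.logstash:
#   hosts: ["<logstash-host>:5044"]
output.file:
  path: "/tmp/filebeat"
  filename: filebeat-debug

# logstash.yml
log.level: debug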

By the way, do you have enough disk space on FB/LS? At least 1-2 GB free.

Also, maybe others will have more ideas.

In my experience of cases like this it's very unlikely that the application logs (either in Logstash or Elasticsearch) contain anything of value. Logstash is seeing an unexpected RST, of which Elasticsearch isn't the origin. The only way to reliably pin this down is to go along the network path step-by-step (either checking logs on intermediary devices or taking packet captures) until you find out where the RST is really coming from.


When do you see the protocol errors precisely? Before or after logstash starts queuing messages?

Dear @RainTown ,

We observe the protocol error before Logstash starts queueing messages.

@DavidTurner, I fully agree with you.
However, these are the issues:

  • FB→LS - Invalid version of beats protocol: 71 on a plain HTTP connection. Not sure whether there is any progress on this. Maybe add a connectivity timeout.
  • LS - the pipeline stops processing with no resource constraints; jvm.options has the default value of 1 GB, which could be increased if it is not already set (see the sketch after this list).
  • ES - "Connection reset by peer", which might be at the OS/network level.
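
A minimal sketch of that heap change in jvm.options (the 4 GB figure is only illustrative; keep -Xms and -Xmx equal and within the available memory):

# config/jvm.options
-Xms4g
-Xmx4g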

Ah ok, I see, these things are not obviously related - I was going off the title of the thread about the connection reset, but yeah, there are other issues here too.

Dear @Rios, @DavidTurner,

  • Filebeat to Logstash (FB→LS): We have multiple data sources and have identified one source sending HTTP requests. A request has been raised to update this configuration accordingly.
  • Logstash (LS): The JVM has been allocated 32 GB out of a total of 62 GB. Current observations indicate that JVM utilization peaks at 19 GB. Kindly let us know if you recommend tuning the JVM configuration.
  • Elasticsearch (ES): Logstash metrics are also being continuously pushed to Elasticsearch without any issues. Additionally, the network team has confirmed there are no issues from their side.

Thanks for your continued support.

How have they determined this? In my experience this is what network teams always say, at least until you show them packet captures demonstrating the problem.

It might be that the RSTs are happening by design (e.g. an overenthusiastic firewall config or similar) in which case you either need to fix the design or else accept that this is a feature of the environment in which you're operating.

Dear @DavidTurner ,

Yes, we understand, and we’re currently in discussion with the network team.

Could you please advise if any JVM tuning is required as well?

Maybe I’m missing something, but you listed 3 filebeat.inputs above. Do you mean in addition to them? Or is something else entirely (not filebeat) sending stuff directly to logstash that you previously didn’t know about? Or … ?

Personally, I wouldn’t chase the “connection reset by peer” messages too aggressively yet, until the previous steps are cleaned up a bit. I concur with @DavidTurner, though, on not placing much confidence in “the network team has confirmed there are no issues from their side”.

All that usually establishes is that the switches / routers / firewalls / … are online and reachable where the network team expect them to be.

In the other thread referenced it was established that logstash can close TCP connections from/to filebeat via a RST, after an interval of zero “real” traffic. I mused this was a bit untidy, but I’d personally not consider it a bug. Others may take a different view.
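
If that idle close is suspected here, one knob to experiment with is the beats input's client_inactivity_timeout; a sketch (the 300-second value is only an illustrative assumption, the default is 60 seconds):

input {
  beats {
    port => 5044
    client_inactivity_timeout => 300
  }
}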

A Connection reset error indicates a TCP connection encountered a RST packet.

That's not to comment either way on whether your configuration is otherwise optimal, but in the context of this thread about a Connection reset message the only solution is to find the source of the RST packet and stop it from doing what it's doing.

Dear @DavidTurner @RainTown ,

After enabling debug-level logging, the following sequence of events was observed on the Logstash instance:
Logstash received a connection reset (RST) from the destination endpoint. Following this, document ingestion into Elasticsearch was interrupted.
➤ Connection Reset Event


[2025-06-25T19:14:56,527][INFO ][logstash.outputs.elasticsearch][main][cf8bbb5d374a47085c47bba00722e4720911402308b1364c802249d8378925df]
Failed to perform request {:message=>"Connection reset", :exception=>Manticore::SocketException, :cause=>#<Java::JavaNet::SocketException
: Connection reset>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/manticore-0.9.1-java/lib/manticore/response.rb:36:i
n `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/manticore-0.9.1-java/lib/manticore/response.rb:79:in `call'
", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http
_client/manticore_adapter.rb:73:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-
11.12.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:324:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/
jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:311:in `block in perfo
rm_request'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.4-java/lib/logstash/outputs/elastic
search/http_client/pool.rb:398:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-1
1.12.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:310:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.
6.0/gems/logstash-output-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:318:in `block in Pool'", "/usr
/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client.
rb:176:in `bulk_send'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.4-java/lib/logstash/outpu
ts/elasticsearch/http_client.rb:154:in `bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.4
-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:312:in `safe_bulk'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstas
h-output-elasticsearch-11.12.4-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:245:in `submit'", "/usr/share/logstash/vendor/bund
le/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.4-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:190:in `retrying_submit
'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch.rb:
382:in `multi_receive'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121:in `multi_receive'", "/usr/share/logstash/l
ogstash-core/lib/logstash/java_pipeline.rb:301:in `block in start_workers'"]}
[2025-06-25T19:14:56,527][DEBUG][logstash.outputs.elasticsearch.httpclient.manticoreadapter][main][cf8bbb5d374a47085c47bba00722e472091140
2308b1364c802249d8378925df]
java.net.SocketException: Connection reset
        at sun.nio.ch.NioSocketImpl.implRead(sun/nio/ch/NioSocketImpl.java:323) ~[?:?]
        at sun.nio.ch.NioSocketImpl.read(sun/nio/ch/NioSocketImpl.java:350) ~[?:?]
        at sun.nio.ch.NioSocketImpl$1.read(sun/nio/ch/NioSocketImpl.java:803) ~[?:?]
        at java.net.Socket$SocketInputStream.read(java/net/Socket.java:966) ~[?:?]
        at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(org/apache/http/impl/io/SessionInputBufferImpl.java:137) ~[httpcore-
4.4.14.jar:4.4.14]
        at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(org/apache/http/impl/io/SessionInputBufferImpl.java:153) ~[httpcore-
4.4.14.jar:4.4.14]
        at org.apache.http.impl.io.SessionInputBufferImpl.readLine(org/apache/http/impl/io/SessionInputBufferImpl.java:280) ~[httpcore-4.
4.14.jar:4.4.14]
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(org/apache/http/impl/conn/DefaultHttpResponseParser.java:138) ~[
httpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(org/apache/http/impl/conn/DefaultHttpResponseParser.java:56) ~[h
ttpclient-4.5.13.jar:4.5.13]
        at org.apache.http.impl.io.AbstractMessageParser.parse(org/apache/http/impl/io/AbstractMessageParser.java:259) ~[httpcore-4.4.14.
jar:4.4.14]
        at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(org/apache/http/impl/DefaultBHttpClientConnection.java
:163) ~[httpcore-4.4.14.jar:4.4.14]
        at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(org/apache/http/impl/conn/CPoolProxy.java:157) ~[httpclient-4.5.13.
jar:4.5.13]
        at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(org/apache/http/protocol/HttpRequestExecutor.java:273) ~[httpco
re-4.4.14.jar:4.4.14]
        at org.apache.http.protocol.HttpRequestExecutor.execute(org/apache/http/protocol/HttpRequestExecutor.java:125) ~[httpcore-4.4.14.
jar:4.4.14]
        at org.apache.http.impl.execchain.MainClientExec.execute(org/apache/http/impl/execchain/MainClientExec.java:272) ~[httpclient-4.5
.13.jar:4.5.13]
        at org.apache.http.impl.execchain.ProtocolExec.execute(org/apache/http/impl/execchain/ProtocolExec.java:186) ~[httpclient-4.5.13.
jar:4.5.13]
        at org.apache.http.impl.execchain.RetryExec.execute(org/apache/http/impl/execchain/RetryExec.java:89) ~[httpclient-4.5.13.jar:4.5
.13]
        at org.apache.http.impl.execchain.RedirectExec.execute(org/apache/http/impl/execchain/RedirectExec.java:110) ~[httpclient-4.5.13.
jar:4.5.13]
        at org.apache.http.impl.client.InternalHttpClient.doExecute(org/apache/http/impl/client/InternalHttpClient.java:185) ~[httpclient
-4.5.13.jar:4.5.13]
        at org.apache.http.impl.client.CloseableHttpClient.execute(org/apache/http/impl/client/CloseableHttpClient.java:72) ~[httpclient-
4.5.13.jar:4.5.13]
        at org.apache.http.impl.client.CloseableHttpClient.execute(org/apache/http/impl/client/CloseableHttpClient.java:221) ~[httpclient
-4.5.13.jar:4.5.13]
        at org.apache.http.impl.client.CloseableHttpClient.execute(org/apache/http/impl/client/CloseableHttpClient.java:165) ~[httpclient
-4.5.13.jar:4.5.13]
        at jdk.internal.reflect.GeneratedMethodAccessor33.invoke(jdk/internal/reflect/GeneratedMethodAccessor33) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(jdk/internal/reflect/DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:568) ~[?:?]
        at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:487) ~[jruby.jar:?]
        at org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:342) ~[jruby.jar:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.manticore_minus_0_dot_9_dot_1_minus_java.lib.manticore.response.cal
l(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/manticore-0.9.1-java/lib/manticore/response.rb:49) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.outputs.elasticsearch.http_client.manticore_adapter.perform_request(/usr/share/logstash/vendor/bundle/jruby/2.6.0/ge
ms/logstash-output-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:73) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.outputs.elasticsearch.http_client.pool.perform_request_to_url(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/log
stash-output-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:324) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.outputs.elasticsearch.http_client.pool.perform_request(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-o
utput-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:311) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.outputs.elasticsearch.http_client.pool.with_connection(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-o
utput-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:398) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.outputs.elasticsearch.http_client.pool.perform_request(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-o
utput-elasticsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:310) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.outputs.elasticsearch.http_client.pool.Pool(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elast
icsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:318) ~[?:?]
        at org.jruby.RubyProc.call(org/jruby/RubyProc.java:330) ~[jruby.jar:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.outputs.elasticsearch.http_client.bulk_send(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elast
icsearch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client.rb:176) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.outputs.elasticsearch.http_client.bulk(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsea
rch-11.12.4-java/lib/logstash/outputs/elasticsearch/http_client.rb:154) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.plugin_mixins.elasticsearch.common.safe_bulk(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elas
ticsearch-11.12.4-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:312) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.plugin_mixins.elasticsearch.common.submit(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elastic
search-11.12.4-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:245) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.plugin_mixins.elasticsearch.common.retrying_submit(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-outpu
t-elasticsearch-11.12.4-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:190) ~[?:?]
        at usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_elasticsearch_minus_11_dot_12_dot_4_min
us_java.lib.logstash.outputs.elasticsearch.multi_receive(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch
-11.12.4-java/lib/logstash/outputs/elasticsearch.rb:382) ~[?:?]
        at org.logstash.config.ir.compiler.OutputStrategyExt$AbstractOutputStrategyExt.invokeOutput(org/logstash/config/ir/compiler/Outpu
tStrategyExt.java:153) ~[logstash-core.jar:?]
        at org.logstash.config.ir.compiler.OutputStrategyExt$SimpleAbstractOutputStrategyExt.doOutput(org/logstash/config/ir/compiler/Out
putStrategyExt.java:279) ~[logstash-core.jar:?]
        at org.logstash.config.ir.compiler.OutputStrategyExt$SharedOutputStrategyExt.output(org/logstash/config/ir/compiler/OutputStrateg
yExt.java:312) ~[logstash-core.jar:?]
        at org.logstash.config.ir.compiler.OutputStrategyExt$AbstractOutputStrategyExt.multiReceive(org/logstash/config/ir/compiler/Outpu
tStrategyExt.java:143) ~[logstash-core.jar:?]
        at org.logstash.config.ir.compiler.OutputDelegatorExt.doOutput(org/logstash/config/ir/compiler/OutputDelegatorExt.java:102) ~[log
stash-core.jar:?]
        at org.logstash.config.ir.compiler.AbstractOutputDelegatorExt.multi_receive(org/logstash/config/ir/compiler/AbstractOutputDelegat
orExt.java:121) ~[logstash-core.jar:?]
        at org.logstash.generated.CompiledDataset6.compute(org/logstash/generated/CompiledDataset6) ~[?:?]
        at org.logstash.generated.CompiledDataset7.compute(org/logstash/generated/CompiledDataset7) ~[?:?]
        at org.logstash.config.ir.CompiledPipeline$CompiledUnorderedExecution.compute(org/logstash/config/ir/CompiledPipeline.java:351) ~
[logstash-core.jar:?]
        at org.logstash.config.ir.CompiledPipeline$CompiledUnorderedExecution.compute(org/logstash/config/ir/CompiledPipeline.java:341) ~
[logstash-core.jar:?]
        at org.logstash.execution.WorkerLoop.run(org/logstash/execution/WorkerLoop.java:87) ~[logstash-core.jar:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
        at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(jdk/internal/reflect/NativeMethodAccessorImpl.java:77) ~[?:?]
        at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(jdk/internal/reflect/DelegatingMethodAccessorImpl.java:43) ~[?:?]
        at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:568) ~[?:?]
        at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:442) ~[jruby.jar:?]
        at org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:306) ~[jruby.jar:?]
        at RUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:301) ~[?:?]
        at org.jruby.RubyProc.call(org/jruby/RubyProc.java:309) ~[jruby.jar:?]
        at java.lang.Thread.run(java/lang/Thread.java:833) [?:?]
[2025-06-25T19:14:56,529][WARN ][logstash.outputs.elasticsearch][main][cf8bbb5d374a47085c47bba00722e4720911402308b1364c802249d8378925df]
Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [h
ttp://xx.xx.xx.xx:9200/_bulk][Manticore::SocketException] Connection reset {:url=>http://xxxx:xxxxxx@xx.xx.xx.xx:9200/, :err
or_message=>"Elasticsearch Unreachable: [http://xx.xx.xx.xx:9200/_bulk][Manticore::SocketException] Connection reset", :error_class=>"Lo
gStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2025-06-25T19:14:56,529][ERROR][logstash.outputs.elasticsearch][main][cf8bbb5d374a47085c47bba00722e4720911402308b1364c802249d8378925df]
Attempted to send a bulk request but Elasticsearch appears to be unreachable or down {:message=>"Elasticsearch Unreachable: [http://xx.xx.xx.xx:9200/_bulk][Manticore::SocketException] Connection reset", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUn
reachableError, :will_retry_in_seconds=>4}

Subsequently, a NoConnectionAvailableError was logged, indicating that no available connection to Elasticsearch could be established. During this period, Logstash continued to queue incoming events.
➤ NoConnectionAvailableError Logged

[2025-06-25T19:15:10,640][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] becbe60f: batches pending: true
[2025-06-25T19:15:10,640][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36986] Received a new payload
[2025-06-25T19:15:10,640][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36986] Sending a new message for the listener, sequence: 1
[2025-06-25T19:15:10,640][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36986] Sending a new message for the listener, sequence: 2
[2025-06-25T19:15:10,641][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36986] Sending a new message for the listener, sequence: 3
[2025-06-25T19:15:10,641][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36986] Sending a new message for the listener, sequence: 4
[2025-06-25T19:15:10,641][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 becbe60f: batches pending: false
[2025-06-25T19:15:10,649][DEBUG][logstash.outputs.elasticsearch][main][cf8bbb5d374a47085c47bba00722e4720911402308b1364c802249d8378925df]
Sending final bulk request for batch. {:action_count=>77, :payload_size=>245549, :content_length=>245549, :batch_offset=>0}
[2025-06-25T19:15:10,650][ERROR][logstash.outputs.elasticsearch][main][cf8bbb5d374a47085c47bba00722e4720911402308b1364c802249d8378925df]
Attempted to send a bulk request but there are no living connections in the pool (perhaps Elasticsearch is unreachable or down?) {:messag
e=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError, :will_retry_in
_seconds=>4}
[2025-06-25T19:15:10,676][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] 30fc3f18: batches pending: true
[2025-06-25T19:15:10,676][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:60974] Received a new payload
[2025-06-25T19:15:10,676][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:60974] Sending a new message for the listener, sequence: 1
[2025-06-25T19:15:10,677][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:60974] Sending a new message for the listener, sequence: 2
[2025-06-25T19:15:10,677][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 30fc3f18: batches pending: false
[2025-06-25T19:15:10,685][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] c466035c: batches pending: true
[2025-06-25T19:15:10,685][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:39984] Received a new payload
[2025-06-25T19:15:10,685][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:39984] Sending a new message for the listener, sequence: 1
[2025-06-25T19:15:10,685][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:39984] Sending a new message for the listener, sequence: 2
[2025-06-25T19:15:10,685][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:39984] Sending a new message for the listener, sequence: 3
[2025-06-25T19:15:10,685][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:39984] Sending a new message for the listener, sequence: 4
[2025-06-25T19:15:10,685][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:39984] Sending a new message for the listener, sequence: 5
[2025-06-25T19:15:10,686][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:39984] Sending a new message for the listener, sequence: 6
[2025-06-25T19:15:10,686][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 c466035c: batches pending: false
[2025-06-25T19:15:10,801][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] c3d743ab: batches pending: true
[2025-06-25T19:15:10,801][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] c3d743ab: batches pending: true
[2025-06-25T19:15:10,803][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] c3d743ab: batches pending: true
[2025-06-25T19:15:10,804][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Received a new payload
[2025-06-25T19:15:10,805][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 1
[2025-06-25T19:15:10,805][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 2
[2025-06-25T19:15:10,805][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 3
[2025-06-25T19:15:10,805][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 4
[2025-06-25T19:15:10,805][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 5
[2025-06-25T19:15:10,805][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 6
[2025-06-25T19:15:10,805][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 7
[2025-06-25T19:15:10,806][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 8
[2025-06-25T19:15:10,806][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 9
[2025-06-25T19:15:10,806][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 10
[2025-06-25T19:15:10,806][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 11
[2025-06-25T19:15:10,806][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 12
[2025-06-25T19:15:10,806][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 13
[2025-06-25T19:15:10,806][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 14
[2025-06-25T19:15:10,806][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:45014] Sending a new message for the listener, sequence: 15
[2025-06-25T19:15:10,806][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
----------------------------------------------
2025-06-25T19:15:10,843][DEBUG][logstash.outputs.elasticsearch][main] Running health check to see if an ES connection is working {:url=>
"http://logstash_writer:xxxxxx@xx.xx.xx.xx:9200/", :path=>"/"}
[2025-06-25T19:15:10,847][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://logstash_writer:
xxxxxx@xx.xx.xx.xx:9200/"}
[2025-06-25T19:15:10,850][DEBUG][logstash.outputs.elasticsearch][main] Running health check to see if an ES connection is working {:url=>
"http://logstash_writer:xxxxxx@xx.xx.xx.xx:9200/", :path=>"/"}
[2025-06-25T19:15:10,850][DEBUG][org.logstash.beats.BeatsHandler][main]
3b174d38: batches pending: false
[2025-06-25T19:15:10,951][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] c6273844: batches pending: true
[2025-06-25T19:15:10,951][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:37926] Received a new payload
[2025-06-25T19:15:10,951][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:37926] Sending a new message for the listener, sequence: 1
[2025-06-25T19:15:10,952][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 c6273844: batches pending: false
[2025-06-25T19:15:10,988][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] a1b9de25: batches pending: true
[2025-06-25T19:15:10,988][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] a1b9de25: batches pending: true
[2025-06-25T19:15:10,988][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Received a new payload
[2025-06-25T19:15:10,988][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 1
[2025-06-25T19:15:10,988][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 2
[2025-06-25T19:15:10,989][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 3
[2025-06-25T19:15:10,989][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 4
[2025-06-25T19:15:10,989][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 5
[2025-06-25T19:15:10,989][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 6
[2025-06-25T19:15:10,989][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 7
[2025-06-25T19:15:10,990][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 8
[2025-06-25T19:15:10,990][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 9
[2025-06-25T19:15:10,990][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 10
[2025-06-25T19:15:10,990][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 11
[2025-06-25T19:15:10,990][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:36528] Sending a new message for the listener, sequence: 12
[2025-06-25T19:15:10,990][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 a1b9de25: batches pending: false
[2025-06-25T19:15:11,098][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] 388dbc2d: batches pending: true
[2025-06-25T19:15:11,099][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:47822] Received a new payload
[2025-06-25T19:15:11,099][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:47822] Sending a new message for the listener, sequence: 1
[2025-06-25T19:15:11,099][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:47822] Sending a new message for the listener, sequence: 2
[2025-06-25T19:15:11,099][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:47822] Sending a new message for the listener, sequence: 3
[2025-06-25T19:15:11,099][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:47822] Sending a new message for the listener, sequence: 4
[2025-06-25T19:15:11,099][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:47822] Sending a new message for the listener, sequence: 5
[2025-06-25T19:15:11,100][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:47822] Sending a new message for the listener, sequence: 6
[2025-06-25T19:15:11,100][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 388dbc2d: batches pending: false
[2025-06-25T19:15:11,180][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] 8d811ec0: batches pending: true
[2025-06-25T19:15:11,180][DEBUG][org.logstash.beats.ConnectionHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f
98cf] 8d811ec0: batches pending: true
[2025-06-25T19:15:11,180][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Received a new payload
[2025-06-25T19:15:11,180][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 1
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 2
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 3
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 4
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 5
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 6
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 7
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 8
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 9
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 10
[2025-06-25T19:15:11,181][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 11
[2025-06-25T19:15:11,182][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 12
[2025-06-25T19:15:11,182][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]
 [local: xx.xx.xx.xx:5044, remote: xx.xx.xx.xx:55140] Sending a new message for the listener, sequence: 13
[2025-06-25T19:15:11,182][DEBUG][org.logstash.beats.BeatsHandler][main][a6f469f09c22436fc8bcbce2e7012af6799ae969a3172d936c537126600f98cf]

Logstash initiated multiple attempts to restore the connection. However, each publishing attempt resulted in a connection reset before the request could be successfully completed.

Logstash was unable to resume data transmission to Elasticsearch, leading to a sustained halt in document processing.