Output isolator pattern not working as expected when one output is down

I am trying to implement the output isolator pattern, but all outputs stop when one of them goes down.
In the configuration below, when the output with hosts => ["ext-se:9200"] becomes unavailable, the "es-host1" pipeline ({ hosts => ["eric-data-search-engine:9200"] }) stops receiving events as well.

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: udp-test-poc
data:
  logstash.yml: |
    http.host: "0.0.0.0"
  pipelines.yml: |
    - pipeline.id: intake
      queue.type: persisted
      queue.max_bytes: 1gb
      path.config: "/usr/share/logstash/config/logstash.conf"
    - pipeline.id: buffered-es-host1
      queue.type: persisted
      queue.max_bytes: 1gb
      path.config: "/usr/share/logstash/config/es1.conf"
    - pipeline.id: buffered-es-host2
      queue.type: persisted
      queue.max_bytes: 1gb
      path.config: "/usr/share/logstash/config/es2.conf"
  logstash.conf: |
    input { 
      beats { 
        port => 5044 
      } 
    }
    output { 
      pipeline { 
        send_to => ["es-host1", "es-host2"] 
      } 
    }    
  es1.conf: |        
    input { 
      pipeline { 
        address => "es-host1" 
      } 
    }
    output {
      elasticsearch {
        ilm_enabled => false
        hosts => ["eric-data-search-engine:9200"]
        user => 'logstash'
        password => '${LOGSTASH_PW}'
        index => "logstash-beta-%{+YYYY.MM.dd}"
      }
    }
    
  es2.conf: |   
    input { 
      pipeline { 
        address => "es-host2"
      } 
    }     
    output {
      elasticsearch {
        ilm_enabled => false
        hosts => ["ext-se:9200"]
        user => 'logstash'
        password => '${LOGSTASH_PW}'
        index => "logstash-beta-%{+YYYY.MM.dd}"
      }
    }

Below is my deployment.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: udp-test-poc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        env:
          - name: LOGSTASH_PW
        image: docker.elastic.co/logstash/logstash-oss:7.7.1
        ports:
        - containerPort: 5044
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/config
        resources:
            limits:
              memory: "4Gi"
              cpu: "2500m"
            requests: 
              memory: "4Gi"
              cpu: "800m"
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
            - key: pipelines.yml
              path: pipelines.yml
            - key: logstash.conf
              path: logstash.conf
            - key: es1.conf
              path: es1.conf
            - key: es2.conf
              path: es2.conf

Does it stop immediately, or does it take some time to stop?

It will stop if any of the persistent queues is full.
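One way to confirm whether a downstream queue is actually full is the Logstash monitoring API on port 9600 (reachable here because logstash.yml sets http.host: "0.0.0.0"). Below is a minimal sketch that computes the fill ratio of each persisted queue from a /_node/stats/pipelines response; it assumes the 7.x response shape where the byte counters sit under queue.capacity (the exact field layout varies between Logstash versions, so treat the names as an assumption and check your own response):

```python
def queue_fill(stats: dict) -> dict:
    """Return {pipeline_id: fill_ratio} for pipelines with a persisted queue."""
    out = {}
    for pid, pipeline in stats.get("pipelines", {}).items():
        q = pipeline.get("queue", {})
        if q.get("type") != "persisted":
            continue
        # 7.x nests the byte counters under "capacity"; fall back to the
        # flattened layout used by some other versions.
        cap = q.get("capacity", q)
        used = cap.get("queue_size_in_bytes", 0)
        limit = cap.get("max_queue_size_in_bytes", 0)
        out[pid] = used / limit if limit else 0.0
    return out

# Live usage against the monitoring API (run inside the pod or via port-forward):
#   import json, urllib.request
#   stats = json.load(urllib.request.urlopen("http://localhost:9600/_node/stats/pipelines"))
#   print(queue_fill(stats))

# Canned sample mimicking a completely full 1gb queue:
sample = {"pipelines": {"buffered-es-host2": {"queue": {
    "type": "persisted",
    "capacity": {"queue_size_in_bytes": 1073741824,
                 "max_queue_size_in_bytes": 1073741824}}}}}
print(queue_fill(sample))  # a full queue reports a ratio of 1.0
```

A ratio near 1.0 on buffered-es-host2 while the intake pipeline stalls would confirm the back-pressure explanation.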

What do you have in the Logstash logs? Please share your logs.

@leandrojmp It stops within 1 to 2 minutes of ext-se:9200 becoming unavailable. Moreover, I produce about 200 log events/sec.

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.11.1.jar) to method sun.nio.ch.NativeThread.signal(long)
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2022-11-18 17:38:46.036 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2022-11-18 17:38:46.044 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[INFO ] 2022-11-18 17:38:46.293 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.7.1"}
[INFO ] 2022-11-18 17:38:46.299 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"c88fdeb4-7cd2-4ec2-8551-6a6ee593027f", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2022-11-18 17:38:47.480 [Converge PipelineAction::Create<intake>] Reflections - Reflections took 39 ms to scan 1 urls, producing 21 keys and 41 values
[INFO ] 2022-11-18 17:38:47.947 [Converge PipelineAction::Create<intake>] QueueUpgrade - No PQ version file found, upgrading to PQ v2.
[INFO ] 2022-11-18 17:38:47.953 [Converge PipelineAction::Create<buffered-es-host2>] QueueUpgrade - No PQ version file found, upgrading to PQ v2.
[INFO ] 2022-11-18 17:38:47.967 [Converge PipelineAction::Create<buffered-es-host1>] QueueUpgrade - No PQ version file found, upgrading to PQ v2.
[WARN ] 2022-11-18 17:38:48.179 [[intake]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2022-11-18 17:38:48.181 [[intake]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"intake", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, "pipeline.sources"=>["/usr/share/logstash/config/logstash.conf"], :thread=>"#<Thread:0x6d23e253@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:121 run>"}
[INFO ] 2022-11-18 17:38:48.706 [[buffered-es-host2]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://ext-se:9200/]}}
[INFO ] 2022-11-18 17:38:48.706 [[buffered-es-host1]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://eric-data-search-engine:9200/]}}
[WARN ] 2022-11-18 17:38:48.903 [[buffered-es-host1]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://eric-data-search-engine:9200/"}
[WARN ] 2022-11-18 17:38:48.903 [[buffered-es-host2]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://ext-se:9200/"}
[INFO ] 2022-11-18 17:38:49.147 [[intake]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2022-11-18 17:38:49.210 [[intake]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"intake"}
[INFO ] 2022-11-18 17:38:49.252 [[buffered-es-host1]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2022-11-18 17:38:49.252 [[buffered-es-host1]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2022-11-18 17:38:49.252 [[buffered-es-host2]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2022-11-18 17:38:49.253 [[buffered-es-host2]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2022-11-18 17:38:49.255 [[buffered-es-host1]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//eric-data-search-engine:9200"]}
[INFO ] 2022-11-18 17:38:49.261 [[buffered-es-host2]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//ext-se:9200"]}
[WARN ] 2022-11-18 17:38:49.268 [[buffered-es-host1]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2022-11-18 17:38:49.269 [[buffered-es-host1]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"buffered-es-host1", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, "pipeline.sources"=>["/usr/share/logstash/config/es1.conf"], :thread=>"#<Thread:0x5f0e97bf@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:121 run>"}
[WARN ] 2022-11-18 17:38:49.282 [[buffered-es-host2]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2022-11-18 17:38:49.282 [[buffered-es-host2]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"buffered-es-host2", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, "pipeline.sources"=>["/usr/share/logstash/config/es2.conf"], :thread=>"#<Thread:0x475d0518@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:121 run>"}
[INFO ] 2022-11-18 17:38:49.295 [Ruby-0-Thread-18: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:41] elasticsearch - Using default mapping template
[INFO ] 2022-11-18 17:38:49.295 [Ruby-0-Thread-19: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:41] elasticsearch - Using default mapping template
[INFO ] 2022-11-18 17:38:49.343 [Ruby-0-Thread-18: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:41] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2022-11-18 17:38:49.343 [Ruby-0-Thread-19: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:41] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2022-11-18 17:38:49.431 [[buffered-es-host2]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"buffered-es-host2"}
[INFO ] 2022-11-18 17:38:49.431 [[intake]<beats] Server - Starting server on port: 5044
[INFO ] 2022-11-18 17:38:49.433 [[buffered-es-host1]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"buffered-es-host1"}


There is no error in this log; you need to share the logs from when it stops sending messages.

Also, if you produce 200 events/sec and it stops about 2 minutes after one of the outputs becomes unavailable, your persistent queue could be filling up, depending on the size of your documents. If a persistent queue is full, it will stop the output for both pipelines.
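It is worth sanity-checking that against the numbers: for a 1gb persisted queue to fill within the observed 1-2 minutes at 200 events/sec, each serialized event would have to average roughly 45-90 KB. A quick back-of-the-envelope check, using only the queue size and event rate reported in this thread:

```python
# How large would each event need to be for a 1gb persisted queue
# to fill in the observed 1-2 minutes at 200 events/sec?
queue_bytes = 1 * 1024**3   # queue.max_bytes: 1gb
rate = 200                  # events per second, as reported above

for minutes in (1, 2):
    events = rate * minutes * 60
    print(f"{minutes} min: {events} events -> {queue_bytes // events} bytes/event")
```

If the events are much smaller than that, the queue filling up may not be the whole story, which is why the logs from the moment it stops matter.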

This is expected behaviour and is described in the documentation.

If any of the persistent queues of the downstream pipelines (in the example above, buffered-es and buffered-http ) become full, both outputs will stop.
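If the queue really is filling at that rate, raising queue.max_bytes on the downstream pipelines buys time for the dead output to recover, but it does not remove the back-pressure; the isolation only holds while the queue has room. A sketch against the pipelines.yml above (10gb is an arbitrary example value, and the volume backing path.queue must be large enough to hold it and should be persistent storage, not the container filesystem):

```yaml
- pipeline.id: buffered-es-host2
  queue.type: persisted
  queue.max_bytes: 10gb   # example value; must fit on the volume backing path.queue
  path.config: "/usr/share/logstash/config/es2.conf"
```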

But you need to share the logs from when it stops sending data; they will show whether the persisted queue is full.

@leandrojmp I am not seeing any logs indicating that the persistent queue is full.
Below are the updated logs, covering the timeouts and the Error/Warn and Info entries.

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.11.1.jar) to method sun.nio.ch.NativeThread.signal(long)
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2022-11-18 17:38:46.036 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2022-11-18 17:38:46.044 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[INFO ] 2022-11-18 17:38:46.293 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.7.1"}
[INFO ] 2022-11-18 17:38:46.299 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"c88fdeb4-7cd2-4ec2-8551-6a6ee593027f", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2022-11-18 17:38:47.480 [Converge PipelineAction::Create<intake>] Reflections - Reflections took 39 ms to scan 1 urls, producing 21 keys and 41 values
[INFO ] 2022-11-18 17:38:47.947 [Converge PipelineAction::Create<intake>] QueueUpgrade - No PQ version file found, upgrading to PQ v2.
[INFO ] 2022-11-18 17:38:47.953 [Converge PipelineAction::Create<buffered-es-host2>] QueueUpgrade - No PQ version file found, upgrading to PQ v2.
[INFO ] 2022-11-18 17:38:47.967 [Converge PipelineAction::Create<buffered-es-host1>] QueueUpgrade - No PQ version file found, upgrading to PQ v2.
[WARN ] 2022-11-18 17:38:48.179 [[intake]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2022-11-18 17:38:48.181 [[intake]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"intake", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, "pipeline.sources"=>["/usr/share/logstash/config/logstash.conf"], :thread=>"#<Thread:0x6d23e253@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:121 run>"}
[INFO ] 2022-11-18 17:38:48.706 [[buffered-es-host2]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://ext-se:9200/]}}
[INFO ] 2022-11-18 17:38:48.706 [[buffered-es-host1]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://eric-data-search-engine:9200/]}}
[WARN ] 2022-11-18 17:38:48.903 [[buffered-es-host1]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://eric-data-search-engine:9200/"}
[WARN ] 2022-11-18 17:38:48.903 [[buffered-es-host2]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://ext-se:9200/"}
[INFO ] 2022-11-18 17:38:49.147 [[intake]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2022-11-18 17:38:49.210 [[intake]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"intake"}
[INFO ] 2022-11-18 17:38:49.252 [[buffered-es-host1]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2022-11-18 17:38:49.252 [[buffered-es-host1]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2022-11-18 17:38:49.252 [[buffered-es-host2]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2022-11-18 17:38:49.253 [[buffered-es-host2]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2022-11-18 17:38:49.255 [[buffered-es-host1]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//eric-data-search-engine:9200"]}
[INFO ] 2022-11-18 17:38:49.261 [[buffered-es-host2]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//ext-se:9200"]}
[WARN ] 2022-11-18 17:38:49.268 [[buffered-es-host1]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2022-11-18 17:38:49.269 [[buffered-es-host1]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"buffered-es-host1", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, "pipeline.sources"=>["/usr/share/logstash/config/es1.conf"], :thread=>"#<Thread:0x5f0e97bf@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:121 run>"}
[WARN ] 2022-11-18 17:38:49.282 [[buffered-es-host2]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2022-11-18 17:38:49.282 [[buffered-es-host2]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"buffered-es-host2", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, "pipeline.sources"=>["/usr/share/logstash/config/es2.conf"], :thread=>"#<Thread:0x475d0518@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:121 run>"}
[INFO ] 2022-11-18 17:38:49.295 [Ruby-0-Thread-18: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:41] elasticsearch - Using default mapping template
[INFO ] 2022-11-18 17:38:49.295 [Ruby-0-Thread-19: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:41] elasticsearch - Using default mapping template
[INFO ] 2022-11-18 17:38:49.343 [Ruby-0-Thread-18: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:41] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2022-11-18 17:38:49.343 [Ruby-0-Thread-19: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:41] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2022-11-18 17:38:49.431 [[buffered-es-host2]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"buffered-es-host2"}
[INFO ] 2022-11-18 17:38:49.431 [[intake]<beats] Server - Starting server on port: 5044
[INFO ] 2022-11-18 17:38:49.433 [[buffered-es-host1]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"buffered-es-host1"}
[INFO ] 2022-11-18 17:38:49.448 [Agent thread] agent - Pipelines running {:count=>3, :running_pipelines=>[:"buffered-es-host1", :"buffered-es-host2", :intake], :non_running_pipelines=>[]}
[INFO ] 2022-11-18 17:38:49.499 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[WARN ] 2022-11-21 01:54:55.077 [[buffered-es-host2]>worker0] elasticsearch - Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se {:url=>http://ext-se:9200/, :error_message=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[WARN ] 2022-11-21 01:54:55.078 [[buffered-es-host2]>worker1] elasticsearch - Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se {:url=>http://ext-se:9200/, :error_message=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[WARN ] 2022-11-21 01:54:55.079 [[buffered-es-host2]>worker2] elasticsearch - Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known {:url=>http://ext-se:9200/, :error_message=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[ERROR] 2022-11-21 01:54:55.085 [[buffered-es-host2]>worker0] elasticsearch - Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[ERROR] 2022-11-21 01:54:55.085 [[buffered-es-host2]>worker2] elasticsearch - Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[ERROR] 2022-11-21 01:54:55.085 [[buffered-es-host2]>worker1] elasticsearch - Attempted to send a bulk request to elasticsearch' but Elasticsearch appears to be unreachable or down! {:error_message=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :will_retry_in_seconds=>2}
[WARN ] 2022-11-21 01:54:56.674 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[ERROR] 2022-11-21 01:54:57.120 [[buffered-es-host2]>worker0] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
[ERROR] 2022-11-21 01:54:57.125 [[buffered-es-host2]>worker2] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
[ERROR] 2022-11-21 01:54:57.126 [[buffered-es-host2]>worker1] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
[ERROR] 2022-11-21 01:55:01.142 [[buffered-es-host2]>worker0] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8}
[ERROR] 2022-11-21 01:55:25.185 [[buffered-es-host2]>worker2] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>32}
[WARN ] 2022-11-21 01:55:26.750 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:55:31.754 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:55:36.763 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:55:41.767 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:55:46.774 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:55:51.779 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:55:56.787 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[ERROR] 2022-11-21 01:55:57.184 [[buffered-es-host2]>worker1] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}
[ERROR] 2022-11-21 01:57:01.193 [[buffered-es-host2]>worker0] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}
[ERROR] 2022-11-21 01:57:01.202 [[buffered-es-host2]>worker2] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", 
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:58:21.997 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:58:27.019 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:58:32.032 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:58:37.041 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:58:42.046 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:58:47.054 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:58:52.061 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:58:57.070 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:59:02.075 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:59:07.084 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[ERROR] 2022-11-21 01:59:09.221 [[buffered-es-host2]>worker1] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}
[ERROR] 2022-11-21 01:59:09.222 [[buffered-es-host2]>worker0] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}
[ERROR] 2022-11-21 01:59:09.226 [[buffered-es-host2]>worker2] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}
[WARN ] 2022-11-21 01:59:12.088 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:59:17.097 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:59:22.104 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:59:27.112 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:59:32.115 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:59:37.130 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:59:42.134 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:59:47.149 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 01:59:52.153 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 01:59:57.160 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 02:00:02.164 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 02:00:07.171 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 02:00:12.176 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[ERROR] 2022-11-21 02:00:13.234 [[buffered-es-host2]>worker0] elasticsearch - Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>64}
[WARN ] 2022-11-21 02:00:37.255 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[WARN ] 2022-11-21 02:00:42.259 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}
[WARN ] 2022-11-21 02:00:47.267 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se: Name or service not known"}
[INFO ] 2022-11-21 07:06:30.370 [[buffered-es-host1]>worker2] elasticsearch - retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[logstash-beta-2022.11.21][3] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[logstash-beta-2022.11.21][3]] containing [15] requests]"})
[INFO ] 2022-11-21 07:06:30.370 [[buffered-es-host1]>worker2] elasticsearch - retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[logstash-beta-2022.11.21][1] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[logstash-beta-2022.11.21][1]] containing [14] requests]"})
[INFO ] 2022-11-21 07:06:30.370 [[buffered-es-host1]>worker2] elasticsearch - retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[logstash-beta-2022.11.21][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[logstash-beta-2022.11.21][0]] containing [9] requests]"})
[INFO ] 2022-11-21 07:06:30.370 [[buffered-es-host1]>worker2] elasticsearch - retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[logstash-beta-2022.11.21][3] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[logstash-beta-2022.11.21][3]] containing [15] requests]"})
[INFO ] 2022-11-21 07:06:30.370 [[buffered-es-host1]>worker2] elasticsearch - Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>46}
[WARN ] 2022-11-21 07:06:32.738 [Ruby-0-Thread-17: /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-elasticsearch-10.4.2-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://ext-se:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://ext-se:9200/][Manticore::ResolutionFailure] ext-se"}

Thanks
Ravi R

@leandrojmp is there a way I could check the pipeline queue size and stop messages from being sent to a receiver when its capacity is full?

That way the pipeline queue would stay healthy, and the loss of logs would be limited to the unhealthy receiver.

There is not.

Also, is that everything in your logs? There is nothing in what you shared about persistent queues being full. Are there more logs that you didn't share?

I personally find the "output isolator" name a little misleading, as it gives you the impression that the pipelines are fully isolated from each other. They are not, and the documentation is clear about that.

Both pipelines use persistent queues, and if the queue of one output fills up, it will also stop the other output.

To truly isolate two or more pipelines, you would need one pipeline to send the data to an intermediate point, and have the other pipelines consume the data from that intermediate point.

You can do that using Kafka: one pipeline sends data to Kafka, and then your other two pipelines consume from Kafka.
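A minimal sketch of what that could look like, using the kafka output and input plugins (the broker address `kafka:9092`, the topic name, and the group ids below are placeholders, not settings from your environment):

```
# intake pipeline output - ships everything to a Kafka topic
output {
  kafka {
    bootstrap_servers => "kafka:9092"   # external Kafka cluster (placeholder)
    topic_id => "logstash-intake"
    codec => json
  }
}

# es1.conf / es2.conf input - each consumer pipeline uses its OWN group_id,
# so each tracks its own offset and one falling behind does not block the other
input {
  kafka {
    bootstrap_servers => "kafka:9092"
    topics => ["logstash-intake"]
    group_id => "es-host1"              # "es-host2" in the other pipeline
    codec => json
  }
}
```

Because each consumer group keeps its own offset in Kafka, the pipeline for the healthy Elasticsearch keeps consuming while the unhealthy one simply falls behind, bounded by the topic's retention settings.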


@leandrojmp

So if I need to set up Kafka for my Logstash to send and receive log events:
Do I need an external Kafka setup with some defined storage to hold the logs?
Or are the Kafka input and output plugins in Logstash alone enough to do this?

Thanks.

You will need an external Kafka cluster to hold your messages; the Logstash inputs and outputs will just point to that Kafka cluster.
