Handle short time connection issues - Filebeat

Hello Team,

We are running a 3-node Elasticsearch cluster. All our Kubernetes clusters have Filebeat configured, which pushes logs to Elasticsearch.

We have a requirement to upgrade Elasticsearch. When I performed a POC upgrade, I observed that some logs went missing during the upgrade window, which lasted about 2 hours.

We found the following error during this period:

no server is available to handle this request

Can we do something at the Filebeat level so that Filebeat holds the data for a certain time before releasing it, or so that Filebeat waits until Elasticsearch becomes available again?

This is our input config:

    - type: docker
      containers.ids:
        - '*'
      processors:
        - decode_json_fields:
            fields: ['message']
            target: ""
            process_array: false
            max_depth: 1
            overwrite_keys: true
            add_error_key: true
        - add_kubernetes_metadata: ~
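For context on what we have considered so far: Filebeat tracks file offsets in its registry and retries failed batches, so in principle it should resume once Elasticsearch is back, provided the log files are not rotated away in the meantime. A sketch of the output and queue settings we are thinking of tuning to make Filebeat keep retrying longer during the outage (assuming Filebeat 7.x defaults; values here are illustrative, not tested):

    # Larger in-memory queue so events accumulate while the output is down
    queue.mem:
      events: 8192
      flush.min_events: 512
      flush.timeout: 5s

    output.elasticsearch:
      hosts: ["https://es-node1:9200", "https://es-node2:9200", "https://es-node3:9200"]
      # Keep retrying failed batches instead of giving up
      max_retries: 3
      # Exponential backoff between reconnection attempts
      backoff.init: 1s
      backoff.max: 60s

Would tuning these be enough to cover a 2-hour upgrade window, or is there a recommended pattern (e.g. a rolling upgrade so at least one node stays reachable)?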
