Attempted to resurrect connection to dead ES instance, but got an error

```
Attempted to resurrect connection to dead ES instance, but got an error. {:error_type=>LogStash::Outputs::Elasticsearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL
```

Hi @snalaband,

Welcome to the community! Can you check your Logstash configuration? It looks like you're getting an unauthorized (401) error. I would have a look at this thread for an example configuration that should solve the issue.
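For reference, here is a minimal sketch of an `elasticsearch` output with explicit credentials. The host, user, password variable, and certificate path are placeholders for illustration, not values taken from this thread:

```
output {
  elasticsearch {
    # Placeholder URL - replace with your cluster endpoint
    hosts => ["https://your-es-host:9200"]
    # The user must exist in Elasticsearch and have the roles
    # needed to write to the target indices
    user => "logstash_writer"
    # Read the password from an environment variable rather
    # than hard-coding it in the pipeline file
    password => "${LOGSTASH_PASSWORD}"
    # CA certificate used to verify the HTTPS connection
    cacert => "/path/to/cacert.pem"
  }
}
```

A 401 at connection time usually means the user/password pair is rejected outright, so it is worth verifying those credentials before anything else.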

Welcome to our community! :smiley:

Please don't just post an error with no other information. Take a look at Dec 10th, 2022: [EN] Asking top notch technical questions to get you help quicker! and update your post with more information as it'll allow us to help you better.


Hi Carly,
Thanks for your reply. This is my listener Logstash configuration:
```yaml
logstashPipeline:
  listener.conf: |
    input {
      beats {
        port => 5044
        add_field => {"input_source" => "beats"}
      }

      http {
        port => 8280
        add_field => {"input_source" => "http"}
      }

      http {
        port => 443
        add_field => {"input_source" => "https"}
      }

      syslog {
        port => 5144
        add_field => {"input_source" => "syslog"}
      }

      graphite {
        port => 5244
        add_field => {"input_source" => "graphite"}
      }

      tcp {
        port => 12345
        add_field => {"input_source" => "tcp"}
      }

      gelf {
        port => 12201
        host => "X.X.X.X"
        use_tcp => true
        use_udp => true
        add_field => {"input_source" => "gelf"}
      }
    }
    output {
      kafka {
        id => "unfiltered-logs"
        bootstrap_servers => "X.X.X.X"
        codec => json
        max_request_size => 100000000
        buffer_memory => 100000000
        topic_id => "kubernetes"
      }
    }

extraEnvs:
  - name: XPACK_MONITORING_ELASTICSEARCH_HOSTS
    value: elk-URL
  - name: XPACK_MONITORING_ELASTICSEARCH_PASSWORD
    valueFrom:
      secretKeyRef:
        name: es-creds
        key: password
  - name: XPACK_MONITORING_ELASTICSEARCH_SSL_CERTIFICATE_AUTHORITY
    value: /config/cacert.pem
  - name: XPACK_MONITORING_ELASTICSEARCH_USERNAME
    value: logstash_system
  - name: NODE_NAME
    value: k8s-listener
  - name: LOG_LEVEL
    value: info
  - name: PIPELINE_BATCH_SIZE
    value: "150"
  - name: PIPELINE_BATCH_DELAY
    value: "1000"
  - name: kafka server
    value: kafka url
  - name: CONFIG_TEST_AND_EXIT
    value: "false"

logstashJavaOpts: "-Xmx3g -Xms3g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.threshold=0 -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/urandom -Djava.security.networkaddress.cache.negative.ttl=0 -Djava.security.networkaddress.cache.ttl=300"

resources:
  requests:
    cpu: 500m
    memory: 4Gi
  limits:
    cpu: "2"
    memory: 6Gi

volumeClaimTemplate: {}

rbac:
  create: true
  serviceAccountName: ""

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim

service:
  annotations: {}
  type: LoadBalancer
  loadBalancerIP: load balancer ip
  ports:
    - name: beats
      port: 5044
```

Thanks for sharing @snalaband. I don't see anything obviously wrong in your configuration. As @warkolm says, more information about your issue, in addition to the error itself, would be useful.

Can you give us more details, such as your Elasticsearch and Logstash versions and some background on the dead instance being resurrected, as per the tips for asking questions shared above?
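In the meantime, one quick way to check whether the monitoring credentials are the problem is to call Elasticsearch directly with the same user and CA certificate that Logstash is configured with. This is only a sketch: the URL is a placeholder, and the username and certificate path are taken from the config posted above, so adjust them to your environment:

```shell
# Prompts for the logstash_system password, then calls the cluster root
# endpoint over HTTPS using the same CA certificate Logstash uses.
# A 200 with cluster info means the credentials are accepted;
# a 401 here reproduces the error outside of Logstash.
curl -u logstash_system \
  --cacert /config/cacert.pem \
  https://your-es-host:9200/
```

If this returns 401, the password in the `es-creds` secret likely does not match the one stored in Elasticsearch, and resetting the `logstash_system` user's password would be the next step.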

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.