Logstash on EKS and AWS Elasticsearch - 401 auth errors

Hi, we have been getting 401 auth errors since mid-January. We reached out to the AWS support team, but they were unable to help. We have had Logstash running in our production environment for about six months, but when we went to update a few indexes in dev we were unable to re-deploy it to dev/lab because of the 401 auth errors. We are using the Logstash Helm charts from https://helm.elastic.co/
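
For reference, we deploy the chart roughly like this (release name, namespace, and values-file name are illustrative):

    # Add the Elastic Helm repo and deploy Logstash with the values below
    helm repo add elastic https://helm.elastic.co
    helm repo update
    helm upgrade --install logstash elastic/logstash -f values.yaml -n logging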

Config for production:

    image: "docker.elastic.co/logstash/logstash-oss"
    imageTag: "7.9.3"

    persistence:
      enabled: true

    logstashConfig:
      logstash.yml: |
        http.host: 0.0.0.0
        pipeline.workers: {{ .Values.workers }}
        pipeline.batch.size: {{ .Values.batchSize }}
        log.level: error

    resources:
      requests:
        cpu: {{ .Values.cpuRequest }}
        memory: {{ .Values.memoryRequest }}
      limits:
        cpu: {{ .Values.cpuLimit }}
        memory: {{ .Values.memoryLimit }}

    logstashJavaOpts: {{ .Values.java_opts }}

    service: 
    #  annotations: {}
      type: ClusterIP
      ports:
        - name: beats
          port: 5044
          protocol: TCP
          targetPort: 5044

    logstashPipeline:
      logstash.conf: |
        # Beats input, matching the service port defined above
        input {
          beats {
            port => 5044
          }
        }
        output { stdout { codec => rubydebug } }
      uptime.conf: |
        output {
          if [kubernetes][namespace] == "something-namespace" {
            elasticsearch {
              hosts => ["{{ .Values.host }}:443"]
              manage_template => false
              index => "some-namespace.%{+YYYY.MM.dd}.{{ .Values.env }}"
              user => "{{ .Values.user }}"
              password => "{{ .Values.password }}"
              ilm_enabled => "false"
            }
          } else if [kubernetes][labels][app] {
            elasticsearch {
              hosts => ["{{ .Values.host }}:443"]
              manage_template => false
              index => "%{[kubernetes][namespace]}.%{[kubernetes][labels][app]}.%{+YYYY.MM.dd}.{{ .Values.env }}"
              user => "{{ .Values.user }}"
              password => "{{ .Values.password }}"
              ilm_enabled => "false"
            }
          } else if [kubernetes][labels][app.kubernetes.io/instance] {
            elasticsearch {
              hosts => ["{{ .Values.host }}:443"]
              manage_template => false
              index => "%{[kubernetes][namespace]}.%{[kubernetes][labels][app.kubernetes.io/instance]}.{{ .Values.env }}"
              user => "{{ .Values.user }}"
              password => "{{ .Values.password }}"
              ilm_enabled => "false"
            }
          }
        }

The config we are using for dev/lab:

    image: "docker.elastic.co/logstash/logstash-oss"
    imageTag: "7.9.3"

    persistence:
      enabled: true

    logstashConfig:
      logstash.yml: |
        http.host: 0.0.0.0
        pipeline.workers: {{ .Values.workers }}
        pipeline.batch.size: {{ .Values.batchSize }}
        log.level: error
      log4j2.properties: |
        logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
        logger.elasticsearchoutput.level = debug

    resources:
      requests:
        cpu: {{ .Values.cpuRequest }}
        memory: {{ .Values.memoryRequest }}
      limits:
        cpu: {{ .Values.cpuLimit }}
        memory: {{ .Values.memoryLimit }}

    logstashJavaOpts: {{ .Values.java_opts }}

    service: 
    #  annotations: {}
      type: ClusterIP
      ports:
        - name: beats
          port: 5044
          protocol: TCP
          targetPort: 5044

    logstashPipeline:
      logstash.conf: |
        input {
          beats {
            port => 5044
          }
        }
        output {
          if [kubernetes][namespace] == "some-namespace" {
            elasticsearch {
              hosts => ["{{ .Values.host }}:443"]
              ssl => true
              manage_template => false
              index => "some-namespace.%{+YYYY.MM.dd}.{{ .Values.env }}"
              user => "{{ .Values.user }}"
              password => "{{ .Values.password }}"
              ilm_enabled => "false"
            }
          } else if [kubernetes][labels][app] {
            elasticsearch {
              hosts => ["{{ .Values.host }}:443"]
              ssl => true
              manage_template => false
              index => "%{[kubernetes][namespace]}.%{[kubernetes][labels][app]}.%{+YYYY.MM.dd}.{{ .Values.env }}"
              user => "{{ .Values.user }}"
              password => "{{ .Values.password }}"
              ilm_enabled => "false"
            }
          } else if [kubernetes][labels][app.kubernetes.io/instance] {
            elasticsearch {
              hosts => ["{{ .Values.host }}:443"]
              ssl => true
              manage_template => false
              index => "%{[kubernetes][namespace]}.%{[kubernetes][labels][app.kubernetes.io/instance]}.{{ .Values.env }}"
              user => "{{ .Values.user }}"
              password => "{{ .Values.password }}"
              ilm_enabled => "false"
            }
          }
        }

I enabled debug logging for the Elasticsearch output (via the log4j2.properties above) to see what was causing the issue, and I see a few warnings about an OpenSSL package:

    WARNING: An illegal reflective access operation has occurred
    WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby947080253144246358jopenssl.jar) to field java.security.MessageDigest.provider
    WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
    WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
    WARNING: All illegal access operations will be denied in a future release
    Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
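
This output comes from tailing the pod logs, e.g. (release name and namespace are illustrative):

    # Follow the Logstash logs from the chart's StatefulSet
    kubectl logs -f statefulset/logstash-logstash -n logging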

Could this be causing an auth issue with AWS Elasticsearch? Usually it's something to do with X-Pack, but all of those settings are already disabled and working in the prod environment. Logstash is configured to use a master user on Elasticsearch, which AWS support confirmed is set up properly.
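
One sanity check that takes Logstash out of the picture entirely is hitting the domain endpoint with the same credentials via curl (the endpoint below is a placeholder, not our real domain):

    # A 200 with cluster info means the credentials work;
    # a 401 here means the rejection happens before Logstash is involved at all
    curl -sS -u "$ES_USER:$ES_PASS" "https://<your-domain>.<region>.es.amazonaws.com/"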

We also tried changing versions, and no version of Logstash from https://helm.elastic.co/ works.
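
We cycled through chart versions roughly like this (the pinned version is just one example):

    # List the available chart versions, then pin one explicitly
    helm search repo elastic/logstash --versions
    helm upgrade --install logstash elastic/logstash --version 7.9.3 -f values.yaml -n logging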

Update: AWS support sent over instructions to set up Logstash from scratch on EC2 using a 6.x version, and we hit the same 401 error.

AWS support tried replicating the problem and hit the same 401 auth errors we did. The ticket has been escalated to the next level; the issue appears to be on AWS's side.
