ELK Elasticsearch - bind_dn / secure_bind_password: LDAP login to Kibana with an LDAP user:password not working

ELK version: 7.17.10

  1. Deployed elasticsearch and Kibana in a K8s cluster.
  2. Both ES and Kibana are up.
Charts
----------
NAME         	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART                        	APP VERSION
elasticsearch	logging  	1       	2023-10-09 22:36:22.805826199 +0000 UTC	deployed	elasticsearch-7.17.10-spacex.6.3	7.17.10    
kibana       	logging  	1       	2023-10-09 22:43:26.513922363 +0000 UTC	deployed	kibana-7.17.10-spacex.6.1       	7.17.10    


Pods
---------
NAME                                            READY   STATUS    RESTARTS   AGE
elk-master-0                                      3/3     Running   0          45h
elk-master-1                                      3/3     Running   0          45h
elk-master-2                                      3/3     Running   0          45h
kibana-kibana-c479b8d7f-ddhzs                     3/3     Running   0          46h

  1. Using native username/password (local users), I'm able to log in to the Kibana UI.
    a. The elastic user login works as well.

  2. LDAP server: ldaps://ldap (i.e. port 636, as it's secure).

  3. The user I'm trying to enter in the Kibana UI is nklbobbyb, with its password.

  4. Using ldapsearch or ldapwhoami from the CLI to verify that the user exists in LDAP and that its password authenticates, everything works; I don't get any errors from ldapsearch/ldapwhoami.

  5. My goal is to set up LDAP-based access for the Kibana login (which uses the Elasticsearch .yml configuration) via xpack's bind_dn; "secure_bind_password" has been injected successfully into the ES keystore at the right path, i.e. xpack.security.authc.realms.ldap.ldap1.secure_bind_password.
    a. i.e. running elasticsearch-keystore show displays the password for the bind_dn user (elk-bind)

  6. I can curl the ES URL on :9200 and it shows the cluster is healthy, etc. The same works for /_cluster/settings and /_cluster/health as well.

$ curl http://127.0.0.1:9200 -u elastic:${p} -v

* About to connect() to 127.0.0.1 port 9200 (#0)
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 9200 (#0)
* Server auth using Basic with user 'elastic'
> GET / HTTP/1.1
> Authorization: Basic Kmxkc3RpYzpGoodLuckWithThat9OEotQ3e6
> User-Agent: curl/7.29.0
> Host: 127.0.0.1:9200
> Accept: */*
> 
< HTTP/1.1 200 OK
< X-elastic-product: Elasticsearch
< content-type: application/json; charset=UTF-8
< content-length: 544
< 
{
  "name" : "elk-master-0",
  "cluster_name" : "elk_cluster",
  "cluster_uuid" : "31tgxJzTSdGRbn9PbWDt7k",
  "version" : {
    "number" : "7.17.10",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "fecd68e3150eda0c307ab9a9d7557f5d5fd71349",
    "build_date" : "2023-04-23T05:33:18.138275597Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
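The CLI checks from step 4 can be sketched as small shell wrappers (hypothetical helper names; the server URL and DNs are the ones from the realm config shown further below):

```shell
# Hypothetical helpers mirroring the two binds the LDAP realm performs:
# first bind as the service account, then search for the login user's DN.
ldap_bind_check() {
  # $1 = bind password for uid=elk-bind
  ldapwhoami -x -H ldaps://ldap \
    -D "uid=elk-bind,ou=ServiceAccounts,dc=infra.spacex" -w "$1"
}
ldap_user_search() {
  # $1 = bind password, $2 = login name typed into Kibana (e.g. nklbobbyb)
  ldapsearch -x -H ldaps://ldap \
    -D "uid=elk-bind,ou=ServiceAccounts,dc=infra.spacex" -w "$1" \
    -b "ou=People,dc=infra.spacex" "(uid=$2)" dn
}
```

If both succeed from inside the cluster network, the bind_dn, user_search.base_dn, and user_search.filter values themselves are consistent with what LDAP expects.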
  1. I use the following 2 YAML files while running helm install <app> <chart.tgz> --values <path/to/es_or_kibana_file.yml> for the elasticsearch and kibana charts. The YAML file is used for overrides / passing key=value settings.

NOTE: Both charts come up fine and I can see pods for both are in Running state in Rancher > my Cluster > Workloads > Pods (see output above bullet 2).

Following is my elasticsearch yml file for xpack settings:

clusterName: "elk_cluster"
masterService: "elk-master"
minimumMasterNodes: 1


podAnnotations:
  consul.hashicorp.com/connect-inject: true
  consul.hashicorp.com/connect-service: elasticsearch
  vault.hashicorp.com/agent-inject: true
  vault.hashicorp.com/agent-run-as-user: 22020
  vault.hashicorp.com/agent-run-as-group: 22020
  vault.hashicorp.com/role: elasticsearch
  vault.hashicorp.com/auth-path: auth/kubernetes-secure
  vault.hashicorp.com/tls-skip-verify: true
  vault.hashicorp.com/agent-inject-secret-elk-bind: "secret/data/passwords/ldap/elk-bind"
  vault.hashicorp.com/agent-inject-perms-elk-bind: "0644"
  vault.hashicorp.com/agent-inject-template-elk-bind: |
    {{- with secret "secret/data/passwords/ldap/elk-bind" -}}
    {{ .Data.data.value }}
    {{- end }}
  vault.hashicorp.com/agent-inject-secret-elastic: secret/data/passwords/elk/elastic
  vault.hashicorp.com/agent-inject-command-elastic: chmod 400 /vault/secrets/elastic
  vault.hashicorp.com/agent-inject-template-elastic: |
    {{- with secret "secret/data/passwords/elk/elastic" -}}
    {{ .Data.data.value }}
    {{- end }}
  vault.hashicorp.com/agent-inject-secret-kibana: secret/data/passwords/elk/kibana
  vault.hashicorp.com/agent-inject-command-kibana: chmod 400 /vault/secrets/kibana
  vault.hashicorp.com/agent-inject-template-kibana: |
    {{- with secret "secret/data/passwords/elk/kibana" -}}
    {{ .Data.data.value }}
    {{- end }}

extraEnvs:
  - name: ELASTIC_USERNAME
    value: elastic 
  - name: ELASTIC_PASSWORD_FILE
    value: /vault/secrets/elastic

extraInitContainers:
  - name: ulimit-1
    image: artifactory:8443/docker/elastic/elasticsearch:7.17.10
    command: ["/bin/sh", "-c", "ulimit -n 65536"]
    securityContext:
      privileged: true
  - name: ulimit-2
    image: artifactory:8443/docker/elastic/elasticsearch:7.17.10
    command: ["/bin/sh", "-c", "ulimit -u 4096"]
    securityContext:
      privileged: true
  - name: wait-for-consul
    image: artifactory:8443/docker/elastic/elasticsearch:7.17.10
    command:
    - "/bin/bash"
    - "-c"
    - |
      consul_url=https://$CONSUL_NAME:8501/v1/health/node/$NODE_NAME
      while [[ "$(curl -sk -o /dev/null -w '%{http_code}\n' $consul_url)" != "200" ]] && [[ "$(curl -sk $consul_url)" = "[]" ]]; \
      do echo waiting for consul; sleep 5; done
    env:
      - name: CONSUL_NAME
        value: consul-consul-server-0.consul-consul-server.consul.svc.cluster.local
      - name: NODE_NAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName

extraVolumes:
  - name: elasticsearch-tls-cert
    secret:
      secretName: elasticsearch-cert
  - name: cacerts
    persistentVolumeClaim:
      claimName: cacerts
  - name: elasticsearch-data
    persistentVolumeClaim:
      claimName: elasticsearch-data
  - name: elasticsearch-backup
    persistentVolumeClaim:
      claimName: elasticsearch-backup

extraVolumeMounts:
  - name: elasticsearch-tls-cert
    readOnly: true
    mountPath: /usr/share/elasticsearch/config/elasticsearch.crt
    subPath: tls.crt
  - name: elasticsearch-tls-cert
    readOnly: true
    mountPath: /usr/share/elasticsearch/config/elasticsearch.key
    subPath: tls.key
  - name: cacerts
    mountPath: /var/local/cacerts.d
  - name: cacerts
    mountPath: /usr/share/elasticsearch/config/CA_cert.crt
    subPath: CA_cert.crt
  - name: elasticsearch-data
    mountPath: /usr/share/elasticsearch/data
  - name: elasticsearch-backup
    mountPath: /usr/backup

esJavaOpts: "-Xmx25g -Xms25g"
breakerLimit: "80%"

resources:
  requests:
    cpu: "1000m" 
    memory: "4Gi"
  limits:
    cpu: "8000m"
    memory: "32Gi"

initResources:
  limits:
    cpu: "200m"
    memory: "128Mi"
  requests:
    cpu: "200m"
    memory: "128Mi"

rbac: 
  create: true
  serviceAccountName: "elasticsearch"

podSecurityPolicy:
  create: true

podSecurityContext:
  fsGroup: 22020
  runAsUser: 22020

securityContext:
  runAsUser: 22020

persistence:
  enabled: false

esConfig:
  elasticsearch.yml: |
    path.repo: [ "/usr/backup" ]
    node.max_local_storage_nodes: 3

    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true

    ingest.geoip.downloader.enabled: false

    #index.merge.scheduler.max_thread_count: 1
    cluster.name: elk_cluster
    network.host: "elk-master.logging"
    xpack.license.self_generated.type: trial
    xpack:
      security:
        authc:
          realms:
            ldap:
              ldap1:
                order: 0
                metadata: cn

                # Secure port 636 LDAP connection. No need to define the port
                # explicitly; even if you define it, no behavior changes.
                url: "ldaps://ldap"

                # bind_dn is the user that ELK uses to bind to LDAP. The realm
                # then runs user_search.filter to find the DN of the user
                # entered in the Kibana UI, and authenticates that DN with the
                # password typed into Kibana.
                bind_dn: "uid=elk-bind,ou=ServiceAccounts,dc=infra.spacex"

                # secure_bind_password is already in the elasticsearch-keystore:
                # `elasticsearch-keystore add xpack.security.authc.realms.ldap.ldap1.secure_bind_password`
                # was run, and `elasticsearch-keystore show <same path>` prints
                # the value used for bind_dn uid=elk-bind.
                # -- I don't think the official ELK docs for 7.x/8.x require the
                # following line; saw it in a blog, will try it to see if it helps.
                #secure_bind_password: xpack.security.authc.realms.ldap.ldap1.secure_bind_password

                # NOTE: In my LDAP, elk-bind is set up with uid instead of cn.
                # It's a service account used to bind to LDAP first (uid=elk-bind
                # plus the keystore's secure_bind_password); the realm then
                # searches for the DN of the Kibana login user and uses that
                # user's password to authenticate against LDAP.
                #bind_dn: "cn=elk-bind,ou=serviceaccounts,dc=infra.spacex"
                ssl.verification_mode: none

                #ssl.verification_mode: certificate

                user_search.base_dn: "ou=People,dc=infra.spacex" 
                user_search.filter: "(uid={0})"
                #user_search.filter: "(cn={0})"
                group_search.base_dn: "ou=Groups,dc=infra.spacex"

                # Because role mapping is a file on the filesystem, we don't
                # need to call the role-mapping REST API to map Kibana roles to
                # LDAP groups. IIRC this file is re-read every 5 seconds (default).
                files:
                  role_mapping: "/usr/share/elasticsearch/config/role_mapping.yml"
                unmapped_groups_as_roles: false
            native:
              native1:
                order: 1
    xpack.security.transport.ssl.enabled: true
    #xpack.security.transport.ssl.verification_mode: none
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/elasticsearch.key
    xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.crt
    xpack.security.transport.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/CA_cert.crt" ]
    xpack.security.audit.enabled: true
    xpack.security.audit.logfile.events.ignore_filters.test.users: [ "kibana" ]

    # For testing purposes, I'm not worried about the following secure /
    # encryption settings for now; later.
    # ----------------------------------------------------------------------
    #xpack.security.http.ssl.enabled: true
    #xpack.security.http.ssl.verification_mode: none
    ##xpack.security.http.ssl.verification_mode: certificate
    #xpack.security.http.ssl.key: /usr/share/elasticsearch/config/elasticsearch.key
    #xpack.security.http.ssl.certificate: /usr/share/elasticsearch/config/elasticsearch.crt
    #xpack.security.http.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/CA_cert.crt" ]
    #xpack.security.authc.token.enabled: true

    logger.org.elasticsearch: DEBUG
    logger.org.elasticsearch.http: DEBUG
    logger.org.elasticsearch.transport: DEBUG

  roles.yml: |
    ams:
      cluster: [ "manage_index_templates", "monitor" ]
      indices:
        - names: [ "metricbeat-*", "syslog-*" ]
          privileges: [ 'read' ]
    beat_reader:
      indices: 
        - names: [ "metricbeat-*" ]
          privileges: ["read", "view_index_metadata"]
    beat_writer:
      cluster: [ "manage_index_templates","monitor","manage_ilm" ]
      indices:
        - names: [ "metricbeat-*" ]
          privileges: ["read","write","delete","create_index","manage","manage_ilm"]

    # Kibana 7.16.3 missing manage permissions on indexes
    kibana_supp:
      cluster: [ "all" ]
      indices:
        - names: [ "*" ]
          privileges: ["manage"]
    sec_admin:
      cluster: [ "manage_index_templates", "monitor" ] 
      indices:
        - names: [ 'metricbeat-*', 'syslog-*' ]
          privileges: [ 'read' ]

  role_mapping.yml: |
    ams:
      - "cn=sys_integration,ou=Groups,dc=infra.spacex"
    kibana_admin:
      - "cn=system_admin,ou=Groups,dc=infra.spacex"
    kibana_system:
      - "cn=system_admin,ou=Groups,dc=infra.spacex"
    kibana_user:
      - "cn=ldap_admins,ou=Groups,dc=infra.spacex"
      - "cn=system_admin,ou=Groups,dc=infra.spacex"
    reporting_user:
      - "cn=sys_integration,ou=Groups,dc=infra.spacex"
    superuser:
      - "cn=system_admin,ou=Groups,dc=infra.spacex"
      - "uid=nklbobbyb,ou=people,dc=infra.spacex"

image: artifactory:8443/docker/elastic/elasticsearch

service:
  type: LoadBalancer
  annotations:
    metallb.universe.tf/address-pool: default
    metallb.universe.tf/allow-shared-ip: elasticsearch
  loadBalancerIP: 10.20.30.40

nodeSelector:
  com.company.host.hostType: deployhosts

ingress:
  enabled: true
  path: /
  hosts:
    - elasticsearch.domain.secure
    - elasticsearch
  tls:
    - secretName: elasticsearch-cert
      hosts:
        - elasticsearch.domain.secure
        - elasticsearch

# Do not need a high successThreshold because lifecycle won't work unless service is listening
readinessProbe:
  successThreshold: 1

# Must be yellow or green to be ready (yellow meaning not all shards are assigned/active)
clusterHealthCheckParams: "wait_for_status=yellow&timeout=1s"


## POST START HOOK is where I use shell code to add secure_bind_password to the elasticsearch-keystore.
lifecycle:
  postStart:
    exec:
      command:
        - bash
        - -c
        - |
          #!/bin/bash
          # Update passwords based on what is in vault
          source /usr/share/elasticsearch/bin/elasticsearch-env-from-file
          CREDS="-u $ELASTIC_USERNAME:$ELASTIC_PASSWORD"

          ES_URL=http://localhost:9200
          while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $CREDS $ES_URL)" != "200" ]]; do sleep 2; done
          curl -k $CREDS -XPOST "${ES_URL}/_security/user/healthcheck" -H 'Content-Type: application/json' -d '{ "password":"'healthcheck'", "roles" : ["remote_monitoring_collector"] }'

          # Set bind_dn(ldap) password (for uid=elk-bind) as secure_bind_password in ElasticSearch Keystore using elk-bind secret
          # --------------------------
          # https://www.elastic.co/guide/en/elasticsearch/reference/current/ldap-realm.html#mapping-roles-ldap
          #
          if test -f "/vault/secrets/elk-bind"; then
            echo "Injecting bind_dn password as secure_bind_password (in ElasticSearch Keystore) using elk-bind secret (from Vault)"
            ELK_BIND_PASSWORD="$(cat /vault/secrets/elk-bind)"
            # --stdin reads the value from the pipe; --force overwrites without prompting
            echo "${ELK_BIND_PASSWORD}" | elasticsearch-keystore add --stdin --force xpack.security.authc.realms.ldap.ldap1.secure_bind_password
          fi

          # Create a different kibana user because the default one in 7.16.3 is broken (permissions issues)
          # May be able to be removed at a newer version (replace all instances of kibana_mgmr)
          if test -f "/vault/secrets/kibana"; then
            echo "Setting kibana credentials"
            KIB_PASSWORD=$(cat /vault/secrets/kibana)
            curl -k $CREDS -XPOST "${ES_URL}/_security/user/kibana_mgmr" -H 'Content-Type: application/json' -d '{ "password":"'$KIB_PASSWORD'", "roles" : ["kibana_system", "kibana_supp"] }'
          fi

          # Sleeping for a period of time before gathering indices to let elasticsearch load index names from filesystem
          # Currently do not know of an endpoint that we can use to check for index loading.
          sleep 60

          # IGNORE ----- all code below: some curl etc. commands.
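One thing worth noting about the postStart approach: Elasticsearch reads keystore secure settings at node startup, and the postStart hook runs only after the process has started, so a key added there may never be seen by the running node — which would match the "bind DN without a password" error shown later. A sketch of the extra step (hypothetical helper names; assumes the same ES_URL and credentials as in the hook; in 7.x the realm's secure_bind_password should be reloadable, otherwise a pod restart is needed):

```shell
# Hypothetical helpers: after `elasticsearch-keystore add`, ask the running
# node to re-read reloadable secure settings instead of waiting for a restart.
reload_url() {
  # $1 = ES base URL
  printf '%s/_nodes/reload_secure_settings' "$1"
}
reload_secure_settings() {
  # $1 = ES base URL, $2 = user:password
  curl -s -u "$2" -XPOST "$(reload_url "$1")"
}
# usage inside the postStart hook, right after the keystore add:
# reload_secure_settings "$ES_URL" "$ELASTIC_USERNAME:$ELASTIC_PASSWORD"
```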

The main / relevant part of the Kibana .yml looks like:

elasticsearchHosts: "http://elk-master:9200"
#... more stuff ...generic stuff

kibanaConfig:
  kibana.yml: |-
    server.name: kibana
    server.host: 0.0.0.0
    server.publicBaseUrl: https://kibana.domain.secure
    xpack.monitoring.ui.container.elasticsearch.enabled: true

    #elasticsearch.hosts: ["http://elk-master.logging:9200"]
    elasticsearch.hosts: ["http://elk-master:9200"]

    logging.root.level: debug
    elasticsearch.ssl.verificationMode: none

    #elasticsearch.ssl.verificationMode: certificate
    server.ssl.clientAuthentication: none


resources:
  requests:
    cpu: "1000m"
    memory: "2Gi"
  limits:
    cpu: "1000m"
    memory: "2Gi"

podSecurityContext:
  fsGroup: 22060

securityContext:
  runAsUser: 22060

serviceAccount: "kibana"

extraVolumes:
  - name: keystore
    emptyDir: {}

extraVolumeMounts:
  - name: keystore
    mountPath: /usr/share/kibana/config/kibana.keystore
    subPath: kibana.keystore

service:
  type: LoadBalancer
  loadBalancerIP: 10.20.30.40

ingress:
  enabled: true
  path: /
  hosts:
    - kibana.domain.secure
    - kibana
  tls:
    - secretName: kibana-cert
      hosts:
        - kibana.domain.secure
        - kibana

nodeSelector:
  com.company.host.hostType: deployhosts

When I log in using nklbobbyb (in the Kibana UI), I'm seeing the following in the elk-master-0 pod's log:

$ kubectl logs -n logging elk-master-0 -c elasticsearch -f | egrep -i "ldap|nklbobbyb|auth"

The error message is listed below:

{"type": "server", "timestamp": "2023-10-02T19:52:36,018Z", "level": "WARN", "component": "o.e.x.s.a.l.s.LdapUtils", "cluster.name": "elk_cluster", "node.name": "elk-master-0", "message": "Failed to obtain LDAP connection from pool - LDAPException(resultCode=89 (parameter error), diagnosticMessage='Simple bind operations are not allowed to contain a bind DN without a password.', ldapSDKVersion=4.0.8, revision=28812)", "cluster.uuid": "31tgxJzTSdGRbn9PbWDt7K", "node.id": "lOYvRY7zSNW7ZMwewL9Vzg"  }
{"type": "server", "timestamp": "2023-10-02T19:52:36,018Z", "level": "WARN", "component": "o.e.x.s.a.RealmsAuthenticator", "cluster.name": "elk_cluster", "node.name": "elk-master-0", "message": "Authentication to realm ldap1 failed - authenticate failed (Caused by LDAPException(resultCode=89 (parameter error), diagnosticMessage='Simple bind operations are not allowed to contain a bind DN without a password.', ldapSDKVersion=4.0.8, revision=28812))", "cluster.uuid": "31tgxJzTSdGRbn9PbWDt7K", "node.id": "lOYvRY7zSNW7ZMwewL9Vzg"  }

Kibana logs show:

$ kubectl logs -n logging kibana-kibana-5645ccf7dd-kx7jh -c kibana -f  | egrep -i "ldap|asangal|auth"

---
{"type":"log","@timestamp":"2023-10-02T21:22:32+00:00","tags":["info","plugins","security","authentication"],"pid":8,"message":"Authentication attempt failed: {\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"unable to authenticate user [nklbobbyb] for REST request [/_security/_authenticate]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}}],\"type\":\"security_exception\",\"reason\":\"unable to authenticate user [nklbobbyb] for REST request [/_security/_authenticate]\",\"header\":{\"WWW-Authenticate\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}},\"status\":401}"}
===

What could I be missing to get this simple LDAP setup working?
It's also very odd that when I log in to the Kibana UI as "elastic" (which has all the superuser powers), there is NO LDAP section under the Security section, i.e. no way for a user to see the LDAP settings in a web browser.
