Kibana Not Able to Connect to Elasticsearch Master in Kubernetes (elastic-helm Chart)

I am installing Elasticsearch with X-Pack security on Kubernetes, using the Helm chart from Elastic.

But Kibana is not able to communicate with the Elasticsearch cluster:

{"type":"log","@timestamp":"2019-06-14T05:07:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:41Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:41Z","tags":["warning","task_manager"],"pid":1,"message":"PollError No Living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:43Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:43Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:44Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:44Z","tags":["warning","task_manager"],"pid":1,"message":"PollError No Living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:45Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:45Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:47Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:47Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:47Z","tags":["warning","task_manager"],"pid":1,"message":"PollError No Living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:48Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:48Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:50Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:50Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:50Z","tags":["warning","task_manager"],"pid":1,"message":"PollError No Living connections"}
{"type":"log","@timestamp":"2019-06-14T05:07:50Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://elasticsearch-master:9200/"}
{"type":"log","@timestamp":"2019-06-14T05:07:50Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}

I tried with both https and http, but to no avail.

The configuration for Kibana in the values.yml file is as follows:

elasticsearchHosts: "https://elasticsearch-master:9200"
protocol: https
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
kibanaConfig:
  kibana.yml: |
    server.host: "0.0.0.0"
extraEnvs:
  - name: 'ELASTICSEARCH_USERNAME'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
  - name: 'ELASTICSEARCH_PASSWORD'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
image: "docker.elastic.co/kibana/kibana"
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/kibana/config/certs
imageTag: "6.8.0"
imagePullPolicy: "IfNotPresent"
resources:
  limits:
    cpu: 500m
    memory: 2048Mi
  requests:
    cpu: 300m
    memory: 1024Mi
persistentVolumeClaim:
  storageClass: rkcph-flinkes-kibana
  size: 5Gi
service:
  externalPort: 5601

I tried with both http and https, but to no avail. What am I missing here?

Hi!

It seems like you are mounting your certificates into the Kibana container but you aren't referencing them anywhere in your kibana.yml configuration. If you take a look at the Kibana security example you will see what is missing.
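For reference, that security example configures Kibana roughly like this (a sketch, not a drop-in fix: it assumes the CA has already been extracted from the PKCS#12 bundle into a PEM file named elastic-certificate.pem and stored in the mounted secret — the file name and paths are assumptions):

```yaml
elasticsearchHosts: "https://elasticsearch-master:9200"
protocol: https
kibanaConfig:
  kibana.yml: |
    server.host: "0.0.0.0"
    # Point Kibana at the CA that signed the Elasticsearch HTTP certificate,
    # so the https connection to elasticsearch-master can be verified.
    elasticsearch.ssl:
      certificateAuthorities: /usr/share/kibana/config/certs/elastic-certificate.pem
      verificationMode: certificate
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/kibana/config/certs
```

Without `elasticsearch.ssl.certificateAuthorities`, Kibana has no way to trust the self-signed certificate that Elasticsearch presents, which matches the "No living connections" symptom.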

Could you also give me the output of helm get kibana and helm get elasticsearch (or whatever your release names are)? And can you make sure to format everything as code blocks to preserve the indentation too? Then it will look like this:

resources:
  limits:
    cpu: 500m
    memory: 2048Mi
  requests:
    cpu: 300m
    memory: 1024Mi

Could you also test whether or not you can connect to Elasticsearch from the Kibana container? You can run this command in your container:

curl -k -u $ELASTICSEARCH_USERNAME:$ELASTICSEARCH_PASSWORD https://elasticsearch-master:9200
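If that curl succeeds, note that `-k` skips certificate verification, so Kibana can still fail on TLS validation. A sketch for retrying with real verification, by extracting a PEM CA from the PKCS#12 bundle first (the mount path and the empty keystore password are assumptions based on the chart's security example):

```shell
# Assumed mount path and empty PKCS#12 password -- adjust to your setup.
# Extract all certificates (no private keys) from the bundle into PEM form:
openssl pkcs12 -in /usr/share/kibana/config/certs/elastic-certificates.p12 \
  -nokeys -passin pass: -out /tmp/elastic-ca.pem

# Retry the request with certificate verification instead of -k:
curl --cacert /tmp/elastic-ca.pem \
  -u "$ELASTICSEARCH_USERNAME:$ELASTICSEARCH_PASSWORD" \
  https://elasticsearch-master:9200
```

If the `-k` request works but the `--cacert` one fails, the problem is certificate trust rather than networking or credentials.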
---
clusterName: "clustername"
nodeGroup: "master"
replicas: 3
roles:
  master: "true"
  ingest: "true"
  data: "false"
minimumMasterNodes: 2
esMajorVersion: 6
protocol: https
esConfig:
  elasticsearch.yml: |
    xpack.security.audit.enabled: true
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
image: "docker.elastic.co/elasticsearch/elasticsearch"
imageTag: "6.8.0"
imagePullPolicy: "IfNotPresent"
resources:
  limits:
    cpu: "2"
    memory: "6G"
  requests:
    cpu: "1"
    memory: "4G"
volumeClaimTemplate:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "rkcph-flinkes-master"
  resources:
    requests:
      storage: 30Gi
persistence:
  enabled: true

helm get esmaster

REVISION: 1
RELEASED: Tue Jul  2 19:35:35 2019
CHART: elasticsearch-7.1.1
USER-SUPPLIED VALUES:
clusterName: clustername
esConfig:
  elasticsearch.yml: |
    xpack.security.audit.enabled: true
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
esMajorVersion: 6
extraEnvs:
- name: ELASTIC_PASSWORD
  valueFrom:
    secretKeyRef:
      key: password
      name: elastic-credentials
- name: ELASTIC_USERNAME
  valueFrom:
    secretKeyRef:
      key: username
      name: elastic-credentials
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imageTag: 6.8.0
minimumMasterNodes: 2
nodeGroup: master
persistence:
  enabled: true
protocol: https
replicas: 3
resources:
  limits:
    cpu: "2"
    memory: 6G
  requests:
    cpu: "1"
    memory: 4G
roles:
  data: "false"
  ingest: "true"
  master: "true"
secretMounts:
- name: elastic-certificates
  path: /usr/share/elasticsearch/config/certs
  secretName: elastic-certificates
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: rkcph-flinkes-master

COMPUTED VALUES:
antiAffinity: hard
antiAffinityTopologyKey: kubernetes.io/hostname
clusterHealthCheckParams: wait_for_status=green&timeout=1s
clusterName: clustername
esConfig:
  elasticsearch.yml: |
    xpack.security.audit.enabled: true
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
esJavaOpts: -Xmx1g -Xms1g
esMajorVersion: 6
extraEnvs:
- name: ELASTIC_PASSWORD
  valueFrom:
    secretKeyRef:
      key: password
      name: elastic-credentials
- name: ELASTIC_USERNAME
  valueFrom:
    secretKeyRef:
      key: username
      name: elastic-credentials
extraInitContainers: []
extraVolumeMounts: []
extraVolumes: []
fsGroup: 1000
fullnameOverride: ""
httpPort: 9200
image: docker.elastic.co/elasticsearch/elasticsearch
imagePullPolicy: IfNotPresent
imagePullSecrets: []
imageTag: 6.8.0
ingress:
  annotations: {}
  enabled: false
  hosts:
  - chart-example.local
  path: /
  tls: []
initResources: {}
masterService: ""
maxUnavailable: 1
minimumMasterNodes: 2
nameOverride: ""
networkHost: 0.0.0.0
nodeAffinity: {}
nodeGroup: master
nodeSelector: {}
persistence:
  annotations: {}
  enabled: true
podAnnotations: {}
podManagementPolicy: Parallel
priorityClassName: ""
protocol: https
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
replicas: 3
resources:
  limits:
    cpu: "2"
    memory: 6G
  requests:
    cpu: "1"
    memory: 4G
roles:
  data: "false"
  ingest: "true"
  master: "true"
secretMounts:
- name: elastic-certificates
  path: /usr/share/elasticsearch/config/certs
  secretName: elastic-certificates
sysctlVmMaxMapCount: 262144
terminationGracePeriod: 120
tolerations: []
transportPort: 9300
updateStrategy: RollingUpdate
volumeClaimTemplate:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: rkcph-flinkes-master
HOOKS:
    ---
    # esmaster-evvyb-test
    apiVersion: v1
    kind: Pod
    metadata:
      name: "esmaster-evvyb-test"
      annotations:
        "helm.sh/hook": test-success
    spec:
      containers:
      - name: "esmaster-qnhll-test"
        image: "docker.elastic.co/elasticsearch/elasticsearch:6.8.0"
        command:
          - "sh"
          - "-c"
          - |
            #!/usr/bin/env bash -e
            curl -XGET --fail 'clustername-master:9200/_cluster/health?wait_for_status=green&timeout=1s'
      restartPolicy: Never
    MANIFEST:

    ---
    # Source: elasticsearch/templates/configmap.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: clustername-master-config
      labels:
        heritage: "Tiller"
        release: "esmaster"
        chart: "elasticsearch-7.1.1"
        app: "clustername-master"
    data:
      elasticsearch.yml: |
        xpack.security.audit.enabled: true
        xpack.security.enabled: true
        xpack.security.transport.ssl.enabled: true
        xpack.security.transport.ssl.verification_mode: certificate
        xpack.security.transport.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
        xpack.security.transport.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
        xpack.security.http.ssl.enabled: true
        xpack.security.http.ssl.truststore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
        xpack.security.http.ssl.keystore.path: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
    ---
    # Source: elasticsearch/templates/service.yaml
    kind: Service
    apiVersion: v1
    metadata:
      name: clustername-master
    spec:
      selector:
        heritage: "Tiller"
        release: "esmaster"
        chart: "elasticsearch-7.1.1"
        app: "clustername-master"
      ports:
      - name: http
        protocol: TCP
        port: 9200
      - name: transport
        protocol: TCP
        port: 9300
    ---
    # Source: elasticsearch/templates/service.yaml
    kind: Service
    apiVersion: v1
    metadata:
      name: clustername-master-headless
      labels:
        heritage: "Tiller"
        release: "esmaster"
        chart: "elasticsearch-7.1.1"
        app: "clustername-master"
      annotations:
        # Create endpoints also if the related pod isn't ready
        service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
    spec:
      clusterIP: None # This is needed for statefulset hostnames like elasticsearch-0 to resolve
      selector:
        app: "clustername-master"
      ports:
      - name: http
        port: 9200
      - name: transport
        port: 9300
    ---
  

  # Source: elasticsearch/templates/statefulset.yaml
    apiVersion: apps/v1beta1
    kind: StatefulSet
    metadata:
      name: clustername-master
      labels:
        heritage: "Tiller"
        release: "esmaster"
        chart: "elasticsearch-7.1.1"
        app: "clustername-master"
    spec:
      serviceName: clustername-master-headless
      selector:
        matchLabels:
          app: "clustername-master"
      replicas: 3
      podManagementPolicy: Parallel
      updateStrategy:
        type: RollingUpdate
      volumeClaimTemplates:
      - metadata:
          name: clustername-master
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 30Gi
          storageClassName: rkcph-flinkes-master

      template:
        metadata:
          name: "clustername-master"
          labels:
            heritage: "Tiller"
            release: "esmaster"
            chart: "elasticsearch-7.1.1"
            app: "clustername-master"
          annotations:
            configchecksum: 019c7183b5d65c4b883e7ce0d2a8a85a3c5576de6c711a4c324899e727c3dd6
        spec:
          securityContext:
            fsGroup: 1000
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: app
                    operator: In
                    values:
                    - "clustername-master"
                topologyKey: kubernetes.io/hostname
          terminationGracePeriodSeconds: 120
          volumes:
            - name: elastic-certificates
              secret:
                secretName: elastic-certificates
            - name: esconfig
              configMap:
                name: clustername-master-config
          initContainers:
          - name: configure-sysctl
            securityContext:
              runAsUser: 0
              privileged: true
            image: "docker.elastic.co/elasticsearch/elasticsearch:6.8.0"
            command: ["sysctl", "-w", "vm.max_map_count=262144"]
            resources:
              {}

          containers:
          - name: "elasticsearch"
            image: "docker.elastic.co/elasticsearch/elasticsearch:6.8.0"
            imagePullPolicy: "IfNotPresent"
            readinessProbe:
              failureThreshold: 3
              initialDelaySeconds: 10
              periodSeconds: 10
              successThreshold: 3
              timeoutSeconds: 5
              exec:
                command:
                - sh
                - -c
                - |
                #!/usr/bin/env bash -e
                # If the node is starting up wait for the cluster to be ready (request params: 'wait_for_status=green&timeout=1s' )
                # Once it has started only check that the node itself is responding
                START_FILE=/tmp/.es_start_file

                http () {
                    local path="${1}"
                    if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                      BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                    else
                      BASIC_AUTH=''
                    fi
                    curl -XGET -s -k --fail ${BASIC_AUTH} https://127.0.0.1:9200${path}
                }

                if [ -f "${START_FILE}" ]; then
                    echo 'Elasticsearch is already running, lets check the node is healthy'
                    http "/"
                else
                    echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "wait_for_status=green&timeout=1s" )'
                    if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
                        touch ${START_FILE}
                        exit 0
                    else
                        echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
                        exit 1
                    fi
                fi
        ports:
        - name: http
          containerPort: 9200
        - name: transport
          containerPort: 9300
        resources:
          limits:
            cpu: "2"
            memory: 6G
          requests:
            cpu: "1"
            memory: 4G

        env:
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.zen.minimum_master_nodes
            value: "2"
          - name: discovery.zen.ping.unicast.hosts
            value: "clustername-master-headless"
          - name: cluster.name
            value: "clustername"
          - name: network.host
            value: "0.0.0.0"
          - name: ES_JAVA_OPTS
            value: "-Xmx1g -Xms1g"
          - name: node.data
            value: "false"
          - name: node.ingest
            value: "true"
          - name: node.master
            value: "true"
          - name: ELASTIC_PASSWORD
            valueFrom:
              secretKeyRef:
                key: password
                name: elastic-credentials
          - name: ELASTIC_USERNAME
            valueFrom:
              secretKeyRef:
                key: username
                name: elastic-credentials

        volumeMounts:
          - name: "clustername-master"
            mountPath: /usr/share/elasticsearch/data
          - name: elastic-certificates
            mountPath: /usr/share/elasticsearch/config/certs
          - name: esconfig
            mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
            subPath: elasticsearch.yml
      # This sidecar will prevent slow master re-election
      # https://github.com/elastic/helm-charts/issues/63
      - name: elasticsearch-master-graceful-termination-handler
        image: "docker.elastic.co/elasticsearch/elasticsearch:6.8.0"
        imagePullPolicy: "IfNotPresent"
        command:
        - "sh"
        - -c
        - |
          #!/usr/bin/env bash
          set -eo pipefail

          http () {
              local path="${1}"
              if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
              else
                BASIC_AUTH=''
              fi
              curl -XGET -s -k --fail ${BASIC_AUTH} https://clustername-master:9200${path}
          }

          cleanup () {
            while true ; do
              local master="$(http "/_cat/master?h=node")"
              if [[ $master == "clustername-master"* && $master != "${NODE_NAME}" ]]; then
                echo "This node is not master."
                break
              fi
              echo "This node is still master, waiting gracefully for it to step down"
              sleep 1
            done

            exit 0
          }

          trap cleanup SIGTERM

          sleep infinity &
          wait $!
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: ELASTIC_PASSWORD
            valueFrom:
              secretKeyRef:
                key: password
                name: elastic-credentials
          - name: ELASTIC_USERNAME
            valueFrom:
              secretKeyRef:
                key: username
                name: elastic-credentials
---
# Source: elasticsearch/templates/poddisruptionbudget.yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: "clustername-master-pdb"
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: "clustername-master"

Sorry for adding multiple posts.

I have added this file, with all the output you requested, but I am still getting the same error.

So I have drilled down with the following steps.

I removed https from Elasticsearch so that only transport TLS was enabled. When I then started Kibana against Elasticsearch over http with ssl: false, it was able to communicate.

I think there is some issue with https.

I am going to recreate the steps and share all possible iterations.
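One way to confirm this is a certificate-trust problem rather than a networking one, while keeping https enabled on Elasticsearch, is a diagnostic-only Kibana setting (a sketch; not something to leave in place permanently):

```yaml
kibanaConfig:
  kibana.yml: |
    server.host: "0.0.0.0"
    # Diagnostic only: accept the server certificate without validation.
    # If Kibana connects over https with this, the missing/wrong CA
    # configuration is the real issue.
    elasticsearch.ssl.verificationMode: none
```

If this works over https, the permanent fix is pointing `elasticsearch.ssl.certificateAuthorities` at the CA that signed the Elasticsearch HTTP certificate.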

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.