Elasticsearch/Kibana deployment issues in an Anthos on-prem K8s cluster

Hi Team,

We have an Anthos on-prem K8s cluster where we need to deploy Elasticsearch and Kibana. I used the attached .yaml file, but it's not working.
I have created two DNS records, one for Elasticsearch and one for Kibana (elasticsearch.uat.domain.com and kibana.uat.domain.com), each pointing to a different registered IP, and we obtained a certificate covering both DNS names (the same certificate for both).
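For context, the certificate ends up in the cluster as Kubernetes TLS secrets that the manifests below reference (esuatcert / esuat-cert); such a secret is typically created along these lines (file names and namespace here are placeholders, not the exact commands we ran):

    kubectl create secret tls esuatcert \
      --cert=uat-wildcard.crt \
      --key=uat-wildcard.key \
      -n uat-elasticsearch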

Kind.elasticsearch.yaml

apiVersion: elasticsearch.k8s.elasic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.9.1
  http:
    service:
      spec:
        type: LoadBalancer
        loadBalancerIP: {ElasticLBIP}
        ports:
          - name: https
            port: 443
  nodeSets:
  - name: master-nodes
    count: 3
    config:
      node.roles: ["master"]
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          env:
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            - name: bootstrap.memory_lock
              value: 'true'
        volumes:
        - name: elasticsearch-tls-secret
          secret:
            secretName: esuatcert
        containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.9.1
          ports:
          - containerPort: 443
          volumeMounts:
          - name: elasticsearch-tls-secret
            mountPath: /usr/share/elasticsearch/config/ssl
            readOnly: true
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
              cpu: 500m
              memory: 1Gi
            limits:
              cpu: 500m
              memory: 1Gi
          storageClassName: standard
  - name: data-nodes
    count: 3
    config:
      node.roles: ["data"]
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          env:
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            - name: bootstrap.memory_lock
              value: 'true'
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
              cpu: 2000m
              memory: 4Gi
            limits:
              cpu: 2000m
              memory: 4Gi
          storageClassName: standard

kind.kibana.yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: elasticsearch
  namespace: uat-elasticsearch
spec:
  version: 8.9.1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
      certificate:
        secretName: esuat-cert
    service:
      spec:
        type: LoadBalancer
        loadBalancerIP: {KibanaLBIP}
        ports:
          - name: https
            port: 443
  count: 2
  elasticsearchRef:
    name: elasticsearch

Can someone help me figure out what mistakes we are making here?

Kind Regards,
Esakki

Hello @Sunile_Manjee, I saw in one of the other cases that you are an expert in ECK. Could you please help make this deployment a success? I think we messed up the YAMLs by referring to different examples, and I'm really not sure how to make this work. Any help would be much appreciated.

many thanks in advance.

Kind Regards,
Esakki

@Esakki, a couple of things I need from you in order to help debug this:

  1. Have you deployed the operator?
    kubectl get pods -n elastic-system
  2. Can you elaborate on what happens when you deploy the YAMLs? Do you see any pods initializing? Do you encounter any errors during deployment?
    kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=elasticsearch
  3. Have you set up a baseline working example without using custom DNS? I often do this before further customizing the YAMLs. ElasticKonductor provides great examples, and the cloud-on-k8s repo has numerous examples as well (see the baseline sketch after this list).
  4. Please post your YAMLs using the "preformatted text" option. Without it, it's challenging to test with your YAMLs.
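For reference, a minimal baseline along the lines of the ECK quickstart (names and sizes here are just examples) would look something like this; once it comes up green, you can layer in the custom DNS, certificates, and LoadBalancer settings:

    apiVersion: elasticsearch.k8s.elastic.co/v1
    kind: Elasticsearch
    metadata:
      name: quickstart
    spec:
      version: 8.9.1
      nodeSets:
      - name: default
        count: 1
        config:
          # keeps the test setup simple; avoids the vm.max_map_count sysctl requirement
          node.store.allow_mmap: false
    ---
    apiVersion: kibana.k8s.elastic.co/v1
    kind: Kibana
    metadata:
      name: quickstart
    spec:
      version: 8.9.1
      count: 1
      elasticsearchRef:
        name: quickstart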

@Sunile_Manjee Many thanks for your response here are the requested details,

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.9.1
  transport:
    tls:
      subjectAltNames:
      - dns: elasticsearch.appsuat.my.domain
  http:
    service:
      spec:
        type: LoadBalancer
        loadBalancerIP: {ElasticSearch LB-IP}
  nodeSets:
  - name: master-nodes
    count: 3
    config:
      node.roles: ["master"]
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          env:
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            - name: bootstrap.memory_lock
              value: 'true'
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
              cpu: 500m
              memory: 1Gi
            limits:
              cpu: 500m
              memory: 1Gi
          storageClassName: standard
  - name: data-nodes
    count: 3
    config:
      node.roles: ["data"]
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          env:
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            - name: bootstrap.memory_lock
              value: 'true'
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
              cpu: 2000m
              memory: 4Gi
            limits:
              cpu: 2000m
              memory: 4Gi
          storageClassName: standard
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: elasticsearch
spec:
  version: 8.9.1
  count: 2
  elasticsearchRef:
    name: elasticsearch
  http:
    service:
      spec:
        type: LoadBalancer
        loadBalancerIP: {Kibana LB-IP}
    tls:
      certificate:
        secretName: esuat-cert
  podTemplate:
    spec:
      containers:
        - name: kibana
          resources:
            requests:
              memory: 2Gi
              cpu: 1000m
            limits:
              memory: 2Gi
              cpu: 1000m
          env:
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g"

Kind Regards,
Esakki

In your Kibana manifest, why are you setting {Kibana LB-IP}? If you don't have a specific IP, you can remove this line altogether and Kubernetes will automatically assign one for you. Maybe a specific IP is what you intended.
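For example, a sketch of the Kibana http.service block without a pinned IP (targetPort 5601 assumes Kibana's default container port):

    http:
      service:
        spec:
          type: LoadBalancer
          ports:
            - name: https
              port: 443
              targetPort: 5601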

You also don't need

          env:
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g"

Kibana will take up half of the memory available to it on the pod.

Your pods are running, that's good. Are you able to reach Kibana via the LB service that was deployed, i.e. https://<elasticsearch-kb-http external IP>:443? Can you please verify that the LB service is serving the Kibana pods?

Fetch the details of the service to see which label selector it uses.
kubectl describe svc elasticsearch-kb-http

Then run
kubectl get pods -l key=value
Replace key=value with the actual label selector you found in the previous step. If there are multiple labels, you can comma-separate them like -l key1=value1,key2=value2.

This command will show you the list of pods that the elasticsearch-kb-http service is targeting.
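For an ECK-managed Kibana the selector is usually built from labels such as common.k8s.elastic.co/type and kibana.k8s.elastic.co/name, so (assuming your Kibana resource is named elasticsearch) the check would look roughly like this; do verify the exact labels against the describe output above:

    kubectl get pods -l common.k8s.elastic.co/type=kibana,kibana.k8s.elastic.co/name=elasticsearch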

Can you please elaborate on what issue you are experiencing? I am guessing that you are not able to reach Kibana. If that is accurate, what exactly is the error?

@Sunile_Manjee thanks for your response, please find the details below.

  1. In your Kibana manifest, why are you using {Kibana LB-IP}? - Yes, I have a separate load balancer IP that I have configured in the Kibana svc, with type: LoadBalancer and that IP.

  2. I'm unable to reach https://<elasticsearch-kb-http external IP>:443 from the browser or locally via curl; here I replaced the external IP with my Kibana load balancer IP, the one referred to in point #1 above.

  3. kubectl describe svc elasticsearch-svc

  4. kubectl describe svc kibana-svc (here the LB IP is different from the one used in the Elasticsearch svc)

  5. The Kibana pods that the elasticsearch-kb-http service is targeting.

Error Details:
Kibana is not working because Elasticsearch is not running and not reachable. So it looks like I'm missing something related to my certificate config in the elasticsearch.yaml file?
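For reference, the overall resource status (health, phase, node counts) can be checked with the standard ECK status commands (namespace shown only as an example; adjust to wherever the resources are deployed):

    kubectl get elasticsearch -n uat-elasticsearch
    kubectl get kibana -n uat-elasticsearch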

Elasticsearch (screenshot attached)

Kibana (screenshot attached)

What we are trying to do is deploy 3 master nodes, 3 data nodes, and 2 Kibana nodes in the K8s cluster.

And we want to use the custom DNS names created for both ES and Kibana, and they should be served on port 443.

Kind Regards,
Esakki

@Sunile_Manjee Here is the latest status of my deployment,

Elasticsearch (screenshot attached)

Kibana (screenshot attached)

I'm not sure how to resolve that cert issue in Elasticsearch; I tried a few suggestions, but it's still not working.

Kind Regards,
Esakki

@Sunile_Manjee here is the last Elasticsearch.yaml that I have used.

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.9.1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
      certificate:
        secretName: esuatcert
    service:
      spec:
        type: LoadBalancer
        loadBalancerIP: 100.12.2.12
        ports:
          - name: https
            port: 443
  nodeSets:
  - name: master-nodes
    count: 3
    config:
      node.roles: ["master"]
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          env:
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            - name: bootstrap.memory_lock
              value: 'true'
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
              cpu: 500m
              memory: 1Gi
            limits:
              cpu: 500m
              memory: 1Gi
          storageClassName: standard
  - name: data-nodes
    count: 3
    config:
      node.roles: ["data"]
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
          env:
            - name: ES_JAVA_OPTS
              value: -Xms2g -Xmx2g
            - name: bootstrap.memory_lock
              value: 'true'
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
              cpu: 2000m
              memory: 4Gi
            limits:
              cpu: 2000m
              memory: 4Gi
          storageClassName: standard

Kind Regards,
Esakki

@Sunile_Manjee Could you please help what I'm missing here?

Regards,
Esakki

@Sunile_Manjee I can see that my cert itself is not being picked up by the nodes/pods.

It's serving only the default Elasticsearch certs.

I just noticed this in the browser while viewing the cert:

Common Name (CN) elasticsearch-http
Organization (O)
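For reference, which certificate the load balancer is actually serving can be checked from outside the cluster with something like this (the IP and hostname are placeholders for my setup):

    openssl s_client -connect <Elasticsearch-LB-IP>:443 -servername elasticsearch.uat.domain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer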

Kind Regards,
Esakki

Hi @Sunile_Manjee, I'm still facing the same certificate issue. In the meantime, as per my attached ES .yaml file I have 3 master and 3 data nodes; my client is asking for client requests to be routed only to the data nodes/pods, not to the master nodes. How can I achieve this? Is it possible?

Kind Regards,
Esakki

@Wes_Plunk Hello Sir, I came to know that you have a lot of experience with Elasticsearch (I just went through one of the posts you raised in 2013). Would it be possible for you to help with the above question, please?

Kind Regards,
Esakki

Does your Kibana manifest include

    selfSignedCertificate:
      disabled: true 

@Sunile_Manjee Hi Sunil, Thank you for your response.

Yes, I have it.

  http:
    tls:
      selfSignedCertificate:
        disabled: true
      certificate:
        secretName: esuat-cert

Also, my client wants only master and data nodes/pods, and requests should go only to the data nodes, not to the master nodes. How do I achieve this? Right now, when I hit my Elasticsearch URL (DNS) in the browser, requests are routed to either master or data nodes.

Kind Regards,
Esakki

Let me try to track your latest status: you are able to reach ES via the API, and Kibana as well. However, when you use your custom certs, you are not able to reach Kibana. Is that accurate? Please provide the full Kibana YAML, and please take a look at the Kibana logs to view errors.
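For example, the Kibana pod logs can be pulled via the ECK labels (assuming the Kibana resource is named elasticsearch in namespace uat-elasticsearch, as in your earlier YAML):

    kubectl logs -n uat-elasticsearch -l kibana.k8s.elastic.co/name=elasticsearch --tail=200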

@Sunile_Manjee here is the Kind-kibana.yaml that I have used.

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: elasticsearch
  namespace: uat-elasticsearch
spec:
  version: 8.9.1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
      certificate:
        secretName: esuat-cert
    service:
      spec:
        type: LoadBalancer
        loadBalancerIP: 10.12.2.14
        ports:
          - name: https
            port: 443
  count: 2
  elasticsearchRef:
    name: elasticsearch

Attaching screenshots for your reference (I'm new to Elasticsearch; maybe I'm making some simple mistake in the config, but I'm not sure what is wrong here).
Kibana - working fine.

Elasticsearch - taking only the local certificate.

So, both the Elasticsearch and Kibana deployments are working; the only issue is the "Not Secure" certificate error in Elasticsearch, as highlighted in the screenshot above.

And I'm using the same certificate for both Kibana and Elasticsearch, but the load balancer IPs and the secrets are different.

Kind Regards,
Esakki

@Sunile_Manjee Any suggestions, please? It's been a while and the client is chasing this :frowning:

Regards,
Esakki

@Sunile_Manjee I have resolved the certificate issue: when I used different certs for Elasticsearch and Kibana, the cert issue got resolved. Now, when I launch the Kibana URL it's not working; it's unable to reach/communicate with Elasticsearch. What config changes do I have to make in kibana.yaml? Can you give some sample code?

Kind Regards,
Esakki

@Sunile_Manjee here is my latest kibana.yaml

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: elasticsearch
  namespace: uat-elasticsearch
spec:
  version: 8.9.1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
      certificate:
        secretName: esuat-cert
    service:
      spec:
        type: LoadBalancer
        loadBalancerIP: 10.1.2.1
        ports:
          - name: https
            port: 443
            protocol: TCP
            targetPort: 5601
  count: 2
  config:
    elasticsearch.hosts:
    - https://elasticsearch.appsuat.mydomain
    elasticsearch.ssl.certificate: /usr/share/kibana/config/elasticsearch-certs/tls.crt
  elasticsearchRef:
    name: elasticsearch

@Sunile_Manjee I kept different certs for Elasticsearch and Kibana, and that resolved the issue.