Unable to connect to remote cluster

Hi Team,
We are trying to configure a remote cluster with ECK and are facing the issue below.

cluster1 (Master) - Managed by ECK which has es, kibana & fluentd
cluster2 (Remote) - Managed by ECK which has es & fluentd

  1. In cluster1, applied eck.yml and configured fluentd. After this step, we are able to list indices with curl -u elastic:XXXXXXXXX -k "https://elasticsearch-es-http:9200/_cat/indices" (a health-check sketch follows eck.yml below).
eck.yml:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: es-test
  labels:
    app: elasticsearch
spec:
  version: 7.9.3
  nodeSets:
  - name: default
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
  http:
    service:
      spec:
        type: LoadBalancer
        ports:
          - port: 9200
            targetPort: 9200
            protocol: TCP
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
  namespace: es-test
spec:
  version: 7.9.3
  count: 1
  elasticsearchRef:
    name: elasticsearch
  http:
    tls:
      selfSignedCertificate:
        disabled: true
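
For reference, a minimal sketch of how step 1 can be verified, assuming the default ECK secret name elasticsearch-es-elastic-user and running from inside the cluster (or via kubectl port-forward):

# Retrieve the auto-generated password for the elastic user
PASSWORD=$(kubectl get secret elasticsearch-es-elastic-user -n es-test -o go-template='{{.data.elastic | base64decode}}')

# Query cluster health through the HTTP service; -k because the certificate is self-signed
curl -u "elastic:$PASSWORD" -k "https://elasticsearch-es-http.es-test.svc:9200/_cluster/health?pretty"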
  2. In cluster2, the same step 1 is followed, except Kibana is not deployed.
  3. In cluster2, we have exposed a service with the transport port 9300 (a connectivity check is sketched after es-svc.yml below).
es-svc.yml:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: "es-test"
  labels:
    app: elasticsearch
spec:
  selector:
    common.k8s.elastic.co/type: elasticsearch
    elasticsearch.k8s.elastic.co/cluster-name: elasticsearch
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
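
To sanity-check that the transport port is reachable through this service, a quick probe from a throwaway pod can be run (the pod name here is arbitrary):

# Check TCP reachability of the transport port from inside cluster2
kubectl run -it --rm transport-check --image=busybox -n es-test --restart=Never -- nc -vz elasticsearch-logging.es-test.svc.cluster.local 9300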
cluster1:
[root@k8s-master01 eck]# kubectl get po -n es-test
NAME                         READY   STATUS             RESTARTS   AGE
elasticsearch-es-default-0   1/1     Running            0          97m
elasticsearch-es-default-1   1/1     Running            0          98m
elasticsearch-es-default-2   1/1     Running            0          100m
fluentd-8bb4q                1/1     Running            0          150m
fluentd-h8j6m                1/1     Running            0          150m
fluentd-h8lst                1/1     Running            0          150m
fluentd-p9j2g                1/1     Running            0          150m
kibana-kb-55c7584fd6-r62lc   1/1     Running            0          107m
[root@k8s-master01 eck]# kubectl get svc -n es-test
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
elasticsearch-es-default   ClusterIP      None            <none>        <none>              5h5m
elasticsearch-es-http      LoadBalancer   x.x.x.x         <pending>     9200:31300/TCP      5h5m
kibana-kb-http             ClusterIP      x.x.x.x         <none>        5601/TCP            107m
cluster2:
[root@k8s-master01 eck]# kubectl get po -n es-test
NAME                         READY   STATUS             RESTARTS   AGE
elasticsearch-es-default-0   1/1     Running            0          97m
elasticsearch-es-default-1   1/1     Running            0          98m
elasticsearch-es-default-2   1/1     Running            0          100m
fluentd-8bb4q                1/1     Running            0          150m
fluentd-h8j6m                1/1     Running            0          150m
fluentd-h8lst                1/1     Running            0          150m
fluentd-p9j2g                1/1     Running            0          150m
[root@k8s-master01 eck]# kubectl get svc -n es-test
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
elasticsearch-es-default   ClusterIP      None            <none>        <none>              5h5m
elasticsearch-es-http      LoadBalancer   x.x.x.x         <pending>     9200:31300/TCP      5h5m
elasticsearch-logging      ClusterIP      x.x.x.x         <none>        9200/TCP,9300/TCP   9h
kibana-kb-http             ClusterIP      x.x.x.x         <none>        5601/TCP            107m
  4. Copied remote.ca.crt from cluster2 to cluster1 and created a secret in cluster1:
kubectl get secret elasticsearch-es-transport-certs-public -n es-test -o go-template='{{index .data "ca.crt" | base64decode}}' > remote.ca.crt

kubectl create secret generic remote-certs --from-file=remote.ca.crt -n es-test

-- The same has been done in the other direction, from cluster1 to cluster2.
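
To double-check that the right CA was exported, the file can be inspected before creating the secret:

# Show subject, issuer and validity window of the exported transport CA
openssl x509 -in remote.ca.crt -noout -subject -issuer -dates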

  5. In both clusters, updated the Elasticsearch spec to trust remote.ca.crt (a quick mount check is sketched after the spec below):
  nodeSets:
  - config:
      xpack.security.transport.ssl.certificate_authorities:
      - /usr/share/elasticsearch/config/other/remote.ca.crt
    name: default
    count: 3
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          volumeMounts:
          - mountPath: /usr/share/elasticsearch/config/other
            name: remote-certs
        volumes:
        - name: remote-certs
          secret:
            secretName: remote-certs
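
After the nodes restart, a quick way to confirm the CA file is actually mounted where the setting points (using one of the data pods; elasticsearch is the default container name in ECK pods):

# Confirm remote.ca.crt is present inside the running container
kubectl exec -n es-test elasticsearch-es-default-0 -c elasticsearch -- ls -l /usr/share/elasticsearch/config/other/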
  6. In cluster2, created a virtual service for elasticsearch-logging.es-test:9300 with node IP 1.1.1.1:9300 (an illustrative TCP-route sketch follows the settings below).

  7. In cluster1's Kibana Dev Tools, applied the remote cluster settings with PUT _cluster/settings:

{
  "persistent": {
    "cluster": {
      "remote": {
        "cluster-test": {
          "mode": "proxy",
          "proxy_address": "1.1.1.1:9300"
        }
      }
    }
  }
}
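
For completeness, a rough sketch of the kind of TCP route the virtual service in step 6 is meant to provide, assuming Istio is the ingress in use; all names below are illustrative, not our exact manifests:

es-transport.yml (illustrative):
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: es-transport-gateway
  namespace: es-test
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 9300              # transport port, plain TCP so TLS is not terminated at the gateway
      name: tcp-es-transport
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: es-transport
  namespace: es-test
spec:
  hosts:
  - "*"
  gateways:
  - es-transport-gateway
  tcp:
  - match:
    - port: 9300
    route:
    - destination:
        host: elasticsearch-logging.es-test.svc.cluster.local
        port:
          number: 9300

Keeping the protocol as plain TCP avoids terminating TLS at the ingress, so the Elasticsearch transport certificates are presented end to end (the istio-ingressgateway service itself also has to expose port 9300 for this to work). Once the persistent settings are applied, GET _remote/info in Dev Tools should report the cluster-test remote as connected.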

We are getting the errors below in cluster1 and cluster2:

ES logs in cluster1:
A Java exception reporting "signature check failed" and "PKIX path validation failed".

ES logs in cluster2:
{"type": "server", "timestamp": "2021-04-21T06:36:16,892Z", "level": "WARN", "component": "o.e.x.c.s.t.n.SecurityNetty4Transport", "cluster.name": "elasticsearch", "node.name": "elasticsearch-es-default-2", "message": "client did not trust this server's certificate, closing connection Netty4TcpChannel{localAddress=/x.x.x.x:9300, remoteAddress=/x.x.x.x:21989}", "cluster.uuid": "cEJL5C68Sqy7ZhT5yh3VSA", "node.id": "fBD_2ISDS0CIsDbzGXxMpw"  }

Is this a duplicate of "Remote cluster - TCP connection is not happened with Istio ingress", or have you not configured Istio here?

Can you check the certificate of the remote node? I suspect that while the CA may be good, the common name or subject alternative name doesn't match the IP you're using, which is why you're getting the certificate validation error.
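
For example, something along these lines can show what the remote transport endpoint actually presents, assuming 1.1.1.1:9300 is reachable from wherever you run it:

# Print the subject and any SAN entries (DNS names / IPs) of the cert served on the transport port
echo | openssl s_client -connect 1.1.1.1:9300 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -E "Subject:|DNS:|IP Address:"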
