Elasticsearch mTLS connection to S3 (MinIO-based) to store snapshots

Hi All,

I want to store snapshots in S3 storage that requires mutual authentication (mTLS). (Note: the S3 service is MinIO-based.)

I have client.crt, client.key, and root.crt on the client side.
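
For reference, the manifest below mounts these files from a Kubernetes secret. A minimal sketch of how that secret could be created, assuming the key names the init containers expect (client-crt.pem, client-key.pem, ca-crt.pem) map onto the local PEM files:

# secret name (gaja) and namespace (test) match the manifest below
kubectl create secret generic gaja --namespace test \
  --from-file=client-crt.pem=client.crt \
  --from-file=client-key.pem=client.key \
  --from-file=ca-crt.pem=root.crt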

I tried these steps:

apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: eck
  namespace: test
spec:
  version: 7.2.0
  nodeSets:
  - count: 1
    name: master
    config:
      xpack.security.authc.realms: &xpack-realms
        file.file1:
          order: 0
        native.native1:
          order: 1
      node.master: true
      node.data: true
      node.ingest: true
      node.ml: false
      cluster.remote.connect: false
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        - name: pem-to-keystore
          env:
          - name: keyfile
            value: /var/run/secrets/certs/client-key.pem
          - name: crtfile
            value: /var/run/secrets/certs/client-crt.pem
          - name: keystore_pkcs12
            value: /var/run/secrets/keystore/keystore.pkcs
          - name: keystore_jks
            value: /var/run/secrets/keystore/keystore.jks
          - name: password
            value: changeit
          - name: http_proxy
            value: http://192.30.48.24:8080
          - name: https_proxy
            value: http://192.30.48.24:8080
          command: ['/bin/bash']
          args: ['-c', "yum install -y openssl && openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore_pkcs12 -password pass:$password &&  /usr/share/elasticsearch/jdk/bin/keytool -importkeystore -noprompt -srckeystore $keystore_pkcs12 -srcstoretype pkcs12 -destkeystore $keystore_jks -storepass $password -srcstorepass $password"]
          volumeMounts:
           - name: keystore-volume
             mountPath: /var/run/secrets/keystore
           - name: s3-client-certs
             mountPath: /var/run/secrets/certs
        - name: pem-to-truststore
          env:
          - name: truststore_jks
            value: /var/run/secrets/keystore/truststore.jks
          - name: cafile
            value: /var/run/secrets/certs/ca-crt.pem
          - name: password
            value: changeit
          command: ['/bin/bash']
          args: ['-c', "/usr/share/elasticsearch/jdk/bin/keytool -import -alias mycert -file $cafile -keystore $truststore_jks -deststorepass $password -noprompt "]
          volumeMounts:
           - name: keystore-volume
             mountPath: /var/run/secrets/keystore
           - name: s3-client-certs
             mountPath: /var/run/secrets/certs
        - name: install-plugins
          env:
           - name: ES_JAVA_OPTS
             value: -Dhttp.proxyHost=192.30.48.24 -Dhttp.proxyPort=8080 -Dhttps.proxyHost=192.30.48.24 -Dhttps.proxyPort=8080
          command:
          - sh
          - -c
          - |
            bin/elasticsearch-plugin install --batch  repository-s3
        containers:
        - name: elasticsearch
          env:
           - name: ES_JAVA_OPTS
             value: -Xms1g -Xmx1g  -Dhttp.proxyHost=192.30.48.24 -Dhttp.proxyPort=8080 -Dhttps.proxyHost=192.30.48.24 -Dhttps.proxyPort=8080  -Djavax.net.ssl.trustStore=/var/run/secrets/keystore/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit  -Djavax.net.ssl.trustStoreType=jks -Djavax.net.ssl.keyStore=/var/run/secrets/keystore/keystore.jks -Djavax.net.ssl.keyStorePassword=changeit -Djavax.net.ssl.keyStoreType=jks
          resources:
            limits:
              memory: 4Gi
              cpu: 1000m
          volumeMounts:
          - mountPath: /var/run/secrets/keystore
            name: keystore-volume
          - name: s3-client-certs
            mountPath: /var/run/secrets/certs
        volumes:
         - name: keystore-volume
           emptyDir: {}
         - name: s3-client-certs
           secret:
            secretName: gaja

    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: elastic-block
        resources:
          requests:
            storage: 100Gi
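
As an aside, the endpoint and credentials can also be supplied through repository-s3 client settings and the Elasticsearch keystore instead of only in the repository request. A sketch of that variant; the secret name s3-credentials is hypothetical and would hold the keystore entries s3.client.default.access_key and s3.client.default.secret_key:

spec:
  # ECK injects the secret's keys into the Elasticsearch keystore
  secureSettings:
  - secretName: s3-credentials
  nodeSets:
  - name: master
    count: 1
    config:
      # non-secure client settings for the repository-s3 plugin
      s3.client.default.endpoint: s3.tally.srv.prod.k-net.com
      s3.client.default.protocol: https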

When I tried to register the snapshot repository:

PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "mybucket",
    "endpoint": "s3.tally.srv.prod.k-net.com"
  }
}

Output: 

{
  "error": {
    "root_cause": [
      {
        "type": "repository_verification_exception",
        "reason": "[my_s3_repository] path  is not accessible on master node"
      }
    ],
    "type": "repository_verification_exception",
    "reason": "[my_s3_repository] path  is not accessible on master node",
    "caused_by": {
      "type": "i_o_exception",
      "reason": "Unable to upload object [tests-mK_2xuEeTHeKLpxWJidD_g/master.dat] using a single upload",
      "caused_by": {
        "type": "amazon_s3_exception",
        "reason": "SSL Certificate Required (Service: Amazon S3; Status Code: 496; Error Code: 496 SSL Certificate Required; Request ID: null; S3 Extended Request ID: null)"
      }
    }
  },
  "status": 500
}
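
For context, the 496 "SSL Certificate Required" response is the server reporting that it required a client certificate and did not receive one during the TLS handshake. A quick way to check the certificates outside Elasticsearch is a direct curl call against the same endpoint, a sketch using the PEM files mentioned above:

# if the client certificate is accepted, this should no longer fail with 496
curl -v --cacert root.crt --cert client.crt --key client.key \
  https://s3.tally.srv.prod.k-net.com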

I don't really know what to do here and I'm stuck. Can anyone please help me with this?

Thanks
Mahesh


I'm not sure it's a good idea to change the SSL keystore for Elasticsearch.

Did you manage to make it work using the default "-cacerts" keystore?
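
A sketch of importing the MinIO CA into the bundled JDK's default cacerts truststore (the alias is arbitrary; changeit is the stock cacerts password):

# import the MinIO root CA into the JDK's default truststore
/usr/share/elasticsearch/jdk/bin/keytool -importcert -cacerts -noprompt \
  -alias minio-root-ca -file /var/run/secrets/certs/ca-crt.pem \
  -storepass changeit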

Also, be sure the endpoint (DNS name) you use matches what the certificate declares.
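
Something like this shows which hostnames the server certificate actually declares, as a quick check against the endpoint used in the repository settings:

# the DNS names listed here must include the endpoint hostname
openssl s_client -connect s3.tally.srv.prod.k-net.com:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'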

Did you find a solution? I'm running into a similar issue.