Install only one elasticsearch master with helm?


#1

Hello!
I'm trying to install Elasticsearch with Helm on Kubernetes.

With almost-default settings, Elasticsearch installs fine.

helm install --namespace efk --name elasticsearch elastic/elasticsearch --version 6.6.0-alpha1 --set resources.requests.memory=1Gi --set volumeClaimTemplate.storageClassName=nfs --set volumeClaimTemplate.resources.requests.storage=100Gi

All OK.

But I do not need as many as 3 pods (replicas). Which settings do I need to run only one master? I'm installing for educational purposes.

helm install --namespace efk --name elasticsearch elastic/elasticsearch --version 6.6.0-alpha1 --set replicas=1 --set minimumMasterNodes=1 --set resources.requests.memory=1Gi --set volumeClaimTemplate.storageClassName=nfs --set volumeClaimTemplate.resources.requests.storage=200Gi

With these settings I get 1 replica, but it does not start. I get the following error:
not enough master nodes discovered during pinging

Installing version 7 returns this error.

helm install --namespace efk --name elasticsearch elastic/elasticsearch --set resources.requests.memory=1Gi --set volumeClaimTemplate.storageClassName=nfs --set volumeClaimTemplate.resources.requests.storage=100Gi --set imageTag=7.0.0-alpha2 --set esMajorVersion=7

org.elasticsearch.discovery.MasterNotDiscoveredException: null
	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:259) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:561) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:660) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2019-02-11T13:40:13,876][WARN ][o.e.c.c.ClusterFormationFailureHelper] [elasticsearch-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered [{elasticsearch-master-2}{9b0L87JqRH2kZvdCdNsTFg}{R1ZgC5foT6-lDcafAhqOYw}{10.233.102.178}{10.233.102.178:9300}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, {elasticsearch-master-1}{pG6FkrF7Q5CaTRh705_f-w}{iwnpJ1YRQNOBXjE9WEKtIg}{10.233.75.45}{10.233.75.45:9300}{ml.machine_memory=2147483648, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}]; discovery will continue using [10.233.75.45:9300, 10.233.102.178:9300] from hosts providers and [{elasticsearch-master-0}{I9nurv-CROaTEkhVQoaPZQ}{cnbBODWOSGyFKmz59KQ9Ow}{10.233.71.33}{10.233.71.33:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}] from last-known cluster state

My question is:
How do I configure the chart so that there is only one master?
Thanks!


(Michael Russell) #2

Hi JDev!

The commands you posted are actually correct. However, I think the issue you are running into here is that you redeployed the same cluster without removing the PVCs (persistent volume claims) from the previous three-node cluster. This means the single master node started up thinking it was still part of a 3-node cluster.

The log you posted at the end mentions elasticsearch-master-2, which confirms my theory.

[2019-02-11T13:40:13,876][WARN ][o.e.c.c.ClusterFormationFailureHelper] [elasticsearch-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered [{elasticsearch-master-2}

Since you mentioned that you are doing this for "educational purposes", the best thing to do here is to remove the current Helm release and then delete the PVCs (Kubernetes does not remove these automatically when deleting a StatefulSet).

$ kubectl get pvc
NAME                                          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-master-elasticsearch-master-0   Bound     pvc-94ac3db1-3057-11e9-8303-42010a800173   30Gi       RWO            standard       4m
elasticsearch-master-elasticsearch-master-1   Bound     pvc-94ac3db1-3057-11e9-8303-42010a800173   30Gi       RWO            standard       4m
elasticsearch-master-elasticsearch-master-2   Bound     pvc-94ac3db1-3057-11e9-8303-42010a800173   30Gi       RWO            standard       4m
$ kubectl delete pvc elasticsearch-master-elasticsearch-master-0 elasticsearch-master-elasticsearch-master-1 elasticsearch-master-elasticsearch-master-2
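As a side note, the single-node settings from your second command can also be kept in a values file, so they survive reinstalls and typos. A sketch, using only the flags and sizes from your own commands:

```yaml
# values-single-node.yaml -- sketch of the --set flags used in this thread
replicas: 1
minimumMasterNodes: 1
resources:
  requests:
    memory: 1Gi
volumeClaimTemplate:
  storageClassName: nfs
  resources:
    requests:
      storage: 100Gi
```

Then install with `helm install --namespace efk --name elasticsearch elastic/elasticsearch --version 6.6.0-alpha1 -f values-single-node.yaml`.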

#3

Yes, you are right. I tested on the same PVCs. Version 6 installs, but 7 does not. Can you tell me what's wrong with 7?
Thanks for the help.
Thanks for the help.

helm install --namespace spark --name test elastic/elasticsearch --set replicas=1 --set minimumMasterNodes=1 --set resources.requests.memory=1Gi --set volumeClaimTemplate.storageClassName=nfs --set volumeClaimTemplate.resources.requests.storage=300Gi --set imageTag=7.0.0-alpha2 --set esMajorVersion=7

I get this error.

[2019-02-15T07:01:58,332][WARN ][r.suppressed             ] [elasticsearch-master-0] path: /_cluster/health, params: {wait_for_status=green, timeout=1s}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:259) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:561) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:660) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2019-02-15T07:02:07,266][WARN ][o.e.c.c.ClusterFormationFailureHelper] [elasticsearch-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered []; discovery will continue using [] from hosts providers and [{elasticsearch-master-0}{ITllzlzoSh-IpTItifnj-w}{h1n6gBr7RouqrXJwP81ezA}{10.233.71.34}{10.233.71.34:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}] from last-known cluster state
[2019-02-15T07:02:08,361][WARN ][r.suppressed             ] [elasticsearch-master-0] path: /_cluster/health, params: {wait_for_status=green, timeout=1s}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
	at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:259) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:561) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:660) [elasticsearch-7.0.0-alpha2.jar:7.0.0-alpha2]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2019-02-15T07:02:17,268][WARN ][o.e.c.c.ClusterFormationFailureHelper] [elasticsearch-master-0] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered []; discovery will continue using [] from hosts providers and [{elasticsearch-master-0}{ITllzlzoSh-IpTItifnj-w}{h1n6gBr7RouqrXJwP81ezA}{10.233.71.34}{10.233.71.34:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}] from last-known cluster state

(Christian Dahlqvist) #4

Why would you want to run a cluster with only one master-eligible node and give up high availability in the first place?


#5

Because I'm doing this for educational purposes, and accordingly I have limited resources. In production, of course, more is needed.


(Michael Russell) #6

This should also just work for a v7 cluster. I just tested it using the 7-alpha example with the addition of replicas: 1, and it's working as expected.

Can you show me the output of these commands:

kubectl get pvc
kubectl get sts
kubectl get pods
kubectl get sts -o yaml

This line:

and [cluster.initial_master_nodes] is empty on this node

could potentially be the issue. I'm wondering whether --set esMajorVersion=7 results in a string rather than an integer when it isn't coming directly from a YAML file.
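If type coercion from `--set` is the suspect, a values file sidesteps it entirely, since YAML keeps the bare integer. A sketch combining the values under discussion in this thread:

```yaml
# Passing esMajorVersion through a values file keeps it a YAML integer,
# avoiding any string/integer ambiguity from --set on the command line
esMajorVersion: 7
imageTag: 7.0.0-alpha2
replicas: 1
```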


#7

Maybe I had the same problem as above? I tried to install different versions without removing the PVCs. I'll try again with fresh PVCs.

I also have this question: how do upgrades work with the Docker containers in this case? For example, I now have version 6.6.0, and version 7 beta was recently released. Can I upgrade only by removing the old release and creating a new deployment? As far as I can see, the PVCs only hold the Elasticsearch indices?


#8

No, it's not working.

I am starting Helm with this configuration without pre-created PVCs; they are created automatically during deployment.

helm install elastic/elasticsearch --namespace efk --name efk-elasticsearch --set imageTag=7.0.0-beta1 --set replicas=1 --set minimumMasterNodes=1 --set resources.requests.memory=1Gi --set volumeClaimTemplate.storageClassName=nfs --set volumeClaimTemplate.resources.requests.storage=400Gi

Here is the log:

{"type": "server", "timestamp": "2019-02-18T14:01:45,621+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-7938694163300661173, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Dio.netty.allocator.type=unpooled, -Des.cgroups.hierarchy.override=/, -Xmx1g, -Xms1g, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]"  }
{"type": "server", "timestamp": "2019-02-18T14:02:36,719+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered []; discovery will continue using [] from hosts providers and [{elasticsearch-master-0}{SBil4Xa-RSe2MB98xmb-uw}{CWOZF4MhTnyp-BzkfxgGog}{10.233.71.12}{10.233.71.12:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0"  }
{"type": "server", "timestamp": "2019-02-18T14:02:46,722+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered []; discovery will continue using [] from hosts providers and [{elasticsearch-master-0}{SBil4Xa-RSe2MB98xmb-uw}{CWOZF4MhTnyp-BzkfxgGog}{10.233.71.12}{10.233.71.12:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0"  }
{"type": "server", "timestamp": "2019-02-18T14:02:56,725+0000", "level": "WARN", "component": "o.e.c.c.ClusterFormationFailureHelper", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and [cluster.initial_master_nodes] is empty on this node: have discovered []; discovery will continue using [] from hosts providers and [{elasticsearch-master-0}{SBil4Xa-RSe2MB98xmb-uw}{CWOZF4MhTnyp-BzkfxgGog}{10.233.71.12}{10.233.71.12:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0"  }
{"type": "server", "timestamp": "2019-02-18T14:02:56,799+0000", "level": "WARN", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "timed out while waiting for initial discovery state - timeout: 30s"  }
{"type": "server", "timestamp": "2019-02-18T14:02:56,832+0000", "level": "INFO", "component": "o.e.h.AbstractHttpServerTransport", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "publish_address {10.233.71.12:9200}, bound_addresses {0.0.0.0:9200}"  }
{"type": "server", "timestamp": "2019-02-18T14:02:56,833+0000", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "started"  }
{"type": "server", "timestamp": "2019-02-18T14:03:00,046+0000", "level": "DEBUG", "component": "o.e.a.a.c.h.TransportClusterHealthAction", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "no known master node, scheduling a retry"  }
{"type": "server", "timestamp": "2019-02-18T14:03:01,051+0000", "level": "DEBUG", "component": "o.e.a.a.c.h.TransportClusterHealthAction", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "timed out while retrying [cluster:monitor/health] after failure (timeout [1s])"  }
{"type": "server", "timestamp": "2019-02-18T14:03:01,054+0000", "level": "WARN", "component": "r.suppressed", "cluster.name": "elasticsearch", "node.name": "elasticsearch-master-0",  "message": "path: /_cluster/health, params: {wait_for_status=green, timeout=1s}" ,
"stacktrace": ["org.elasticsearch.discovery.MasterNotDiscoveredException: null",
"at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:259) [elasticsearch-7.0.0-beta1.jar:7.0.0-beta1]",
"at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-7.0.0-beta1.jar:7.0.0-beta1]",
"at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-7.0.0-beta1.jar:7.0.0-beta1]",
"at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:549) [elasticsearch-7.0.0-beta1.jar:7.0.0-beta1]",
"at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-7.0.0-beta1.jar:7.0.0-beta1]",
"at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]",
"at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]",
"at java.lang.Thread.run(Thread.java:834) [?:?]"] }

(Michael Russell) #9

In the command above you are not setting --set esMajorVersion=7, which you were before. Could you try again and give me the following information afterwards?

kubectl get pvc
kubectl get sts
kubectl get pods
kubectl get sts -o yaml

Another side note: --set minimumMasterNodes=1 is not needed for Elasticsearch 7.
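For context: Elasticsearch 7 replaced discovery.zen.minimum_master_nodes with cluster bootstrapping, so when esMajorVersion=7 takes effect, the rendered StatefulSet should contain an environment variable along these lines instead. This is a sketch of what I would expect the chart to render, not output captured from your cluster:

```yaml
# Expected in the container's env when the chart targets Elasticsearch 7:
- name: cluster.initial_master_nodes
  value: elasticsearch-master-0
```

If your StatefulSet YAML still shows discovery.zen.minimum_master_nodes instead, the version-7 settings are not being applied.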


(Michael Russell) #10

I also have this question: how do upgrades work with the Docker containers in this case? For example, I now have version 6.6.0, and version 7 beta was recently released. Can I upgrade only by removing the old release and creating a new deployment? As far as I can see, the PVCs only hold the Elasticsearch indices?

You won't need to remove the old deployment. You can update esMajorVersion and imageTag to do the upgrade. But removing it completely and deploying a new one should also work fine, as long as you re-use the same PVCs.

The issue you were running into earlier is that you tried to redeploy a 3-node cluster as a single node, which was still trying to discover the old masters.
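The in-place upgrade path is just a change to two values followed by `helm upgrade` with the same release name. A sketch, using the release and namespace names from this thread:

```yaml
# Bump both together so the chart templates match the new image, then:
#   helm upgrade --namespace efk elasticsearch elastic/elasticsearch -f <this file>
imageTag: 7.0.0-beta1
esMajorVersion: 7
```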


#11
helm install elastic/elasticsearch --namespace efk --name test-efk --set imageTag=7.0.0-beta1 --set replicas=1 --set esMajorVersion=7 --set resources.requests.memory=1Gi --set volumeClaimTemplate.storageClassName=nfs --set volumeClaimTemplate.resources.requests.storage=10Gi

No, it does not start. The same error.

[root@master ~]# kubectl get pvc --namespace efk
NAME                                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
elasticsearch-master-elasticsearch-master-0   Bound    pvc-3504cd9f-3444-11e9-a95e-0050563c3afe   10Gi       RWO            nfs            5m55s
[root@master ~]# ^C
[root@master ~]# kubectl get sts --namespace efk
NAME                   READY   AGE
elasticsearch-master   0/1     6m29s
[root@master ~]# kubectl get pods --namespace efk
NAME                     READY   STATUS    RESTARTS   AGE
elasticsearch-master-0   0/1     Running   0          6m42s

#12

And yaml

    apiVersion: v1
    items:
    - apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        creationTimestamp: "2019-02-19T12:45:16Z"
        generation: 1
        labels:
          app: elasticsearch-master
          chart: elasticsearch-6.5.0
          heritage: Tiller
          release: test-efk
        name: elasticsearch-master
        namespace: efk
        resourceVersion: "7459192"
        selfLink: /apis/apps/v1/namespaces/efk/statefulsets/elasticsearch-master
        uid: 34d3c417-3444-11e9-a95e-0050563c3afe
      spec:
        podManagementPolicy: Parallel
        replicas: 1
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            app: elasticsearch-master
        serviceName: elasticsearch-master-headless
        template:
          metadata:
            creationTimestamp: null
            labels:
              app: elasticsearch-master
              chart: elasticsearch-6.5.0
              heritage: Tiller
              release: test-efk
            name: elasticsearch-master
          spec:
            affinity:
              podAntiAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                    - key: app
                      operator: In
                      values:
                      - elasticsearch-master
                  topologyKey: kubernetes.io/hostname
            containers:
            - env:
              - name: node.name
                valueFrom:
                  fieldRef:
                    apiVersion: v1
                    fieldPath: metadata.name
              - name: discovery.zen.ping.unicast.hosts
                value: elasticsearch-master-headless
              - name: cluster.name
                value: elasticsearch
              - name: discovery.zen.minimum_master_nodes
                value: "2"
              - name: network.host
                value: 0.0.0.0
              - name: ES_JAVA_OPTS
                value: -Xmx1g -Xms1g
              - name: node.master
                value: "true"
              - name: node.data
                value: "true"
              - name: node.ingest
                value: "true"
              image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0-beta1
              imagePullPolicy: IfNotPresent
              name: elasticsearch
              ports:
              - containerPort: 9200
                name: http
                protocol: TCP
              - containerPort: 9300
                name: transport
                protocol: TCP
              readinessProbe:
                exec:
                  command:
                  - sh
                  - -c
                  - |
                    #!/usr/bin/env bash -e
                    # If the node is starting up wait for the cluster to be green
                    # Once it has started only check that the node itself is responding
                    START_FILE=/tmp/.es_start_file

                    http () {
                        local path="${1}"
                        if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
                          BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
                        else
                          BASIC_AUTH=''
                        fi
                        curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path}
                    }

                    if [ -f "${START_FILE}" ]; then
                        echo 'Elasticsearch is already running, lets check the node is healthy'
                        http "/"
                    else
                        echo 'Waiting for elasticsearch cluster to become green'
                        if http "/_cluster/health?wait_for_status=green&timeout=1s" ; then
                            touch ${START_FILE}
                            exit 0
                        else
                            echo 'Cluster is not yet green'
                            exit 1
                        fi
                    fi
                failureThreshold: 3
                initialDelaySeconds: 10
                periodSeconds: 10
                successThreshold: 3
                timeoutSeconds: 5
              resources:
                limits:
                  cpu: "1"
                  memory: 2Gi
                requests:
                  cpu: 100m
                  memory: 1Gi
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
              - mountPath: /usr/share/elasticsearch/data
                name: elasticsearch-master
            dnsPolicy: ClusterFirst
            initContainers:
            - command:
              - sysctl
              - -w
              - vm.max_map_count=262144
              image: docker.elastic.co/elasticsearch/elasticsearch:7.0.0-beta1
              imagePullPolicy: IfNotPresent
              name: configure-sysctl
              resources: {}
              securityContext:
                privileged: true
                procMount: Default
                runAsUser: 0
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
            restartPolicy: Always
            schedulerName: default-scheduler
            securityContext:
              fsGroup: 1000
            terminationGracePeriodSeconds: 120
        updateStrategy:
          type: RollingUpdate
        volumeClaimTemplates:
        - metadata:
            creationTimestamp: null
            name: elasticsearch-master
          spec:
            accessModes:
            - ReadWriteOnce
            dataSource: null
            resources:
              requests:
                storage: 10Gi
            storageClassName: nfs
            volumeMode: Filesystem
          status:
            phase: Pending
      status:
        collisionCount: 0
        currentReplicas: 1
        currentRevision: elasticsearch-master-54958c8fbc
        observedGeneration: 1
        replicas: 1
        updateRevision: elasticsearch-master-54958c8fbc
        updatedReplicas: 1
    kind: List
    metadata:

(Michael Russell) #13

Version 6.5.0 of the chart did not have support for Elasticsearch 7. Can you try it with the latest version?

--version 6.6.0-alpha1

#14

Hmm, it works that way. I thought the --version flag only affected the installation and configuration details for version 6. Thanks, understood!


(system) closed #15

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.