Unable to run quickstart on bare metal k8s with error "Failed to apply spec change: adjust resources: adjust discovery config: Operation cannot be fulfilled on..."

I'm trying to install the Elasticsearch quickstart on bare-metal Kubernetes, but the ES pod remains in an "Init:CrashLoopBackOff" state and never starts.
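
For reference, the manifest I applied is essentially the one from the ECK quickstart documentation (reproduced here from the docs, so it may differ slightly from what I actually ran):

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.14.2
  nodeSets:
  - name: default
    count: 1
    config:
      # disables mmap so vm.max_map_count does not have to be raised on the hosts
      node.store.allow_mmap: false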

What I discovered, as you can see from the events log below, is a message saying: "Failed to apply spec change: adjust resources: adjust discovery config: Operation cannot be fulfilled on elasticsearches.elasticsearch.k8s.elastic.co "quickstart": the object has been modified; please apply your changes to the latest version and try again"

kubectl get es

NAME         HEALTH    NODES   VERSION   PHASE             AGE
quickstart   unknown           7.14.2    ApplyingChanges   80m

kubectl get events

47m         Normal    LeaderElection            endpoints/cluster.local-nfs-subdir-external-provisioner            nfs-subdir-external-provisioner-d6cc9d55-9s4sq_e7f87e30-0c6a-4955-88e5-61d014355f2e became leader
43m         Normal    ExternalProvisioning      persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0   waiting for a volume to be created, either by external provisioner "cluster.local/nfs-subdir-external-provisioner" or manually created by system administrator
43m         Normal    Provisioning              persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0   External provisioner is provisioning volume for claim "default/elasticsearch-data-quickstart-es-default-0"
43m         Normal    ProvisioningSucceeded     persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0   Successfully provisioned volume pvc-ec31c1f7-e2ee-4dec-8583-ee89850fb427
47m         Normal    Scheduled                 pod/nfs-subdir-external-provisioner-d6cc9d55-9s4sq                 Successfully assigned default/nfs-subdir-external-provisioner-d6cc9d55-9s4sq to node2
47m         Normal    Pulled                    pod/nfs-subdir-external-provisioner-d6cc9d55-9s4sq                 Container image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" already present on machine
47m         Normal    Created                   pod/nfs-subdir-external-provisioner-d6cc9d55-9s4sq                 Created container nfs-subdir-external-provisioner
47m         Normal    Started                   pod/nfs-subdir-external-provisioner-d6cc9d55-9s4sq                 Started container nfs-subdir-external-provisioner
47m         Normal    SuccessfulCreate          replicaset/nfs-subdir-external-provisioner-d6cc9d55                Created pod: nfs-subdir-external-provisioner-d6cc9d55-9s4sq
47m         Normal    ScalingReplicaSet         deployment/nfs-subdir-external-provisioner                         Scaled up replica set nfs-subdir-external-provisioner-d6cc9d55 to 1
49m         Warning   FailedScheduling          pod/nginx-deployment-644599b9c9-ljrpt                              0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
49m         Normal    Scheduled                 pod/nginx-deployment-644599b9c9-ljrpt                              Successfully assigned default/nginx-deployment-644599b9c9-ljrpt to node2
49m         Normal    TaintManagerEviction      pod/nginx-deployment-644599b9c9-ljrpt                              Cancelling deletion of Pod default/nginx-deployment-644599b9c9-ljrpt
48m         Normal    Pulled                    pod/nginx-deployment-644599b9c9-ljrpt                              Container image "nginx:1.16" already present on machine
48m         Normal    Created                   pod/nginx-deployment-644599b9c9-ljrpt                              Created container nginx
48m         Normal    Started                   pod/nginx-deployment-644599b9c9-ljrpt                              Started container nginx
49m         Warning   FailedScheduling          pod/nginx-deployment-644599b9c9-qc6m8                              0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
49m         Normal    Scheduled                 pod/nginx-deployment-644599b9c9-qc6m8                              Successfully assigned default/nginx-deployment-644599b9c9-qc6m8 to node2
49m         Normal    TaintManagerEviction      pod/nginx-deployment-644599b9c9-qc6m8                              Cancelling deletion of Pod default/nginx-deployment-644599b9c9-qc6m8
49m         Normal    Pulled                    pod/nginx-deployment-644599b9c9-qc6m8                              Container image "nginx:1.16" already present on machine
48m         Normal    Created                   pod/nginx-deployment-644599b9c9-qc6m8                              Created container nginx
48m         Normal    Started                   pod/nginx-deployment-644599b9c9-qc6m8                              Started container nginx
55m         Normal    SuccessfulCreate          replicaset/nginx-deployment-644599b9c9                             Created pod: nginx-deployment-644599b9c9-qc6m8
55m         Normal    SuccessfulCreate          replicaset/nginx-deployment-644599b9c9                             Created pod: nginx-deployment-644599b9c9-ljrpt
55m         Normal    ScalingReplicaSet         deployment/nginx-deployment                                        Scaled up replica set nginx-deployment-644599b9c9 to 2
55m         Normal    NodeHasSufficientMemory   node/node1                                                         Node node1 status is now: NodeHasSufficientMemory
55m         Normal    NodeHasNoDiskPressure     node/node1                                                         Node node1 status is now: NodeHasNoDiskPressure
55m         Normal    NodeHasSufficientPID      node/node1                                                         Node node1 status is now: NodeHasSufficientPID
55m         Normal    Starting                  node/node1                                                         Starting kubelet.
55m         Normal    NodeAllocatableEnforced   node/node1                                                         Updated Node Allocatable limit across pods
55m         Normal    NodeHasSufficientMemory   node/node1                                                         Node node1 status is now: NodeHasSufficientMemory
55m         Normal    NodeHasNoDiskPressure     node/node1                                                         Node node1 status is now: NodeHasNoDiskPressure
55m         Normal    NodeHasSufficientPID      node/node1                                                         Node node1 status is now: NodeHasSufficientPID
55m         Normal    RegisteredNode            node/node1                                                         Node node1 event: Registered Node node1 in Controller
54m         Normal    Starting                  node/node1                                                         
54m         Normal    NodeReady                 node/node1                                                         Node node1 status is now: NodeReady
49m         Normal    Starting                  node/node2                                                         Starting kubelet.
49m         Normal    NodeHasSufficientMemory   node/node2                                                         Node node2 status is now: NodeHasSufficientMemory
49m         Normal    NodeHasNoDiskPressure     node/node2                                                         Node node2 status is now: NodeHasNoDiskPressure
49m         Normal    NodeHasSufficientPID      node/node2                                                         Node node2 status is now: NodeHasSufficientPID
49m         Normal    NodeAllocatableEnforced   node/node2                                                         Updated Node Allocatable limit across pods
3m56s       Normal    CIDRNotAvailable          node/node2                                                         Node node2 status is now: CIDRNotAvailable
49m         Normal    RegisteredNode            node/node2                                                         Node node2 event: Registered Node node2 in Controller
49m         Normal    Starting                  node/node2                                                         
49m         Normal    NodeReady                 node/node2                                                         Node node2 status is now: NodeReady
43m         Warning   FailedScheduling          pod/quickstart-es-default-0                                        0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
43m         Normal    Scheduled                 pod/quickstart-es-default-0                                        Successfully assigned default/quickstart-es-default-0 to node2
42m         Normal    Pulled                    pod/quickstart-es-default-0                                        Container image "docker.elastic.co/elasticsearch/elasticsearch:7.14.2" already present on machine
42m         Normal    Created                   pod/quickstart-es-default-0                                        Created container elastic-internal-init-filesystem
42m         Normal    Started                   pod/quickstart-es-default-0                                        Started container elastic-internal-init-filesystem
3m42s       Warning   BackOff                   pod/quickstart-es-default-0                                        Back-off restarting failed container
43m         Normal    SuccessfulCreate          statefulset/quickstart-es-default                                  create Claim elasticsearch-data-quickstart-es-default-0 Pod quickstart-es-default-0 in StatefulSet quickstart-es-default success
43m         Normal    SuccessfulCreate          statefulset/quickstart-es-default                                  create Pod quickstart-es-default-0 in StatefulSet quickstart-es-default successful
43m         Normal    AssociationStatusChange   elasticsearch/quickstart                                           Association status changed from [] to []
43m         Warning   ReconciliationError       elasticsearch/quickstart                                           Failed to apply spec change: adjust resources: adjust discovery config: Operation cannot be fulfilled on elasticsearches.elasticsearch.k8s.elastic.co "quickstart": the object has been modified; please apply your changes to the latest version and try again

kubectl get all

NAME                                                 READY   STATUS                  RESTARTS         AGE
pod/nfs-subdir-external-provisioner-d6cc9d55-9s4sq   1/1     Running                 0                85m
pod/nginx-deployment-644599b9c9-ljrpt                1/1     Running                 0                92m
pod/nginx-deployment-644599b9c9-qc6m8                1/1     Running                 0                92m
pod/quickstart-es-default-0                          0/1     Init:CrashLoopBackOff   20 (3m50s ago)   81m

NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes                ClusterIP   192.168.2.1     <none>        443/TCP        93m
service/nginx-service             NodePort    192.168.2.162   <none>        85:30007/TCP   89m
service/quickstart-es-default     ClusterIP   None            <none>        9200/TCP       81m
service/quickstart-es-http        ClusterIP   192.168.2.60    <none>        9200/TCP       81m
service/quickstart-es-transport   ClusterIP   None            <none>        9300/TCP       81m

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-subdir-external-provisioner   1/1     1            1           85m
deployment.apps/nginx-deployment                  2/2     2            2           92m

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-subdir-external-provisioner-d6cc9d55   1         1         1       85m
replicaset.apps/nginx-deployment-644599b9c9                2         2         2       92m

NAME                                     READY   AGE
statefulset.apps/quickstart-es-default   0/1     81m

This is a transient error that can occur when there are concurrent updates to the same object; the operator simply retries the reconciliation, so it is unlikely to be the reason the pod is crashing.
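
If you want to confirm the reconciliation is being retried, you can check the operator logs, for example (assuming the default installation in the elastic-system namespace):

kubectl logs -n elastic-system statefulset/elastic-operator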

Could you check the logs of the init containers with kubectl logs quickstart-es-default-0 --all-containers?
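If --all-containers does not show anything useful, you can also target the failing init container directly (its name, elastic-internal-init-filesystem, appears in your events) and describe the pod:

kubectl logs quickstart-es-default-0 -c elastic-internal-init-filesystem --previous
kubectl describe pod quickstart-es-default-0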
Thanks
