Elasticsearch: not able to modify the settings

We have created an Elasticsearch instance in a Kubernetes cluster using ECK. We modified the volume size from 1Ti to 2Ti. Now we want to make additional configuration changes, but the elastic-operator is not recognizing the new volume and gives the error:

denied the request: Elasticsearch.elasticsearch.k8s.elastic.co "elasticsearch" is invalid: spec.nodeSet[0].volumeClaimTemplates: Invalid value: v1.PersistentVolumeClaim(nil): Volume claim templates cannot be modified

We have a huge volume of data already indexed into the nodes. We want to make modifications to Elasticsearch, but we don't want them to impact the existing PVCs. Can this be done? Any guidance on such a scenario would be appreciated.

Which version of ECK are you running? You can only increase persistent volume storage size starting with ECK 1.3.0, and only if your persistent volume's storage class allows inline resize.
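As a quick check, you can list your storage classes and see whether expansion is allowed; look at the one your volume claim templates reference:

```shell
# List storage classes and whether each allows volume expansion.
# The ALLOWVOLUMEEXPANSION column must show "true" for inline resize to work.
kubectl get storageclass -o custom-columns=NAME:.metadata.name,ALLOWVOLUMEEXPANSION:.allowVolumeExpansion
```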

If not, then you should revert your volume claim templates to the previous size (1Ti). This should be harmless to the current nodes.
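As a sketch, reverting just means setting the storage request in your manifest back to its previous value; the nodeSet name and count below are placeholders, so keep whatever your existing spec uses and only change the storage field:

```yaml
# Hypothetical excerpt of the Elasticsearch resource; only the storage
# request is changed back to 1Ti, everything else stays as it was.
spec:
  nodeSets:
  - name: default
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data   # ECK's default claim name
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Ti
```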

See this documentation covering ECK 1.4.0 for more details.

Thank you for the update.

Our ECK version is 1.16.15.
You have mentioned that the 'persistent volume storage class allows inline resize'. Can you please let me know how we can check this for our instance?
Also, does the reclaim policy of the PV matter here? Our current reclaim policy is defaulted to 'Delete'. Will this cause the PVs to be deleted once we unbind them from our Elasticsearch nodes?

My node is already holding 1.5Ti of data. If I resize it to 1Ti, I will lose data.
I do not want to lose the data that is available.

1.16.15 looks like your Kubernetes version. Assuming you installed ECK using the Helm chart and are running ECK 1.4.0 (you can check this by running helm list --all-namespaces | grep eck-operator), you could be running into a known issue with the validating webhook.

If the above assumptions are correct (ECK 1.4.0 installed using Helm), you can try patching the validating webhook as follows before retrying the volume expansion operation.

```shell
# Find the name of the ECK validating webhook configuration,
# then set matchPolicy to Exact on both Elasticsearch validation webhooks.
WEBHOOK=$(kubectl get validatingwebhookconfiguration --no-headers -o custom-columns=NAME:.metadata.name | grep 'k8s.elastic.co')
kubectl patch validatingwebhookconfiguration "$WEBHOOK" --patch='{"webhooks": [{"name": "elastic-es-validation-v1.k8s.elastic.co", "matchPolicy": "Exact"}, {"name": "elastic-es-validation-v1beta1.k8s.elastic.co", "matchPolicy": "Exact"}]}'
```
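To confirm the patch was applied, you can read the matchPolicy of each webhook back out of the configuration; this assumes the WEBHOOK variable from the patch step is still set:

```shell
# Prints the matchPolicy of every webhook in the configuration;
# after a successful patch, each should report "Exact".
kubectl get validatingwebhookconfiguration "$WEBHOOK" -o jsonpath='{.webhooks[*].matchPolicy}'
```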
