I was deploying an Elasticsearch cluster as a StatefulSet and upgraded elastic-operator to 1.3.0. I can see that volume expansion is supported in the 1.3.0 release, but I am still seeing the old storage size. Is volume expansion supported in the elastic-operator:1.3.0 release?
As indicated by the docs, it will only work if your storage provisioner itself supports volume expansion. You also need the storage class to have allowVolumeExpansion: true.
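For reference, a minimal StorageClass with expansion enabled might look like the sketch below (the provisioner shown is just an example; substitute whichever one your cluster actually uses):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd  # example only; use your cluster's provisioner
allowVolumeExpansion: true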
If it does not work for you, can you please provide your Elasticsearch resource YAML manifest, along with the manifest of the storage class you are using and the YAML manifests of the PVC and PV resources?
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch-sample
spec:
  version: 7.9.0
  nodeSets:
  - name: master
    count: 1
    config:
      # most Elasticsearch configuration parameters are possible to set, e.g: node.attr.attr_name: attr_value
      node.master: true
      node.data: false
      node.ingest: false
      node.ml: false
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
    podTemplate:
      metadata:
        labels:
          # additional labels for pods
          foo: bar
      spec:
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            limits:
              memory: 5Gi
              cpu: 2
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
  - name: data-ingest
    count: 1
    config:
      node.master: false
      node.data: true
      node.ingest: true
      node.ml: true
      # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
      node.store.allow_mmap: false
    podTemplate:
      metadata:
        labels:
          # additional labels for pods
          foo: bar
      spec:
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            limits:
              memory: 5Gi
              cpu: 2
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    # request 3Gi of persistent data storage for pods in this topology element
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi
        storageClassName: standard
We are not getting an error, but it still shows the old storage value.
When we try to set allowVolumeExpansion: true in the Elasticsearch CR, we get an error saying that field is unknown.
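Note that allowVolumeExpansion is a field of the StorageClass, not of the Elasticsearch CRD, which is why the operator rejects it as unknown. Assuming your storage class is the standard one referenced in the manifest above, you can enable it in place with:

kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}'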
Am I reading correctly that your volume is already 100Gi?
In that case a resize to 3Gi as requested in your Elasticsearch spec would not work (you can only increase the size).
Or does the PVC manifest you're showing me correspond to a different Elasticsearch manifest? This 3Gi/100Gi mismatch seems strange.
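In case it helps to double-check, the size actually provisioned can be read straight from the PVC status (claim name assumed from the sample manifest above):

kubectl get pvc elasticsearch-data-elasticsearch-sample-es-data-ingest-0 \
  -o jsonpath='{.status.capacity.storage}'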
Yes, I pasted a different one. I deployed a new Elasticsearch cluster with a new storage value, and even then I am not able to increase the size. Is this because of the StatefulSet?
@knagasri can you please paste an Elasticsearch manifest, along with a pvc manifest where that happens?
Also: which Kubernetes version are you running?
Sorry to reiterate my question: can you please also paste the PVC resource that matches that Elasticsearch manifest? (kubectl get pvc elasticsearch-data-elasticsearch-sample-es-data-ingest-0 -o yaml)
@knagasri the PVC says your volume is 100Gi. The Elasticsearch manifest requests a volume of 100Gi.
Are those the manifests where you tried to resize the volume?
{"log.level":"error","@timestamp":"2020-12-17T11:10:27.359Z","log.logger":"controller-runtime.controller","message":"Reconciler error","service.version":"1.1.0-29e7447f","service.type":"eck","ecs.version":"1.4.0","controller":"elasticsearch-controller","request":"es-upgrade1/elasticsearch-config","error":"StatefulSet.apps \"elasticsearch-config-es-data-ingest\" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden","errorCauses":[{"error":"StatefulSet.apps \"elasticsearch-config-es-data-ingest\" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden"}],"error.stack_trace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.5.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.5.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.5.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.17.2/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.17.2/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.17.2/pkg/util/wait/wait.go:88"}
Actually, the manifest I provided was the correct one. In the operator logs I was getting an error about permission to change the PVC. It is now sorted out. Thanks!