Moving to a different Elasticsearch storage using ECK

Hello, today I wanted to switch the type of storage our Elasticsearch cluster (deployed with the ECK operator) is using. I found here that it should be possible by creating a new nodeSet with the desired configuration and deleting the old one (sketched below).
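
To make that concrete, here is a minimal sketch of how I understood the approach; the new nodeSet name and target storage class are illustrative placeholders, not our real values:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 8.14.1
  nodeSets:
  # the old nodeSet ("default") is removed and the new one is added in the
  # same apply; ECK should then migrate the data to the new pods before
  # tearing the old ones down
  - name: new-storage                # illustrative name for the new nodeSet
    count: 3
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Ti
        storageClassName: managed-csi   # illustrative target storage class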

However, after I did exactly that, I see just one failing pod from the newly created nodeSet, reporting the following:

can't add node {elasticsearch-es-elastic-0}{<same_id>}{<some_params>}, found existing node {elasticsearch-es-default-1}{<same_id>}{<some_params>} with the same id but is a different node instance

The only things I have changed in our Elasticsearch resource specification are nodeSets[0].name and volumeClaimTemplates[0].spec.selector.matchLabels (roughly as shown below).
What am I doing wrong? How can I move to different storage while migrating all existing data?
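
For completeness, the change I applied looks roughly like this; elastic is the new nodeSet name visible in the error above, while the new matchLabels value is an anonymized placeholder:

spec:
  nodeSets:
  - name: elastic                        # was: default
    count: 3
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Ti
        selector:
          matchLabels:
            app: eck-npr-new             # was: eck-npr (placeholder for the new PV label)
        storageClassName: azurefile-csi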

Thank you in advance :slight_smile:

Hey :wave:

Can you share your Elasticsearch object (YAML) and StorageClass (if it's your production StorageClass, don't forget to anonymize it)?

Hello, sure, here it is:
Elasticsearch

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  annotations:
    eck.k8s.elastic.co/license: basic
    eck.k8s.elastic.co/orchestration-hints: '{"no_transient_settings":true,"service_accounts":true,"desired_nodes":{"version":4,"hash":"1105308204"}}'
    elasticsearch.k8s.elastic.co/cluster-uuid: xxx
    meta.helm.sh/release-name: eck
    meta.helm.sh/release-namespace: eck-npr-svc-01
  creationTimestamp: "2024-06-20T07:19:09Z"
  generation: 11
  labels:
    app.kubernetes.io/instance: eck
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: eck-elasticsearch
    helm.sh/chart: eck-elasticsearch-0.11.0
  name: elasticsearch
  namespace: eck-npr-svc-01
  resourceVersion: "xxx"
  uid: xxx
spec:
  auth:
    roles:
    - secretName: eck-access-config
  http:
    service:
      metadata: {}
      spec: {}
    tls:
      certificate: {}
  image: docker.elastic.co/elasticsearch/elasticsearch:8.14.1
  monitoring:
    logs: {}
    metrics: {}
  nodeSets:
  - config:
      node.store.allow_mmap: false
    count: 3
    name: default
    podTemplate:
      metadata:
        creationTimestamp: null
      spec:
        affinity:
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: agentpool
                  operator: In
                  values:
                  - eck0001
        containers:
        - name: elasticsearch
          resources:
            limits:
              cpu: "2"
              memory: 8Gi
        tolerations:
        - effect: NoExecute
          key: DeployEck
          operator: Exists
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Ti
        selector:
          matchLabels:
            app: eck-npr
        storageClassName: azurefile-csi
  transport:
    service:
      metadata: {}
      spec: {}
    tls:
      certificate: {}
      certificateAuthorities: {}
  updateStrategy:
    changeBudget: {}
  version: 8.14.1

We are using statically created PersistentVolumes that look like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
  creationTimestamp: "2024-06-19T13:57:08Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    app: eck-npr
  name: eck-npr-fs-pv-01
  resourceVersion: "xxx"
  uid: xxx
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Ti
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: elasticsearch-data-elasticsearch-es-default-2
    namespace: eck-npr-svc-01
    resourceVersion: "xxx"
    uid: xxx
  csi:
    driver: file.csi.azure.com
    nodeStageSecretRef:
      name: eck-npr-sa-access-key
      namespace: eck-npr-svc-01
    volumeAttributes:
      resourceGroup: xxx
      shareName: xxx
    volumeHandle: xxx
  mountOptions:
  - file_mode=0777
  - nobrl
  - mfsymlinks
  - gid=0
  - uid=0
  - dir_mode=0777
  - nosharesock
  - cache=strict
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azurefile-csi
  volumeMode: Filesystem
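
For context on how these static PVs get picked up: each PV carries the app: eck-npr label that the volumeClaimTemplate's selector.matchLabels targets, and its claimRef pins it to one generated PVC, named <claim-template-name>-<pod-name>. A simplified view of the pairing, using only values from the manifests above:

# PV side (static, persistentVolumeReclaimPolicy: Retain):
# labeled so a claim with a matching selector can bind it
apiVersion: v1
kind: PersistentVolume
metadata:
  name: eck-npr-fs-pv-01
  labels:
    app: eck-npr
spec:
  claimRef:                    # pins this PV to one specific generated PVC
    kind: PersistentVolumeClaim
    name: elasticsearch-data-elasticsearch-es-default-2
    namespace: eck-npr-svc-01
---
# PVC side, as generated by ECK from the volumeClaimTemplate:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-elasticsearch-es-default-2
  namespace: eck-npr-svc-01
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Ti
  selector:
    matchLabels:
      app: eck-npr
  storageClassName: azurefile-csi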