Elasticsearch snapshots to a remote NAS via NFS

Hello,

I am running an Elasticsearch cluster on Kubernetes using the Elastic Cloud on Kubernetes (ECK) operator.

My Elasticsearch data volume is currently reaching 80% usage, so I would like to configure a snapshot repository of type `fs` pointing to a remote NAS that is exposed externally over NFS.

My plan is:

1. Mount the NFS export (/export/elasticsearch_snapshots) into the Elasticsearch pods under /mnt/snapshots.
2. Update the Elasticsearch CRD to include path.repo: ["/mnt/snapshots"].
3. Register the repository via the _snapshot API.
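For context, step 3 would look something like the following. This is a sketch, assuming the cluster is reachable on localhost:9200 (e.g. via port-forward) and using a placeholder repository name nfs_repo; adjust host, credentials, and name to your setup:

```shell
# Register an fs-type snapshot repository backed by the NFS mount.
# "nfs_repo" and the localhost endpoint are placeholders.
curl -k -u "elastic:$ELASTIC_PASSWORD" -X PUT "https://localhost:9200/_snapshot/nfs_repo" \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "fs",
    "settings": {
      "location": "/mnt/snapshots"
    }
  }'

# Verify that every node can read and write the repository path.
curl -k -u "elastic:$ELASTIC_PASSWORD" -X POST "https://localhost:9200/_snapshot/nfs_repo/_verify"
```

The verify call is worth running right after registration: it catches permission or mount problems on individual nodes before the first snapshot does.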

My questions are:

1. If I update the CRD to add the NFS volume and path.repo, will my existing Elasticsearch data (stored in the main PVC defined by volumeClaimTemplates) remain safe and unaffected?
2. Is this the correct and recommended way to configure an external NFS repository for snapshots in ECK?

Here is my current manifest:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: es-cluster
  namespace: elastic-system
spec:
  version: 8.15.0
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
      path.repo: ["/mnt/snapshots"]
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          volumeMounts:
          - name: snapshot-repo
            mountPath: /mnt/snapshots
        volumes:
        - name: snapshot-repo
          nfs:
            server: "@Nasip"
            path: /export/elasticsearch_snapshots
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
```

Are there best practices for handling incremental snapshots with this setup (cleanup policy, automation with CronJobs, etc.)?
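On the last point, one option I am considering instead of CronJobs is Elasticsearch's built-in snapshot lifecycle management (SLM), since it handles both scheduling and retention (cleanup of expired snapshots) server-side. A sketch, again assuming a repository registered under the placeholder name nfs_repo and a cluster reachable on localhost:9200:

```shell
# Create an SLM policy: nightly snapshot at 01:30, retained for up to 30 days,
# always keeping at least 7 and at most 50 snapshots.
# The policy id "nightly-snapshots" and the endpoint are placeholders.
curl -k -u "elastic:$ELASTIC_PASSWORD" -X PUT "https://localhost:9200/_slm/policy/nightly-snapshots" \
  -H 'Content-Type: application/json' \
  -d '{
    "schedule": "0 30 1 * * ?",
    "name": "<nightly-snap-{now/d}>",
    "repository": "nfs_repo",
    "config": {
      "indices": ["*"],
      "include_global_state": true
    },
    "retention": {
      "expire_after": "30d",
      "min_count": 7,
      "max_count": 50
    }
  }'
```

Would this be preferable to an external CronJob in an ECK setup?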

Thank you for your help!