How to configure nfs in ECK k8s Rancher

Hi
I have a problem setting a volume in the all-in-one.yaml file. I am trying to map an NFS volume to persist data externally instead of using the Elastic default.
Could someone help me with this?

My file.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elastic-operator
  namespace: elasticsearch-funcional
  labels:
    control-plane: elastic-operator
spec:
  selector:
    matchLabels:
      control-plane: elastic-operator
  serviceName: elastic-operator
  template:
    metadata:
      labels:
        control-plane: elastic-operator
    spec:
      serviceAccountName: elastic-operator
      containers:
      - image: docker.elastic.co/eck/eck-operator:1.0.0-beta1
        volumeMounts:
        - name: elasticsearch-data
          mountPath: /usr/share/elasticsearch/data
        name: manager
        args: ["manager", "--operator-roles", "all", "--enable-debug-logs=true"]
        env:
        - name: OPERATOR_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: WEBHOOK_SECRET
          value: webhook-server-secret
        - name: WEBHOOK_PODS_LABEL
          value: elastic-operator
        - name: OPERATOR_IMAGE
          value: docker.elastic.co/eck/eck-operator:1.0.0-beta1
        resources:
          limits:
            cpu: 1
            memory: 150Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9876
          name: webhook-server
          protocol: TCP
      terminationGracePeriodSeconds: 10
      volumes:
      - name: elasticsearch-data
        nfs:
          server: 192.168.134.137
          path: /nfs/dev/elastick8
```

Hey Marinho,
It's hard to tell because it looks like the formatting was lost in the copy and paste (you can wrap manifests in code blocks with three backticks, by the way), but I think multiple concepts might be conflated here. The actual beta operator StatefulSet does not need persistent storage. The volume claim templates inside the Elasticsearch resource (separate from the operator StatefulSet) can be configured like so: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html
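For reference, the claim template from that doc looks roughly like this inside the Elasticsearch resource (a sketch; the nodeSet name, size, and `storageClassName` are placeholders):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.5.1
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data  # the data volume name ECK expects
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: standard  # placeholder: use a class available in your cluster
```

ECK and the StatefulSet controller create the PersistentVolumeClaims from this template automatically; you don't declare the volumes or claims yourself.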

In other words: it looks like you configured the PersistentVolumes in the operator manifest, instead of configuring it in the Elasticsearch resource.

Thank you Anya.

I have now created the NFS configuration on the Elasticsearch resource. Here are my Elasticsearch, PV, and PVC manifests.

Elasticsearch#

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elasticsearch-funcional
spec:
  version: 7.5.1
  nodeSets:
  - name: es-cluster
    count: 1
    config:
      node.master: true
      node.data: false
      node.ingest: false
  - name: es-data
    count: 1
    config:
      node.master: false
      node.data: true
      node.ingest: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          volumeMounts:
          - name: elasticsearch-data
            mountPath: /usr/share/elasticsearch/data
          env:
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: discovery.zen.ping.unicast.hosts
            value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch"
          - name: discovery.zen.minimum_master_nodes
            value: "2"
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        volumes:
        - name: elasticsearch-data
          persistentVolumeClaim:
            claimName: elasticsearch-data-elasticsearch-es-es-cluster-0
```

#PV

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-elasticsearch-es-es-cluster-0
  namespace: elasticsearch-funcional
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 192.168.134.137
    path: /nfs/dev/elastick8s
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-elasticsearch-es-es-data-0
  namespace: elasticsearch-funcional
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 192.168.134.137
    path: /nfs/dev/elastick8s
```

#PVC

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-elasticsearch-es-es-cluster-0
  namespace: elasticsearch-funcional
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-elasticsearch-es-es-data-0
  namespace: elasticsearch-funcional
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 4Gi
```

I am trying to create a cluster with separate master and data nodes, but I'm getting an error that the PVC cannot be bound.

#ERROR#

```
Warning  BackOff  pod/elasticsearch-es-es-data-0  Back-off restarting failed container
```

@Marinho_DevOPs can you arrange the formatting in your message so the yaml manifests appear with the correct indentation? I think you can wrap them in a Markdown code block.

@sebgl I can send you my file by mail.

@sebgl and @Anya_Sabo

When I try to start the Elasticsearch master and data cluster I get an "unbound persistentvolumeclaims" error. How do I get the PVC bound to the Elasticsearch pods with ECK on k8s? It works normally with hostPath, but when I use NFS it gives me the unbound persistentvolumeclaims error. Is it possible to use a claimName on the same pod?

- Unless you have a good reason to do so (like having no volume provisioner), you should probably not define PersistentVolumeClaims and mount them in the spec yourself. Instead, just declare the claim template; ECK and the StatefulSet controller will take care of creating the PersistentVolumeClaims from that template automatically. See this doc.

- You don't have to set `discovery.zen.ping.unicast.hosts`, `discovery.zen.minimum_master_nodes`, or `node.name`. ECK already does that for you.
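Putting both points together, the `es-data` nodeSet from the manifest above could be trimmed to something like this (a sketch based on your posted resource; the `storageClassName` is a hypothetical placeholder):

```yaml
  - name: es-data
    count: 1
    config:
      node.master: false
      node.data: true
      node.ingest: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: nfs  # hypothetical; must match your PersistentVolumes
```

The manual `volumes`, `volumeMounts`, and zen discovery environment variables are gone; ECK creates the claims from the template and wires up discovery itself.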

@sebgl thank you for the information.
But is it possible to use NFS with ECK?

ECK is compatible with any PersistentVolume provider implementation.
If you don't have a dynamic provisioner for NFS volumes, you should be able to manually create your own PersistentVolumes with an NFS spec. This video gives an example.
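For example, a statically provisioned NFS PersistentVolume could look like this (a sketch reusing the server and path from earlier in the thread; the `nfs` storage class name is an assumption and must match the `storageClassName` in your volume claim template so the generated PVCs can bind to it):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-pv-0  # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs  # hypothetical; must match the claim template
  nfs:
    server: 192.168.134.137
    path: /nfs/dev/elastick8s
```

You would create one such PV per Elasticsearch node, since each pod in the StatefulSet gets its own claim.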

However, we strongly recommend against using NFS with Elasticsearch. It usually leads to poor performance.

Hi,
Would you suggest using a NetApp Trident provisioner instead of local disk or an NFS share? What do you suggest as an on-premises provisioner?

Thanks in advance for your help.
-Mahoni