Deploy ECK with local storage

I am trying to deploy ECK. My storage is on local disk. I want to start with a 3-master-node cluster. My question is: do I need to create a PersistentVolume and PersistentVolumeClaim for each master?

This is my StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

This is my PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage-master
  local:
    path: /data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-a
          - node-b
          - node-c

This is my Elasticsearch cluster spec:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: master-node-cluster
spec:
  version: 7.7.0
  nodeSets:
  - name: default
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: local-storage
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false

The problem is that only one master node comes online. The other two stay Pending because they have no PV; only one master gets a PV and PVC. If this is the case, do I need to create a YAML file for each master specifying a different storage class?
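
This is roughly how the Pending pods and the unbound claims can be inspected (the pod name below is only an illustration of the <cluster>-es-<nodeSet>-<ordinal> naming ECK uses, so adjust it to whatever kubectl actually lists):

kubectl get pods
kubectl get pvc
# pod name is an example; the events of a Pending pod normally point at the unbound PersistentVolumeClaim
kubectl describe pod master-node-cluster-es-default-1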

The solution is to create a PV for each node. I created 3 storage classes, for master, data, and ingest, and then created a PV for each node with the relevant storage class.

I had exactly the same issue as you. I followed the tutorial at https://medium.com/@iced_burn/elasticstack-in-kubernetes-57c37306a7bd, but it did not get me all the way there. Can I create the 3 nodes in the same YAML file, or should I create each node one by one?

Could you please paste your answer?

Thanks & best regards,
Xiaoguo

First, create the storage classes.

# create local storage class for the master nodes
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage-master
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
-----
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage-data
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
------
cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage-ingest
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
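
Before moving on it is worth confirming that all three classes exist (plain kubectl, nothing ECK specific):

kubectl get storageclass
# expect local-storage-master, local-storage-data and local-storage-ingest in the output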

Then create a PV for each node. The following is for data node 1:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-pv-0
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage-data
  local:
    path: /data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker1
          - worker2
          - worker3
EOF

You need to create a PV like this for each of the data nodes. If needed, PVs can be created for the masters as well.
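
If it helps, here is a rough way to generate the per-node PVs in one go; the worker1-3 hostnames, the /data/ path, the 2Gi size and the es-data-pv-* names are placeholders for whatever your environment actually uses:

# one PV per data node, each pinned to the host that really owns the local disk
for i in 1 2 3; do
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-pv-$((i-1))
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage-data
  local:
    path: /data/
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker$i
EOF
done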

There should be no need to create different storage classes for the different node types unless that is really what you want, e.g. cheap spinning disks for warm data nodes, fast solid-state disks for hot data nodes and masters, etc.
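
As a sketch of the simpler setup (the cluster name, node counts and sizes here are made up; only the single local-storage class is taken from your original post), every node set can reference the same class and each pod still gets its own local PV:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: my-cluster   # example name
spec:
  version: 7.7.0
  nodeSets:
  - name: masters
    count: 3
    config:
      node.master: true
      node.data: false
      node.ingest: false
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: local-storage   # same class for every node set
  - name: data
    count: 3
    config:
      node.master: false
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: local-storage   # same class again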

But reading through your post I don't think you need that kind of tiering here. Have a look at this answer from @sebgl, which lists a few options for local volume provisioning that will take the burden of manually managing persistent volumes off your shoulders:
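
Just as an illustration of what those options end up looking like (this is not from the linked answer, and it assumes something like the Rancher local-path-provisioner is installed, which ships a StorageClass named local-path by default), the claim template can simply point at the dynamic class and no PersistentVolumes have to be created by hand:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: master-node-cluster
spec:
  version: 7.7.0
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: local-path   # created by the provisioner, not by hand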