Hi.
I'm exploring ECK to evaluate how well it fits our requirements.
I followed the documentation here to set up the Elasticsearch operator; the CRDs and the operator installed fine.
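For reference, the operator install was just the quickstart commands from the ECK docs, roughly like this (the 2.5.0 version below is only an example and may not match what I actually installed):

# install the ECK CRDs, then the operator itself (version number may differ)
kubectl create -f https://download.elastic.co/downloads/eck/2.5.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.5.0/operator.yaml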
I then used the following YAML to create a single-node Elasticsearch cluster:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data
  namespace: elastic-system
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
# This sets up a single-node Elasticsearch cluster.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: elastic-system
spec:
  version: 8.5.0
  nodeSets:
  - name: default
    config:
      node.roles: [ data, master ]
    podTemplate:
      metadata:
        labels:
          app: elasticsearch
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            requests:
              memory: 4Gi
              cpu: 0.5
            limits:
              memory: 4Gi
              cpu: 1
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    count: 1
    # request 10Gi of persistent data storage for pods in this topology element
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: standard
        resources:
          requests:
            storage: 10Gi
  # tls:
  #   certificate:
  #     secretName: elasticsearch-es-http-certs-public
  #   selfSignedCertificate:
  #     # add a list of SANs into the self-signed HTTP certificate
  #     subjectAltNames:
  #     - ip: 192.168.1.2
  #     - ip: 192.168.1.3
  #     - dns: elasticsearch-sample.example.com
  #   certificate:
  #     # provide your own certificate
  #     secretName: my-cert
Now when I run this command:
kubectl apply -f elasticsearch.yml
Elasticsearch fails to start with this error:
failed to obtain node locks, tried [/usr/share/elasticsearch/data]; maybe these locations are not writable or multiple nodes were started on the same data path
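Side question: is something like this the right way to inspect the Elasticsearch resource, the pod, and its logs? The pod name below is my guess based on ECK's <cluster>-es-<nodeset>-<ordinal> naming, so it may not match.

# health/phase reported by the operator
kubectl get elasticsearch -n elastic-system
# pod status and recent events
kubectl get pods -n elastic-system
kubectl describe pod elasticsearch-es-default-0 -n elastic-system
# Elasticsearch container logs (where the error above shows up)
kubectl logs elasticsearch-es-default-0 -n elastic-system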
How do I get past this error? And how do I check that the /usr/share/elasticsearch/data path is actually writable (permissions) from inside the pod? I'm new to both Kubernetes and Linux.
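My best guess at checking the data path is to exec into the pod and look at it directly, something like the command below, but I'm not sure that's the right approach (again, the pod name is guessed from ECK's naming and may be wrong):

# print the container user, show the data directory's owner/permissions, and try a test write
kubectl exec elasticsearch-es-default-0 -n elastic-system -- sh -c 'id && ls -ld /usr/share/elasticsearch/data && touch /usr/share/elasticsearch/data/write-test'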
By the way, I'm using the single-node Kubernetes cluster that comes with Docker Desktop on Windows.