Hi there. I've gotten the single-instance configuration working, and Kibana is able to connect to it:
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elastic
spec:
  version: 7.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
  http:
    tls:
      selfSignedCertificate:
        disabled: true
However, as soon as I try to add more nodes in order to scale out reads, e.g.:
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elastic
spec:
  version: 7.4.2
  nodeSets:
  - name: default
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
  http:
    tls:
      selfSignedCertificate:
        disabled: true
or, alternatively, with a dedicated data node set:

apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elastic
spec:
  version: 7.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
  - name: data
    count: 2
    config:
      node.master: false
      node.data: true
      node.ingest: false
      node.store.allow_mmap: false
  http:
    tls:
      selfSignedCertificate:
        disabled: true
all of my Elasticsearch pods start crashing or erroring out, and Kibana can no longer connect. If I start with 1 Elasticsearch node + 1 Kibana, it connects fine; when I then scale up to 3 Elasticsearch nodes, the pods stay running, but eventually Kibana shows everything in a red state and stops working. My cluster is one 8 vCPU / 32 GB node plus two 4 vCPU / 16 GB nodes, so I believe I have enough resources.
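In case it helps narrow things down, these are the commands I can run to collect status and logs (pod names here assume ECK's default <cluster>-es-<nodeSet>-<ordinal> naming, so they may differ in your setup):

kubectl get elasticsearch elastic        # health/phase as reported by the ECK operator
kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=elastic
kubectl logs elastic-es-default-0        # Elasticsearch logs from the first pod
kubectl describe pod elastic-es-default-0  # events, e.g. OOMKilled or failed probes

I can post the output of any of these if that would be useful.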
Any suggestions?