Good morning.
After installing the operator and deploying 3 master nodes + 3 multi-role Elasticsearch nodes with the following manifest:
cat <<EOF | oc apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elkcluster
spec:
  version: 8.2.3
  nodeSets:
  - name: masters
    config:
      node.roles: ["master"]
      node.store.allow_mmap: false
    count: 3
  - name: data
    count: 3
    config:
      node.roles: ["data", "ingest", "ml", "transform"]
EOF
And after deploying Kibana with this other manifest:
cat <<EOF | oc apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.2.3
  count: 2
  elasticsearchRef:
    name: elkcluster
EOF
This all seems to work well:
$ oc get all
NAME                             READY   STATUS    RESTARTS   AGE
pod/elkcluster-es-data-0         1/1     Running   0          4m35s
pod/elkcluster-es-data-1         1/1     Running   0          4m35s
pod/elkcluster-es-data-2         1/1     Running   0          4m35s
pod/elkcluster-es-masters-0      1/1     Running   0          4m35s
pod/elkcluster-es-masters-1      1/1     Running   0          4m35s
pod/elkcluster-es-masters-2      1/1     Running   0          4m35s
pod/kibana-kb-6f8fb7d65b-dsvdn   1/1     Running   0          84s
pod/kibana-kb-6f8fb7d65b-gpmvx   1/1     Running   0          84s

NAME                                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/elkcluster-es-data            ClusterIP   None                       9200/TCP   4m36s
service/elkcluster-es-http            ClusterIP   X.X.X.X                    9200/TCP   4m37s
service/elkcluster-es-internal-http   ClusterIP   X.X.X.X                    9200/TCP   4m37s
service/elkcluster-es-masters         ClusterIP   X.X.X.X                    9200/TCP   4m36s
service/elkcluster-es-transport       ClusterIP   X.X.X.X                    9300/TCP   4m37s
service/kibana-kb-http                ClusterIP   X.X.X.X                    5601/TCP   85s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kibana-kb   2/2     2            2           84s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/kibana-kb-6f8fb7d65b   2         2         2       84s

NAME                                     READY   AGE
statefulset.apps/elkcluster-es-data      3/3     4m35s
statefulset.apps/elkcluster-es-masters   3/3     4m36s
$ oc get elastic
NAME                                                    HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch.elasticsearch.k8s.elastic.co/elkcluster   green    6       8.2.3     Ready   5m10s

NAME                                  HEALTH   NODES   VERSION   AGE
kibana.kibana.k8s.elastic.co/kibana   green    2       8.2.3     117s
I then tried to deploy Beats using the 'quick start' manifest:
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-quickstart.html
But I know I deviated from that quick start, since I added extra ES nodes with different roles and one extra Kibana pod in my previous manifests.
This is the manifest I am supposed to use, but when I apply it, it doesn't work:
cat <<EOF | kubectl apply -f -
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
spec:
  type: filebeat
  version: 8.2.3
  elasticsearchRef:
    name: quickstart
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
  daemonSet:
    podTemplate:
      spec:
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true
        securityContext:
          runAsUser: 0
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
EOF
Once I apply it, the Beat's health stays red forever.
Can someone please tell me what changes I should make to get it working? I'm sorry for my lack of knowledge of Kubernetes.
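My own (unconfirmed) suspicion is that the quick start's elasticsearchRef still points at a cluster named quickstart, while my cluster is named elkcluster, so Filebeat may never be able to connect. This is the change I think I should be making, sketched as a fragment of the manifest above:

```yaml
# Suspected fix (my guess, not verified): reference the existing cluster
# "elkcluster" instead of the quick start's "quickstart".
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
spec:
  type: filebeat
  version: 8.2.3
  elasticsearchRef:
    name: elkcluster   # was "quickstart" in the quick start manifest
  # ...rest of the quick start spec unchanged...
```

Since I'm on OpenShift, I also wonder whether the hostPath mounts and runAsUser: 0 in the DaemonSet need an SCC grant (something like `oc adm policy add-scc-to-user privileged -z default -n <namespace>`, with the exact service account name depending on the setup), as the ECK OpenShift docs mention. Any confirmation would be appreciated.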
Thank you very much to all and regards.
Carlos T.