I deployed ECK 1.3.0 and Elasticsearch 7.9.0 on GKE with 2 data-ingest and 3 master pods. Below is the output when I list the pods:
kubectl -n <ns> get po
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 0 36m
elasticsearch-config-es-data-ingest-0 1/1 Running 0 33m
elasticsearch-config-es-data-ingest-1 1/1 Running 0 33m
elasticsearch-config-es-master-0 1/1 Running 0 33m
elasticsearch-config-es-master-1 1/1 Running 0 33m
elasticsearch-config-es-master-2 1/1 Running 0 33m
Below is the response when I checked the Elasticsearch node status:
[root@elasticsearch-config-es-data-ingest-0 elasticsearch]# curl -u "es-admin:es-admin" -k -X GET "https://elasticsearch-config-es-http:9200/_nodes/stats?pretty"
{
"_nodes" : {
"total" : 5,
"successful" : 5,
"failed" : 0
},
"cluster_name" : "elasticsearch-config"
I then forcibly brought down one of the Elasticsearch nodes, expecting the total node count to remain 5, with 4 successful nodes and 1 failed node:
kubectl -n <ns> get po
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 0 58m
elasticsearch-config-es-data-ingest-0 1/1 Running 0 55m
elasticsearch-config-es-data-ingest-1 0/1 ErrImagePull 0 55m
elasticsearch-config-es-master-0 1/1 Running 0 55m
elasticsearch-config-es-master-1 1/1 Running 0 55m
elasticsearch-config-es-master-2 1/1 Running 0 55m
But from inside a pod I see the status below:
curl -u "un:pwd" -k -X GET "https://elasticsearch-config-es-http:9200/_nodes/stats?pretty"
{
"_nodes" : {
"total" : 4,
"successful" : 4,
"failed" : 0
},
"cluster_name" : "elasticsearch-config"
Is this the expected behaviour?