Hi,
I am trying to set up an Elasticsearch multi-node cluster with the ECK operator, and the pods keep going into CrashLoopBackOff with the following readiness probe error:
readiness probe failed: {"timestamp": "2021-03-17T18:33:03+05:30", "message": "readiness probe failed", "curl_rc": "7"}
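For reference, curl exit code 7 means the probe could not connect to Elasticsearch on the pod at all. This is roughly how I am inspecting the failing pods (namespace and pod name are from my setup, matching the operator log below):

kubectl -n elastic get pods
kubectl -n elastic describe pod mysample-es-data-2
kubectl -n elastic logs mysample-es-data-2 -c elasticsearch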
Checking the ECK operator logs shows the following:
kubectl -n elastic logs -f pod/elastic-operator-0
{"log.level":"error","@timestamp":"2021-03-17T18:31:46.622+0530","log.logger":"annotation","message":"failed to update pod annotation","service.version":"1.4.0+4aff0b98","service.type":"eck","ecs.version":"1.4.0","annotation":"update.k8s.elastic.co/timestamp","namespace":"elastic","pod_name":"mysample-es-data-2","error":"Pod "mysample-es-data-2" is invalid: spec: Forbidden: pod updates may not change fields other than spec.containers[*].image
, spec.initContainers[*].image
, spec.activeDeadlineSeconds
or spec.tolerations
(only additions to existing tolerations)\n core.PodSpec{\n \t... // 10 identical fields\n \tAutomountServiceAccountToken: &false,\n \tNodeName: "sparrow-dev-operator-dev-1-nodes-1-8288959",\n \tSecurityContext: &core.PodSecurityContext{\n \t\t... // 11 identical fields\n \t\tFSGroupChangePolicy: nil,\n \t\tSysctls: nil,\n- \t\tSeccompProfile: nil,\n+ \t\tSeccompProfile: &core.SeccompProfile{Type: "RuntimeDefault"},\n \t},\n \tImagePullSecrets: nil,\n \tHostname: "mysample-es-data-2",\n \t... // 15 identical fields\n }\n","error.stack_trace":"github.com/elastic/cloud-on-k8s/pkg/controller/common/annotation.MarkPodAsUpdated\n\t/go/src/github.com/elastic/cloud-on-k8s/pkg/controller/common/annotation/pod.go:73\ngithub.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/certificates/transport.reconcileNodeSetTransportCertificatesSecrets\n\t/go/src/github.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/certificates/transport/reconcile.go:173\ngithub.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/certificates/transport.ReconcileTransportCertificatesSecrets\n\t/go/src/github.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/certificates/transport/reconcile.go:61\ngithub.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/certificates.Reconcile\n\t/go/src/github.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/certificates/reconcile.go:102\ngithub.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/driver.(*defaultDriver).Reconcile\n\t/go/src/github.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/driver/driver.go:134\ngithub.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch.(*ReconcileElasticsearch).internalReconcile\n\t/go/src/github.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/elasticsearch_controller.go:290\ngithub.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch.(*ReconcileElasticsearch).Reconcile\n\t/go/src/github.com/elastic/cloud-on-k8s/pkg/controller/elasticsearch/elasticsearch_controller.go:199\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:244\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.6.3/pkg/internal/controller/controller.go:197\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.14/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.14/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.14/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.18.14/pkg/util/wait/wait.go:90"}
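Judging from the diff in that error, something on the cluster seems to be mutating the pods after the operator creates them, adding seccompProfile: RuntimeDefault to the pod securityContext, which the operator is then forbidden from reverting. I am not sure what is doing this; as a sketch, this is how I am checking for an admission webhook or pod security policy that could be responsible (resource names obviously depend on the cluster):

kubectl get mutatingwebhookconfigurations
kubectl get psp
kubectl -n elastic get pod mysample-es-data-2 -o jsonpath='{.spec.securityContext.seccompProfile}'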
Sample YAML manifest:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: mysample
spec:
  version: 7.11.1
  nodeSets:
  - name: masters
    count: 3
    config:
      node.roles: ["master"]
      node.store.allow_mmap: false
  - name: data
    count: 5
    config:
      node.roles: ["data"]
    volumeClaimTemplates:
    - metadata:
        name: quickstart-pvc
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: local-hdd-lv
    podTemplate:
      spec:
        securityContext:
          runAsUser: 1000
        containers:
        - name: elasticsearch
          resources:
            requests:
              cpu: 1
              memory: 2Gi
            limits:
              cpu: 1
              memory: 2Gi
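I am applying the manifest and watching the rollout like this (the file name is just what I use locally):

kubectl -n elastic apply -f mysample-es.yaml
kubectl -n elastic get elasticsearch mysample
kubectl -n elastic get pods -l elasticsearch.k8s.elastic.co/cluster-name=mysample -w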
Note: the same setup works when the nodeSet count is 1 (config below):
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.11.1
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: local-hdd-lv
    podTemplate:
      spec:
        securityContext:
          runAsUser: 1000
        containers:
        - name: elasticsearch
          resources:
            requests:
              cpu: 4
              memory: 8Gi
            limits:
              cpu: 4
              memory: 8Gi
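To compare, the single-node pod can be checked for the same seccompProfile mutation (ECK pods are named <cluster>-es-<nodeSet>-<ordinal>, and I am assuming the same namespace here):

kubectl -n elastic get pod quickstart-es-default-0 -o jsonpath='{.spec.securityContext.seccompProfile}'

Any pointers on what could be injecting the seccompProfile, or how to get the operator past this, would be much appreciated.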