Elastic operator pod went down and came back up automatically

I deployed an Elasticsearch cluster in GCP. Without making any changes, the operator pod goes down and after some time the pods get restarted. The Elastic operator version is 1.1.0. Below are the logs of the elastic operator pod.

{"log.level":"error","@timestamp":"2020-12-16T04:18:29.237Z","log.logger":"driver","message":"Could not update remote clusters in Elasticsearch settings","service.version":"1.1.0-29e7447f","service.type":"eck","ecs.version":"1.4.0","namespace":"ns","es_name":"es","error":"Operation cannot be fulfilled on \"es\": the object has been modified; please apply your changes to the latest version and try again","error.stack_trace":"*zapLogger). 

Does anyone know why this is happening?

Hey @knagasri, thanks for your question.

How are you checking that the operator Pod restarted? Can you provide logs from before the crash? You can use kubectl logs -n elastic-system elastic-operator-0 --previous to do that.

The log you've pasted indicates a conflict while updating a resource: by the time the operator tried to write, the resource already had a newer resourceVersion. It's expected to see this log appear sporadically; the operator simply retries on its next reconciliation.
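To illustrate why that error is benign, here is a minimal sketch (in Python, not ECK's actual Go code) of the optimistic-concurrency scheme behind the "object has been modified" message: every write must carry the resourceVersion it read, a stale write is refused, and the client re-reads and retries. The `ApiServer` class and its fields are invented for the sketch.

```python
# Sketch of Kubernetes-style optimistic concurrency (illustrative only).

class Conflict(Exception):
    pass

class ApiServer:
    """Toy stand-in for the API server's versioned object store."""
    def __init__(self):
        self.resource_version = 1
        self.spec = {}

    def get(self):
        return self.resource_version, dict(self.spec)

    def update(self, resource_version, spec):
        if resource_version != self.resource_version:
            # Same situation as the operator's log line: the object was
            # modified since it was read, so the stale write is refused.
            raise Conflict("the object has been modified; please apply "
                           "your changes to the latest version and try again")
        self.resource_version += 1
        self.spec = spec

def update_with_retry(server, mutate, attempts=3):
    """Re-read and retry on conflict, as a controller's next reconcile would."""
    for _ in range(attempts):
        version, spec = server.get()
        mutate(spec)
        try:
            server.update(version, spec)
            return True
        except Conflict:
            continue
    return False

server = ApiServer()
stale_version, stale_spec = server.get()         # operator reads version 1
server.update(server.get()[0], {"replicas": 3})  # someone else writes first
try:
    server.update(stale_version, stale_spec)     # stale write is rejected
except Conflict as e:
    print("conflict:", e)
print(update_with_retry(server, lambda s: s.update(remote=[])))  # True
```

The retry succeeds because it starts from a fresh read, which is why a sporadic conflict log is harmless as long as the next reconciliation goes through.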

kubectl --kubeconfig=e logs elastic-operator-0 -n elastic-system --previous

Error from server (BadRequest): previous terminated container "manager" in pod "elastic-operator-0" not found

It seems there was no previous instance of the operator container, and the Operation cannot be fulfilled on... log line isn't an indication of a restart. Is it possible that the Pod didn't restart at all?

kubectl describe pod/elastic-operator-0 -n elastic-system would help to see if the OOM killer is the culprit: look for Last State: Terminated with Reason: OOMKilled on the container.
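The same information kubectl describe surfaces is also available as JSON via kubectl get pod elastic-operator-0 -n elastic-system -o json, which is easier to check programmatically. A small sketch below: the field path (status.containerStatuses[].lastState.terminated.reason) is the real Kubernetes Pod API shape, but the sample document itself is illustrative, not output from this cluster.

```python
import json

# Illustrative pod-status JSON (the field layout matches the Pod API;
# the values are made up for this example).
sample = json.loads("""
{
  "status": {
    "containerStatuses": [
      {
        "name": "manager",
        "restartCount": 2,
        "lastState": {
          "terminated": {"reason": "OOMKilled", "exitCode": 137}
        }
      }
    ]
  }
}
""")

def oom_killed_containers(pod):
    """Return names of containers whose last termination was an OOM kill."""
    hits = []
    for cs in pod.get("status", {}).get("containerStatuses", []):
        terminated = cs.get("lastState", {}).get("terminated") or {}
        if terminated.get("reason") == "OOMKilled":
            hits.append(cs["name"])
    return hits

print(oom_killed_containers(sample))  # ['manager']
```

If the list is non-empty (or describe shows Reason: OOMKilled), raising the operator's memory limit in its StatefulSet spec is the usual fix.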