Retention lease background sync failed - help understanding this log

Hi Guys,

I'm new to this technology and have been trying out self-managed Elasticsearch for an application in our organization.

I've been facing this "retention lease background sync failed" issue for quite some time.
I've tried to understand the log, but I couldn't get any insight from it.

Could you help me understand what the underlying issue is and how to read this log?

As this was in the testing phase, I deleted the entire ES resource and re-deployed from scratch because of this issue. I know this doesn't solve the problem, but I did it to check whether it was caused by uploading a large amount of data, or to find the real cause.

Note: I've deployed Elasticsearch and Kibana self-managed on GKE (GCP). The log I uploaded is from the ES pod log in GKE.
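For reference, I'm pulling the log from the Elasticsearch pod like this (the namespace and pod name here are placeholders for our setup):

```
# Tail the most recent log lines from the Elasticsearch pod
kubectl logs <es-pod-name> -n <namespace> --tail=200

# Or follow the log live while reproducing the issue
kubectl logs -f <es-pod-name> -n <namespace>
```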

Which version of Elasticsearch are you using?

Elasticsearch version: 8.12.0
I followed the official documentation for self-managed Elasticsearch.

FYI: During the process we faced a health check issue with the GCP GKE Ingress load balancer, so I modified the install definition:

Allowing anonymous access resolved that issue. I understand this is a potential security risk, but we are only using it for testing purposes at the moment, so we ignored the warning for now.

Using the anonymous user only resolved the health check issue with the GCP load balancer.
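For reference, the change was along these lines in elasticsearch.yml (the username and role here are just examples, not our exact values):

```yaml
# Map unauthenticated requests (e.g. the LB health check) to an anonymous
# user instead of rejecting them with a 401
xpack.security.authc.anonymous.username: anonymous_user
xpack.security.authc.anonymous.roles: monitoring_user
xpack.security.authc.anonymous.authz_exception: true
```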

@Christian_Dahlqvist Do you have any idea about this log?

The first entry says it is not able to elect a master. What is the size and configuration of the cluster?

What is the output of the cat nodes API?
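Something like this, with the host and credentials adjusted for your deployment (the values below are placeholders):

```
# List the nodes that have joined the cluster
curl -k -u elastic "https://<es-host>:9200/_cat/nodes?v"

# Overall cluster health, including the number of nodes
curl -k -u elastic "https://<es-host>:9200/_cluster/health?pretty"
```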

@Christian_Dahlqvist

I've just followed the exact configuration given in the documentation and made no changes to the size or memory.

From the YAML configuration deployed in GKE:
(screenshot of the YAML configuration)

How many nodes does the cluster have?

Also, please do not post images of text, as they can be very hard to read and cannot be searched.

I guess one. That is the count given in the official documentation, and I used the same.

@Christian_Dahlqvist Any updates regarding this?

No, I do not know why this would happen with just one node, so I will need to leave it for someone else.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.