Snapshot error: Unable to find client with name [default]

I'm having problems registering an Azure snapshot repository in ECK.
There is an Azure storage account with an elasticsearch-snapshots container.

The account name and the key are stored in Kubernetes as a secret named secrets-store-storageaccount. The secret has 2 fields:

  • azure.client.default.account
  • azure.client.default.key

The fields hold the corresponding values.
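For reference, such a secret can be sketched as a manifest like the following (the placeholder values are illustrative, not the real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secrets-store-storageaccount
stringData:
  # keys must follow the azure.client.<client-name>.* naming scheme
  azure.client.default.account: <storage-account-name>
  azure.client.default.key: <storage-account-key>
```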

The secret secrets-store-storageaccount is injected into Elasticsearch like this:

  version: 7.11.1
  secureSettings:
  - secretName: secrets-store-storageaccount
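The snippet above is abridged; a fuller ECK manifest referencing the secret under spec.secureSettings would look roughly like this (the metadata and nodeSet names are assumptions inferred from the pod name in the operator logs below):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.11.1
  # entries from this secret are loaded into the Elasticsearch keystore
  secureSettings:
  - secretName: secrets-store-storageaccount
  nodeSets:
  - name: elasticsearch
    count: 1
```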

However, when I send this request:

  PUT /_snapshot/heartbeat
  {
    "type": "azure",
    "settings": {
      "container": "elasticsearch-snapshots",
      "chunk_size": "32MB",
      "compress": true
    }
  }

The response contains errors:

  "error" : {
    "root_cause" : [
        "type" : "repository_verification_exception",
        "reason" : "[heartbeat] path  is not accessible on master node"
    "type" : "repository_verification_exception",
    "reason" : "[heartbeat] path  is not accessible on master node",
    "caused_by" : {
      "type" : "i_o_exception",
      "reason" : "Unable to write blob tests-LhO5OGcHS1CsbSLus2VG2g/master.dat",
      "caused_by" : {
        "type" : "settings_exception",
        "reason" : "Unable to find client with name [default]"
  "status" : 500

What is wrong?

It looks like the operator cannot restart Elasticsearch.
Here is the operator log after the changes in the manifest:

{:"driver","message":"Cannot restart some nodes for upgrade at this time","service.version":"1.4.0+4aff0b98","service.type":"eck","ecs.version":"1.4.0","namespace":"healthcheck","es_name":"elasticsearch","failed_predicates":{"if_yellow_only_restart_upgrading_nodes_with_unassigned_replicas":["elasticsearch-es-elasticsearch-0"]}}
{:"elasticsearch-controller","message":"Ending reconciliation run","service.version":"1.4.0+4aff0b98","service.type":"eck","ecs.version":"1.4.0","iteration":5509,"namespace":"healthcheck","es_name":"elasticsearch","took":0.371634186}
{:"elasticsearch-controller","message":"Starting reconciliation run","service.version":"1.4.0+4aff0b98","service.type":"eck","ecs.version":"1.4.0","iteration":5510,"namespace":"healthcheck","es_name":"elasticsearch"}
{:"zen2","message":"Ensuring no voting exclusions are set","service.version":"1.4.0+4aff0b98","service.type":"eck","ecs.version":"1.4.0","namespace":"healthcheck","es_name":"elasticsearch"}
{:"migrate-data","message":"Setting routing allocation excludes","service.version":"1.4.0+4aff0b98","service.type":"eck","ecs.version":"1.4.0","namespace":"healthcheck","es_name":"elasticsearch","value":"none_excluded"}

The issue with "Unable to find client with name [default]" was that the Elasticsearch instance was not restarted by the operator after the keystore update.
And the operator could not restart the instance since it was the only one.
It looks like the ECK update policy is configured to keep at least one pod available, which is a problem for installations with a single Elasticsearch pod.
After adding a second pod to the Elasticsearch cluster, everything works fine.
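For reference, the change that resolved it can be sketched as bumping the nodeSets count in the Elasticsearch manifest (the nodeSet name is assumed from the pod name in the logs above):

```yaml
  nodeSets:
  - name: elasticsearch
    count: 2  # with two nodes, the operator can restart them one at a time
```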


Keystore resources are watched by the operator. Any change to the content of the underlying Secret should trigger a restart of the cluster, including the only node of a single-node cluster.

It is likely that another reason was preventing the node from being restarted. You should be able to find more information about that reason in the operator logs.
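Since the operator emits structured JSON logs (as in the excerpt above), one way to spot blocked restarts is to filter log lines for the failed_predicates field. A minimal sketch, using a hypothetical blocked_restarts helper and the field names that appear in the logs above:

```python
import json

def blocked_restarts(log_lines):
    """Collect, per failed predicate, the nodes the ECK operator refused to restart."""
    blocked = {}
    for line in log_lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines
        for predicate, nodes in entry.get("failed_predicates", {}).items():
            blocked.setdefault(predicate, set()).update(nodes)
    return blocked

# Sample line modeled on the operator log excerpt above
sample = [
    '{"message":"Cannot restart some nodes for upgrade at this time",'
    '"failed_predicates":{"if_yellow_only_restart_upgrading_nodes_with_unassigned_replicas":'
    '["elasticsearch-es-elasticsearch-0"]}}',
]
print(blocked_restarts(sample))
```

In this thread's case, the failed predicate points at the yellow cluster health of the single-node cluster, which is why the restart never happened.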
