License error

Hi guys,

I'm receiving this error on my ECK:
Could not update cluster license: failed to revert to basic: 503 Service Unavailable:

I installed it using the basic configuration from the Quick start page.

When I try to get the license from Kibana it works and the license is valid. Any guess?

This is just a warning. What it means is that the operator tried to make an API call to Elasticsearch (in this case to set the license) and Elasticsearch responded with an HTTP 503.

While the error message is just a warning, it still means your Elasticsearch cluster was unavailable at that point in time. Can you check, e.g. with kubectl get elasticsearch, whether your cluster is available now? You said you were accessing the license UI in Kibana and that worked, which indicates that Elasticsearch is available (again); if so, this could have been a transient issue.
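For example (a sketch, assuming the default all-in-one install from the Quick start, where the operator runs as the elastic-operator StatefulSet in the elastic-system namespace):

# does the operator consider the cluster healthy?
kubectl get elasticsearch

# tail the operator logs to see whether the license call is still failing
kubectl -n elastic-system logs statefulset/elastic-operator --tail=100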

Hey @pebrc

Actually my Elasticsearch is red, which is probably the cause of that.

kubectl get elasticsearch -n logging
NAME                 HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch-logs   red      3       7.6.0     Ready   3d


kubectl get pods -n logging
NAME                                         READY   STATUS    RESTARTS   AGE
elasticsearch-logs-es-elasticsearch-logs-0   1/1     Running   0          3d20h
elasticsearch-logs-es-elasticsearch-logs-1   1/1     Running   0          3d20h
elasticsearch-logs-es-elasticsearch-logs-2   1/1     Running   0          3d20h
kibana-logs-kb-7fb98c6685-8fvlq              1/1     Running   0          3d2h

And here is the describe output:

Events:
  Type     Reason      Age                     From                      Message
  ----     ------      ----                    ----                      -------
  Warning  Unexpected  99s (x1418 over 2d18h)  elasticsearch-controller  Could not update cluster license: failed to revert to basic: 503 Service Unavailable:

So that explains it. The license message is just a symptom of your cluster being unavailable. The next step is to find out why your cluster is red.

We have a few troubleshooting tips here: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-troubleshooting.html

It would be interesting, for example, to look at the Elasticsearch logs to find out what happened.
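For example, using the pod and namespace names from your output above (adjust to whichever node you want to inspect):

# Elasticsearch logs from one of the nodes
kubectl logs -n logging elasticsearch-logs-es-elasticsearch-logs-0 --tail=200

# and the events recorded on the pod itself
kubectl describe pod -n logging elasticsearch-logs-es-elasticsearch-logs-0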

And here are my Kibana logs:

{"type":"log","@timestamp":"2020-02-24T13:12:06Z","tags":["error","plugins","taskManager","taskManager"],"pid":6,"message":"Failed to poll for work: Authorization Exception :: {\"path\":\"/.kibana_task_manager/_update_by_query\",\"query\":{\"ignore_unavailable\":true,\"refresh\":true,\"max_docs\":10,\"conflicts\":\"proceed\"},\"body\":\"{\\\"query\\\":{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"type\\\":\\\"task\\\"}},{\\\"bool\\\":{\\\"must\\\":[{\\\"bool\\\":{\\\"should\\\":[{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.status\\\":\\\"idle\\\"}},{\\\"range\\\":{\\\"task.runAt\\\":{\\\"lte\\\":\\\"now\\\"}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"bool\\\":{\\\"should\\\":[{\\\"term\\\":{\\\"task.status\\\":\\\"running\\\"}},{\\\"term\\\":{\\\"task.status\\\":\\\"claiming\\\"}}]}},{\\\"range\\\":{\\\"task.retryAt\\\":{\\\"lte\\\":\\\"now\\\"}}}]}}]}},{\\\"bool\\\":{\\\"should\\\":[{\\\"exists\\\":{\\\"field\\\":\\\"task.schedule\\\"}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.server-log\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.slack\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.email\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.index\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.pagerduty\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"actions:.webhook\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":1}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"alerting:siem.signals\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":3}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"vis_telemetry\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":3}}}]}},{\\\"bool\\\":{\\\"must\\\":[{\\\"term\\\":{\\\"task.taskType\\\":\\\"lens_telemetry\\\"}},{\\\"range\\\":{\\\"task.attempts\\\":{\\\"lt\\\":3}}}]}}]}}]}}]}},\\\"sort\\\":{\\\"_script\\\":{\\\"type\\\":\\\"number\\\",\\\"order\\\":\\\"asc\\\",\\\"script\\\":{\\\"lang\\\":\\\"painless\\\",\\\"source\\\":\\\"\\\\nif (doc['task.retryAt'].size()!=0) {\\\\n return doc['task.retryAt'].value.toInstant().toEpochMilli();\\\\n}\\\\nif (doc['task.runAt'].size()!=0) {\\\\n return doc['task.runAt'].value.toInstant().toEpochMilli();\\\\n}\\\\n \\\"}}},\\\"seq_no_primary_term\\\":true,\\\"script\\\":{\\\"source\\\":\\\"ctx._source.task.ownerId=params.ownerId; ctx._source.task.status=params.status; 
ctx._source.task.retryAt=params.retryAt;\\\",\\\"lang\\\":\\\"painless\\\",\\\"params\\\":{\\\"ownerId\\\":\\\"kibana:4770de5f-5e4b-49af-90d3-ada7015b5d30\\\",\\\"status\\\":\\\"claiming\\\",\\\"retryAt\\\":\\\"2020-02-24T13:12:36.013Z\\\"}}}\",\"statusCode\":403,\"response\":\"{\\\"took\\\":1,\\\"timed_out\\\":false,\\\"total\\\":2,\\\"updated\\\":0,\\\"deleted\\\":0,\\\"batches\\\":1,\\\"version_conflicts\\\":0,\\\"noops\\\":0,\\\"retries\\\":{\\\"bulk\\\":0,\\\"search\\\":0},\\\"throttled_millis\\\":0,\\\"requests_per_second\\\":-1.0,\\\"throttled_until_millis\\\":0,\\\"failures\\\":[{\\\"index\\\":\\\".kibana_task_manager_1\\\",\\\"type\\\":\\\"_doc\\\",\\\"id\\\":\\\"task:oss_telemetry-vis_telemetry\\\",\\\"cause\\\":{\\\"type\\\":\\\"cluster_block_exception\\\",\\\"reason\\\":\\\"index [.kibana_task_manager_1] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\\\"},\\\"status\\\":403},{\\\"index\\\":\\\".kibana_task_manager_1\\\",\\\"type\\\":\\\"_doc\\\",\\\"id\\\":\\\"task:Lens-lens_telemetry\\\",\\\"cause\\\":{\\\"type\\\":\\\"cluster_block_exception\\\",\\\"reason\\\":\\\"index [.kibana_task_manager_1] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\\\"},\\\"status\\\":403}]}\"}"}

And if I check the license from the Kibana UI it works:

{
  "license" : {
    "status" : "active",
    "uid" : "xxx",
    "type" : "basic",
    "issue_date" : "2020-02-20T16:46:08.569Z",
    "issue_date_in_millis" : 1582217168569,
    "max_nodes" : 1000,
    "issued_to" : "elasticsearch-logs",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}

FORBIDDEN/12/index read-only: are your Elasticsearch nodes running out of disk space by any chance?

I am thinking of https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html, where Elasticsearch sets your indices to read-only once you exceed 95% disk usage (the default flood stage watermark, which can be configured).
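You can check this from Kibana Dev Tools. A sketch (the second call simply removes the read-only block from all indices; recent 7.x versions release the block automatically once disk usage falls below the watermark again, so only run it if the block persists after you have freed up space):

GET _cat/allocation?v

PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}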
