Elasticsearch health turns red when Kibana is started and indices are in red state

Hi,
We are setting up a cluster with 1 master node and 2 data nodes.

While setting up the master node, Elasticsearch installs fine and the cluster health is green. But when Kibana is set up and started, the Elasticsearch health turns red with the error below:

Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Search rejected due to missing shards [[.kibana_task_manager_1][0]]. Consider using allow_partial_search_results setting to bypass this error.

//

[root@Elastic-Master ~]# curl -X GET "IP:9200/_cluster/allocation/explain?pretty"
{
  "index" : ".ds-ilm-history-5-2021.04.20-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2021-04-20T02:20:59.130Z",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes"
}
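
To see every affected shard and why it is unassigned at a glance, the _cat/shards API can also be used (a sketch, using the same IP placeholder as above; unassigned shards show state UNASSIGNED):

curl -X GET "IP:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason"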

//

Cluster health from _cat/health shows 1 node total, 0 data nodes, and 11 unassigned shards:

1618885784 02:29:44 elk-prod01 red 1 0 0 0 0 0 11 0 - 0.0%

In the Elasticsearch logs we see:

        org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
            at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:601) [elasticsearch-7.11.1.jar:7.11.1]
            at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:332) [elasticsearch-7.11.1.jar:7.11.1]
            at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:636) [elasticsearch-7.11.1.jar:7.11.1]
            at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:415) [elasticsearch-7.11.1.jar:7.11.1]
            at org.elasticsearch.action.search.AbstractSearchAsyncAction.lambda$performPhaseOnShard$0(AbstractSearchAsyncAction.java:240) [elasticsearch-7.11.1.jar:7.11.1]
            at org.elasticsearch.action.search.AbstractSearchAsyncAction$2.doRun(AbstractSearchAsyncAction.java:308) [elasticsearch-7.11.1.jar:7.11.1]
            at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:732) [elasticsearch-7.11.1.jar:7.11.1]
            at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) [elasticsearch-7.11.1.jar:7.11.1]
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
            at java.lang.Thread.run(Thread.java:832) [?:?]
    Caused by: org.elasticsearch.action.NoShardAvailableActionException

    Caused by: org.elasticsearch.action.search.SearchPhaseExecutionException: Search rejected due to missing shards [[.kibana_task_manager_1][0]]. Consider using `allow_partial_search_results` setting to bypass this error.
            at org.elasticsearch.action.search.AbstractSearchAsyncAction.run(AbstractSearchAsyncAction.java:208) ~[elasticsearch-7.11.1.jar:7.11.1]

//

 curl -i http://localhost:5601
HTTP/1.1 503 Service Unavailable
kbn-name: Elastic-Master
kbn-license-sig: 94b40c7c867954e4f0eca076276b3e2a60c6985deb6f33c1ecb13f5b051cc715
content-type: application/json; charset=utf-8
cache-control: private, no-cache, no-store, must-revalidate
content-length: 112
Date: Tue, 20 Apr 2021 02:36:14 GMT
Connection: keep-alive
Keep-Alive: timeout=120

{"statusCode":503,"error":"Service Unavailable","message":"all shards failed: search_phase_execution_exception"}

Can you please suggest and help here?

When Kibana starts, it creates a number of special indices to manage its internal state (for example, configurations of visualizations, dashboards, and so on).

It seems like your Elasticsearch cluster is configured in a way that the shards of these indices can't be allocated to any node. This renders Kibana unusable (and, since there's now an index that can't be fully allocated, it also sets the Elasticsearch cluster status to red).
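
As a quick check, you can list Kibana's internal indices and their health (a sketch, assuming Elasticsearch is reachable on localhost:9200; adjust the host to your setup):

curl -X GET "localhost:9200/_cat/indices/.kibana*?v&h=index,health,status,pri,rep"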

Please make sure the Elasticsearch configuration allows Kibana to create indices and allocate their shards on a node. For example, if shard allocation filtering is enabled and excludes all of the data nodes, this problem can happen: Cluster-level shard allocation and routing settings | Elasticsearch Guide [7.12] | Elastic
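
To rule that out, you can inspect the cluster-level allocation settings and, if an exclusion filter is set, clear it (a sketch; exclude._name is just one of the possible filters, and setting it to null resets it):

curl -X GET "localhost:9200/_cluster/settings?pretty"

curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.exclude._name": null
  }
}
'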


Thank you for the update.

The current configuration in /etc/elasticsearch/elasticsearch.yml is

cluster.name: elk-prod01
node.name: elastic-master-01
path.data: /elasticdata/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: <Private local IP>
discovery.zen.ping.unicast.hosts: ["<Private local IP>"]
node.roles: [ master, ingest ]
xpack.ml.enabled: true
discovery.zen.minimum_master_nodes: 2
thread_pool.write.queue_size: 30000
indices.memory.index_buffer_size: 30%
indices.fielddata.cache.size: 30%
cluster.initial_master_nodes: ["<Private local IP>"]

and /etc/kibana/kibana.yml is

elasticsearch.requestTimeout: 90000
server.port: 5601
server.host: "0.0.0.0"
server.name: "elastic-master-01"
elasticsearch.hosts: ["<Private local IP>"]

Do you think some of these settings should be altered to get the Kibana console working without the 503 error?

I tried deleting the index after stopping Kibana, and the Elasticsearch health turns green during that time.
But as soon as Kibana is started again, the cluster immediately goes back to red.
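
For reference, deleting such an index looks roughly like this (a sketch, using the index name from the error above; only done while Kibana is stopped):

curl -X DELETE "IP:9200/.kibana_task_manager_1"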

Is the above configuration correct for a master node to start with, considering we have to configure the 2 data nodes after this? Although the Kibana service starts on this master, the 503 error remains.

Please suggest.

Are you saying that at the time Kibana joins the cluster there's just a single node with only the master and ingest roles? If that's the case, then there's your problem: the shards of an index can only be allocated if there is a node with the data role in the cluster.
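
You can verify which roles each node currently has with the _cat/nodes API (a sketch, assuming localhost:9200; in 7.x the node.role column shows abbreviations such as m for master, d for data, i for ingest):

curl -X GET "localhost:9200/_cat/nodes?v&h=name,node.role"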


Thank you, Joe. The node.roles setting was the problem, as you rightly pointed out. The master node did not have the data role, and the separate data nodes were not yet configured, so Kibana had no node to allocate its indices to.

If we comment out node.roles, all the default roles are assigned, and that also works.

I then created the separate data nodes and restarted Elasticsearch and Kibana on the master node with the master and ingest roles. The folders under the data directory and the old indices also had to be cleaned up before restarting the services.
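
For anyone hitting the same issue, the relevant line in /etc/elasticsearch/elasticsearch.yml looks like this (a sketch; pick the variant matching your topology):

# While this is the only node in the cluster, it must also hold data:
node.roles: [ master, data, ingest ]

# Once separate data nodes have joined, the master can drop the data role:
node.roles: [ master, ingest ]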

Thanks for your help again.
