Elasticsearch 6.2.2: cluster health decreased after X-Pack installation

As a new ELK stack user, I installed the X-Pack trial on Elasticsearch and Kibana, running the stack on Windows 10. After the installation, new indices were created in Elasticsearch and the cluster health dropped from green to yellow. As far as I have read on this forum and on Google, this is caused by replica shards, and since I have a single node they cannot be assigned. However, I do not know how to remedy the issue and turn the cluster health back to green. Could you please assist me with this?

The output of GET /_cluster/health?pretty is as follows:

{
  "cluster_name": "elasticsearch",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 79,
  "active_shards": 79,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 65,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 54.861111111111114
} 

The output of GET _cat/indices?v is as follows:

health status index                           uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana                         j2vMz0bYQVK4reGbYBOdcA   1   0         23            7       78kb           78kb
green  open   .watches                        JbuSKAaMRv6MmDz2KQZHAw   1   0          6            0     76.3kb         76.3kb
green  open   .watcher-history-7-2018.04.02   aBINZfTBSuSV8Mnc4Iw69g   1   0       2826            0      3.2mb          3.2mb
yellow open   sql_session                     pYrP2lWhTzGWB-fFnMty_A   5   1         20            9    125.9kb        125.9kb
green  open   .monitoring-es-6-2018.04.02     QxYMpPQoTi2OhFGB4YDFbg   1   0      53719         1242     30.2mb         30.2mb
yellow open   logstash-2015.05.18             x3paZqEfQCe-8mXpF3T0QA   5   1       4631            0     22.5mb         22.5mb
yellow open   shakespeare                     ebe0VjMMSlGer0FuYgaOew   5   1     111396            0     21.5mb         21.5mb
green  open   .watcher-history-7-2018.04.03   Yx2b4YRYQa6GqtbDNz8Zng   1   0       5049            0      5.9mb          5.9mb
green  open   .monitoring-kibana-6-2018.04.02 Wro6mUOhRliSZHQQeXbwJQ   1   0        708            0    261.8kb        261.8kb
yellow open   sql_processes_training          vruZLRyvRlqBIJAbgtT6Qg   5   1          2            0     17.8kb         17.8kb
green  open   .triggered_watches              8GtWKxjdTn2zMwFcbtbaiQ   1   0          6          102    274.4kb        274.4kb
green  open   .monitoring-kibana-6-2018.04.04 lsKF-ubwTTuSiZz0mWNNLg   1   0        528            0    219.2kb        219.2kb
yellow open   sql_processes_ykb               rWkiLp1BQWWmc-jVPIz6VA   5   1          2            0     12.4kb         12.4kb
yellow open   logstash-2018.03.21             QETk3GO3SNeiqua2i144pQ   5   1          5            0     21.9kb         21.9kb
yellow open   bank                            j9D6ADrMRx-pNJHpM8KVLg   5   1       1000            0    475.1kb        475.1kb
yellow open   sql_tables_ykb                  6tmziUk-QXWKrDdCtrg2mw   5   1          5            0     50.1kb         50.1kb
green  open   .watcher-history-7-2018.04.04   KLmX5sxOTUKecM4ZyRgOAg   1   0        734            0    957.3kb        957.3kb
green  open   .monitoring-es-6-2018.04.04     Tsr5P-LPRk2u24RxaAlhzg   1   0      17513         1152     10.9mb         10.9mb
green  open   .monitoring-es-6-2018.04.03     4UppzqPERVi2hEBAi0bl_A   1   0     107239          846     60.6mb         60.6mb
green  open   .security-6                     RuPJzrErTdqAqiT5PKSpSQ   1   0          8            0     31.7kb         31.7kb
yellow open   sql_tables_training             r8UcunkKR6q1eQb8iVzIGA   5   1          7            0     55.2kb         55.2kb
yellow open   sql_queues_training             O7iXdlOlQaS-QQHK_nXcRw   5   1          4            1       42kb           42kb
yellow open   logstash-2015.05.19             aWzygIVZTzCs0m4mKUXaqA   5   1       4624            0     23.4mb         23.4mb
yellow open   sql_queues_ykb                  c99-wfuYRxWQgdx1-gxObQ   5   1          1            0     13.1kb         13.1kb
yellow open   logstash-2015.05.20             NxA7gJCOSemkhdqFYUjLFQ   5   1       4750            0     22.2mb         22.2mb
green  open   .monitoring-kibana-6-2018.04.03 88gULRCUT2SFNMOizylfUw   1   0       1166            0    422.5kb        422.5kb
green  open   .monitoring-alerts-6            Zcynu28fRmiXxAnvmTD4Fw   1   0          3            0       24kb

Hi Ongun,

Since you're running on a single node, you'll want to set all your indices to have 0 replicas. For example, if you look at logstash-2018.03.21 you can see it has replicas (the rep column) set to 1:

yellow open logstash-2018.03.21 QETk3GO3SNeiqua2i144pQ 5 1 5 0 21.9kb 21.9kb
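
If you want to double-check that it is only the replica copies that are unassigned, a _cat/shards call along these lines (the column selection here is just one way to slice it) lists each shard with its primary/replica flag and state:

GET http://{{url}}:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason

Every UNASSIGNED row should show prirep as r, i.e. the yellow status comes purely from replica shards that have no second node to go to.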

To dynamically update the number of replicas on an existing index, you can make an API call similar to the one below:

PUT http://{{url}}:9200/{{index}}/_settings

{
  "index": {
	"number_of_replicas": "0"
  }
}
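
If you'd rather not repeat that per index, the same settings update can be sent to all existing indices at once with the _all wildcard (assuming you really do want zero replicas everywhere, including the X-Pack system indices):

PUT http://{{url}}:9200/_all/_settings

{
  "index": {
    "number_of_replicas": "0"
  }
}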

To prevent this from happening on new indices as well, you'll want to set up index templates that set the replica count accordingly; docs here: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
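
As a rough sketch (the template name and index patterns below are just placeholders, adjust them to your own indices), a 6.x template that gives new indices zero replicas from the start would look like this:

PUT http://{{url}}:9200/_template/zero_replicas

{
  "index_patterns": ["logstash-*", "sql_*"],
  "settings": {
    "number_of_replicas": 0
  }
}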

Hope that helps

Cheers,
Mike


Thanks for the reply, my issue is resolved. How about the column to the left of the number of replicas: is having more than one primary shard on a single-node Elasticsearch cluster a recommended setup?

Having more than one primary shard on a node should be fine; however, you won't see the main benefits of sharding (horizontally scaling your data volume and splitting operations across multiple shards, which usually live on multiple nodes).
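
If you do want single-shard indices going forward, note that number_of_shards can only be set when an index is created (or via a template), not changed on an existing index. Something like this, with the index name just as an example, would do it:

PUT http://{{url}}:9200/my_new_index

{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}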


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.