Question about shard allocation balance between nodes

Hi,

I have a question about shard allocation across nodes in Elasticsearch. Maybe I missed a page of the documentation, but I couldn't find a solution to my issue.

I have an Elasticsearch cluster with n nodes, and I have split them into two groups.

Group one is tagged as node.type: type1 and group two as node.type: type2.

Then I have two index templates (composable, v2): one with index.routing.allocation.include.type: type1 and the other with index.routing.allocation.include.type: type2.
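To illustrate, one of these templates looks roughly like this (the template name and index pattern here are placeholders, not my real ones):

# template name and index pattern are placeholders
PUT _index_template/type1_template
{
  "index_patterns": ["type1-*"],
  "template": {
    "settings": {
      "index.routing.allocation.include.type": "type1"
    }
  }
}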

I also have monitoring with Metricbeat on the same cluster (I know the recommendation is to use a separate monitoring cluster, but we can't have another one). So my cluster has some additional system indices (Kibana, Metricbeat, etc.).

So let's suppose this example:

  • I have 8 nodes, 6 of them tagged as type1 and 2 tagged as type2.
  • I have 6 indices of type1, and all of them were allocated on the first 6 nodes. That works fine.
  • When I check the shards on every node in the Kibana monitoring page, I see that, for example, node 1 has 1 shard of every type1 index, while node 5 has all the shards of the internal indices (metricbeat, async-search, etc.).

So node 1 has much higher CPU and I/O usage, because it has more shards of type1 indices than node 5, which only has internal indices and hardly any usage.

My question is: is there any config to distribute the shards of my type1 indices more evenly?

In Kibana, I see that all nodes have the same number of shards, but some nodes have more shards of internal indices than others.

Thanks in advance,
Adrián.

What is the output from _cat/allocation?v?

shards disk.indices disk.used disk.avail disk.total disk.percent host
    11         26gb    39.1gb    255.6gb    294.7gb           13 node01.xxx
    11       44.5gb    58.1gb    236.6gb    294.7gb           19 node02.xxx
    11       39.9mb    13.1gb    281.6gb    294.7gb            4 node03.xxx
    11       28.5gb      42gb    252.7gb    294.7gb           14 node04.xxx
    10       14.3gb    27.4gb    267.3gb    294.7gb            9 node05.xxx
    11       34.8gb    48.1gb    246.6gb    294.7gb           16 node06.xxx

I filtered out the host names for privacy reasons.

In this case, node05 has all the shards of the indices .kibana, .slm-history, .tasks, .filebeat-xxx, .ilm-history and .metricbeat-xxx.
But node02, for example, has 3 shards of 3 different type1 indices.

So all nodes have 10-11 shards and Elasticsearch balances the number of shards correctly, but I would like the shards of my type1 indices to be balanced more evenly, and then let it allocate the rest of the shards of the other indices however it wants.

Until now, I have done this manually with the POST /_cluster/reroute endpoint.
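For example, something like this (the index, shard number and node names here are placeholders):

# index, shard number and node names are placeholders
POST /_cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "my-type1-index",
        "shard": 0,
        "from_node": "node02",
        "to_node": "node05"
      }
    }
  ]
}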

Thanks,
Adrián.

Hello,

@warkolm any ideas or suggestions?

Thanks,
Adrián.

Elasticsearch does indeed try to balance the total number of shards per node, regardless of what each index is used for.
Maybe you would like to have another node tagged type3, so you can allocate all the system indices on it only?
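Something along these lines, just as a sketch (this assumes the tag is declared as a custom node attribute, node.attr.type, in elasticsearch.yml):

# elasticsearch.yml on the dedicated node (assumes a custom node attribute)
node.attr.type: type3

The existing system indices could then be pinned to it with a dynamic setting, for example:

# the index pattern here is only an example
PUT /.monitoring-*/_settings
{
  "index.routing.allocation.include.type": "type3"
}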

Would that solution work for you?

If not, could you share the output of the following command?

GET /_cat/shards?v

If some outputs are too big, please share them on gist.github.com and link them here.

Hi @dadoonet
First, thanks for the fast reply.

I could try this solution.
Is there an easy way to configure all the default templates of the system indices with that setting? Or a default template that allocates all indices that way by default if it is not overridden by another template?

For example, for Metricbeat monitoring of the stack there are these legacy (v1) system templates:

  • .monitoring-alerts-7
  • .monitoring-beats
  • .monitoring-es
  • .monitoring-kibana
  • .monitoring-logstash

But Kibana has many internal indices too, about 40 at the moment (some of them empty).

Thanks,
Adrián.

Can you please list these?

If you want a more even distribution of shards for a specific set of indices, you can use the max shards per node index setting, but be careful not to limit it so much that shards cannot be reallocated on node failure.
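For example, something like this on one of the type1 indices (just a sketch; the index name and the value are placeholders and depend on your shard, replica and node counts):

# index name and value are placeholders
PUT /my-type1-index/_settings
{
  "index.routing.allocation.total_shards_per_node": 2
}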

Here are all the system indices listed in the cluster:
GET _cat/indices/.*?v&pretty&s=index

health status index                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .apm-agent-configuration           X5iuV1jrRS27NjV9l5lGvw   1   1          0            0       416b           208b
green  open   .apm-custom-link                   6Bl9DIL2T1CxC3ydscFOmQ   1   1          0            0       416b           208b
green  open   .async-search                      xanl5iJARMKOyRQ9tj8vNQ   1   1          0            0      3.5kb          3.3kb
green  open   .kibana-event-log-7.9.1-000001     wRsYBK-XRzGcyA5vEJ6HPw   1   1          3            0     32.7kb         16.3kb
green  open   .kibana_7                          p2dil7NxQem7BGHPmmIEuw   1   1       4428         1817     24.4mb         12.2mb
green  open   .kibana_task_manager_1             25bIKrF8QD6f-oj3mTVUEw   1   1          5            1     40.7kb         20.3kb
green  open   .kibana_task_manager_2             g0jmxU8QT1OOnwoJjcvOxA   1   1          6        28964     21.4mb         10.3mb
green  open   .monitoring-beats-7-2020.10.02     qOlSwJTFTEanEA7eXE0StQ   1   1     190308            0      185mb         92.9mb
green  open   .monitoring-beats-7-2020.10.03     INEth9SUQwOAHRGh79GmUg   1   1     191520            0    170.9mb         85.4mb
green  open   .monitoring-beats-7-2020.10.04     eZUrCX2HQm6SsfzQgHSRog   1   1     191520            0    168.6mb         83.3mb
green  open   .monitoring-beats-7-2020.10.05     VzTW402gR-C4rpey7xcn6g   1   1     191520            0      172mb         86.3mb
green  open   .monitoring-beats-7-2020.10.06     mkbGkvXRQKWSksOGtjGKcA   1   1     191511            0    176.7mb         88.3mb
green  open   .monitoring-beats-7-2020.10.07     XTnliBKXRUmL6DPCIVytBQ   1   1     191520            0    175.3mb         87.5mb
green  open   .monitoring-beats-7-2020.10.08     SQ7-eKN4RxCPp0O29NKg3Q   1   1      84739            0     79.4mb         39.4mb
green  open   .monitoring-es-7-mb-2020.10.02     4SgXRNGvQHCPGxUl7NurGA   1   1     758169            0      1.1gb        595.5mb
green  open   .monitoring-es-7-mb-2020.10.03     j05LMbDkQRmB5QjmitL6KQ   1   1     738133            0      1.1gb        584.8mb
green  open   .monitoring-es-7-mb-2020.10.04     Sf59rC6uS4iklHK1-4lz1Q   1   1     738010            0      1.1gb        576.4mb
green  open   .monitoring-es-7-mb-2020.10.05     yDCtIzUIQjS1LP96JNwwuw   1   1     740222            0      1.1gb          592mb
green  open   .monitoring-es-7-mb-2020.10.06     P29cUhqNR9ykruSoH3oh2A   1   1     738240            0      1.1gb        582.3mb
green  open   .monitoring-es-7-mb-2020.10.07     2Y3kLItgQNqyk-2eOUhrFw   1   1     737912            0      1.1gb          585mb
green  open   .monitoring-es-7-mb-2020.10.08     mJx9vH7aRt-7KIe4s8MYaw   1   1     266508            0    441.8mb        221.5mb
green  open   .monitoring-kibana-7-mb-2020.10.02 3-3pb8f5QoGQqL7TxOdXzQ   1   1      17051            0      7.3mb          3.6mb
green  open   .monitoring-kibana-7-mb-2020.10.03 heQKDKZ9T9i28s6LlFLdug   1   1      17280            0      6.6mb          3.2mb
green  open   .monitoring-kibana-7-mb-2020.10.04 TbO0WKJ1S8yOyrNUOeT3SA   1   1      17280            0      6.9mb          3.5mb
green  open   .monitoring-kibana-7-mb-2020.10.05 dNrA0PDJR5WcwA9YzpxbRw   1   1      17280            0      7.1mb          3.5mb
green  open   .monitoring-kibana-7-mb-2020.10.06 UodFfvjKQAqDV7jEtWYMAA   1   1      17279            0      7.1mb          3.5mb
green  open   .monitoring-kibana-7-mb-2020.10.07 kt0Gkw76RC6NqNx29QekZw   1   1      17280            0      7.1mb          3.5mb
green  open   .monitoring-kibana-7-mb-2020.10.08 oU0XGz2IRKSIIEqXcI1APw   1   1       6214            0      2.7mb          1.3mb
green  open   .reporting-2020.01.12              JTbOY1VNQd6pNEsP2GxLfg   1   1          7            1     66.2mb         33.1mb
green  open   .slm-history-2-000002              twq7aBjpSOWM8nllTomJdw   1   1         59            0     74.9kb         37.4kb
green  open   .slm-history-2-000003              HGc-WHgiSliy-iq4ugdlaA   1   1         67            0     58.3kb         29.2kb
green  open   .slm-history-2-000004              dkUXCv5ORdq3IegPhR8KVg   1   1         73            0       61kb         30.5kb
green  open   .slm-history-2-000005              1-_20TiNQ6qAmQzZbL8HVw   1   1         52            0       54kb           27kb
green  open   .tasks                             qMu7RhvJSmG9b24z9qnP4g   1   1         10            0    122.2kb         64.2kb

I think that except for the .monitoring indices, all the other indices are internal.
I don't know why there are APM indices, for example; I don't have APM in this cluster.
I already manually deleted some old slm-history-1 and slm-history-2-000001 indices and other empty indices.

Thanks,
Adrián.

That looks OK.

Yes, the question is about custom allocation of all the internal indices. I think that is the correct way to approach this.

Thanks anyway for the information :grinning:,
Adrián.

I believe you can create an index template which applies to indices named .*.
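For example, a legacy (v1) template along these lines should merge with the existing system templates rather than replacing them (just a sketch; the template name is a placeholder and type3 is the hypothetical attribute value from the earlier suggestion):

# template name and attribute value are examples
PUT _template/system-indices-allocation
{
  "index_patterns": [".*"],
  "order": 0,
  "settings": {
    "index.routing.allocation.include.type": "type3"
  }
}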

I worked around the issue with your idea. I'm not sure whether it could be useful to have, as an Elasticsearch feature, priority-based allocation that places the shards of certain indices first and then the rest of the indices. Anyway, thanks so much!
Anyways, thanks so much!

Adrian.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.