Memory distribution on different nodes

Hi All,

I am new to the ELK cluster model and am facing a few issues with it, so I need your help. Currently Elasticsearch is running on 6 different nodes, but memory is mostly being used on only 2 of them. Can someone suggest how to distribute it equally?

Can you provide the output of _cat/nodes?v and _cat/indices?v as formatted text, please?

Hi @warkolm ,

Please find below the output for both commands:


health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .security-7 PDrjVqc1QwmqqUJXk6VDww 1 1 42 0 180.3kb 90.1kb
green open .reporting-2020.05.31 qFbU9qhjTqS7cQIWPv4RFQ 1 1 3 0 628.5kb 314.2kb
green open .apm-custom-link _ByqTv6BQYOt2dNKkt5dnw 1 1 0 0 416b 208b
green open .reporting-2020.06.14 fIjpd4rlS0urUMPi3kT7UQ 1 1 1 0 174kb 103.7kb
green open datapower_visualization nwJK2o8IQRyoP-IJ24kSwg 1 1 2 0 49.8kb 24.9kb
green open .kibana_task_manager_1 dksbiM6-RvWSODaMq89VLw 1 1 5 1 83.4kb 30.6kb
green open .apm-agent-configuration iwXeMFcLSG-SsZcQzxiloA 1 1 0 0 416b 208b
green open .reporting-2020.06.07 IcXIe4DjT_aUnvnlb8sd8w 1 1 2 0 669.7kb 334.8kb
green open dox_visualization bL3qSXnoR-Cx8HqvUjO9gQ 1 1 422300806 0 553.5gb 280.5gb
green open .async-search yV2EemjQSEq5XhKO1GIUsA 1 1 56 3 43.9mb 21.9mb
green open .kibana_1 aQlc3MAeSbmjd5XrJNeAog 1 1 2623 17 1.2mb 616.9kb


ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
IP5 51 19 0 0.04 0.04 0.05 dilrt - datanode2
IP4 31 14 0 0.00 0.01 0.05 dilmrt - datanode1
IP1 20 16 2 0.28 0.28 0.30 lr - eai_coordinatingnode
IP3 64 15 0 0.02 0.03 0.05 dilmrt * elasticsearch_master
IP2 58 74 6 0.31 0.57 0.65 dilmrt - masternode1
IP6 47 94 3 0.23 0.34 0.32 dilrt - datanode3

Please don't post pictures of text. They are difficult to read, impossible to search or replicate (if it's code), and some people may not even be able to see them :slight_smile:

You have one index with a single primary shard that contains almost all data in the cluster. As only one replica is configured there is only a total of 2 shards which are located on the nodes more heavily loaded. If you want better distribution you need to increase the number of primary shards, e.g. through the split index api, or create additional copies of the shard by increasing the number of replicas.
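For example, assuming the large index is dox_visualization (as the _cat/indices output above suggests), a Dev Tools request like the following would add a second replica, giving the cluster more shard copies to spread across nodes. Note that extra replicas also increase total disk usage:

```
PUT /dox_visualization/_settings
{
  "index.number_of_replicas": 2
}
```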

As @Christian_Dahlqvist said, you only have one big index. You can check where the shards are with
_cat/shards . It would be no surprise to me if those shards are only on those 2 nodes.
Here you can find the Split Index API. The new replicas should be distributed across the cluster.
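To confirm where the big index's shards actually live, a filtered Dev Tools request such as this (index name assumed from the output above) shows each shard and the node hosting it:

```
GET _cat/shards/dox_visualization?v&h=index,shard,prirep,state,node
```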

Thanks @defalt and @Christian_Dahlqvist for your suggestions :slight_smile:

@warkolm I will take care of this in the future; I just pasted the picture for better visibility.

A few questions: can I make runtime changes to the shards using Dev Tools, and will increasing the number of shards increase query time?

Thanks in advance.

What do you mean by

If you want to split the index, you have to set it to read-only first, so you can't index new data while the split is running.
Yes, having more shards adds some search overhead, since a query has to be fanned out to every shard and the results merged. But it shouldn't make a significant difference.
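As a sketch of the split workflow (index and target names assumed for illustration): first block writes on the source index, then split it into a target index whose primary shard count is a multiple of the source's (a 1-shard index can be split to any count). Remember to remove the write block, or switch your clients over to the new index, afterwards:

```
PUT /dox_visualization/_settings
{
  "index.blocks.write": true
}

POST /dox_visualization/_split/dox_visualization_split
{
  "settings": {
    "index.number_of_shards": 3
  }
}
```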

Thanks @defalt

Yes, I was asking about the same thing, so there will be downtime for this activity.

Also, since we are using a coordinating node, it can take care of fanning out the query and merging results, I believe.

Thanks a lot for your help.

Awesome :+1:. Please select one of the many answers as a solution so that others can find the answer more easily.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.