Cluster node doesn't seem to be doing as much as the others


(Arthur Francis) #1

I have set up a cluster of 3 ES nodes (5.1.1). All three nodes are listed in the elasticsearch output plugin in Logstash, and the cluster is healthy.
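For reference, the elasticsearch output section of the Logstash pipeline looks roughly like this (hostnames and port are hypothetical, taken from the node names below; the original config is not shown in the thread):

```
output {
  elasticsearch {
    # hypothetical hostnames matching the node names in this thread;
    # with multiple hosts the plugin distributes requests across them
    hosts => ["elk-poc-es:9200", "elk-poc-es1:9200", "elk-poc-es2:9200"]
  }
}
```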

Looking at the thread_pool stats, one of my nodes does not seem to be doing as much work as the others, and it is not the master.

Every now and again it processes a bulk request, but most of the time only one management thread is active.

ES (master/data)
ES1 (data)
ES2 (data)

node_name   name                active queue rejected
elk-poc-es2 bulk                     4    10       14
elk-poc-es2 fetch_shard_started      0     0        0
elk-poc-es2 fetch_shard_store        0     0        0
elk-poc-es2 flush                    0     0        0
elk-poc-es2 force_merge              0     0        0
elk-poc-es2 generic                  0     0        0
elk-poc-es2 get                      0     0        0
elk-poc-es2 index                    0     0        0
elk-poc-es2 listener                 0     0        0
elk-poc-es2 management               1     0        0
elk-poc-es2 refresh                  0     0        0
elk-poc-es2 search                   0     0        0
elk-poc-es2 snapshot                 0     0        0
elk-poc-es2 warmer                   0     0        0
elk-poc-es  bulk                     4    49        0
elk-poc-es  fetch_shard_started      0     0        0
elk-poc-es  fetch_shard_store        0     0        0
elk-poc-es  flush                    0     0        0
elk-poc-es  force_merge              0     0        0
elk-poc-es  generic                  0     0        0
elk-poc-es  get                      0     0        0
elk-poc-es  index                    0     0        0
elk-poc-es  listener                 0     0        0
elk-poc-es  management               1     0        0
elk-poc-es  refresh                  0     0        0
elk-poc-es  search                   0     0        0
elk-poc-es  snapshot                 0     0        0
elk-poc-es  warmer                   0     0        0
elk-poc-es1 bulk                     1     0        0
elk-poc-es1 fetch_shard_started      0     0        0
elk-poc-es1 fetch_shard_store        0     0        0
elk-poc-es1 flush                    0     0        0
elk-poc-es1 force_merge              0     0        0
elk-poc-es1 generic                  0     0        0
elk-poc-es1 get                      0     0        0
elk-poc-es1 index                    0     0        0
elk-poc-es1 listener                 0     0        0
elk-poc-es1 management               1     0        0
elk-poc-es1 refresh                  1     0        0
elk-poc-es1 search                   0     0        0
elk-poc-es1 snapshot                 0     0        0
elk-poc-es1 warmer                   0     0        0
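The table above is `_cat/thread_pool` output. A minimal sketch (the parser and sample rows below are my own, with the sample abridged from the table above) for scanning such output for pools where work is queuing or being rejected:

```python
# Hypothetical helper: scan `GET _cat/thread_pool?h=node_name,name,active,queue,rejected`
# output for pools with queued or rejected tasks. Sample rows abridged from the thread.

SAMPLE = """\
elk-poc-es2 bulk 4 10 14
elk-poc-es  bulk 4 49 0
elk-poc-es1 bulk 1 0 0
elk-poc-es1 management 1 0 0
"""

def busy_pools(cat_output):
    """Return (node, pool, active, queue, rejected) rows where work is backing up."""
    rows = []
    for line in cat_output.strip().splitlines():
        node, pool, active, queue, rejected = line.split()
        if int(queue) > 0 or int(rejected) > 0:
            rows.append((node, pool, int(active), int(queue), int(rejected)))
    return rows

# Bulk rejections (like the 14 on elk-poc-es2 above) are one sign of an
# indexing bottleneck on that node.
print(busy_pools(SAMPLE))
```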

It also seems to be the most under-utilised:

ip       heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.0.0.1           58          98  46    3.58    3.48     3.19 mdi       -      elk-poc-es2
10.0.0.2           17          98  74    5.19    4.68     4.75 mdi       *      elk-poc-es
10.0.0.3           28          98  16    0.77    0.87     0.82 mdi       -      elk-poc-es1

(Mark Walkom) #2

Is it causing issues?


(Christian Dahlqvist) #3

Are the shards being indexed into distributed evenly across the nodes?


(Arthur Francis) #4

@warkolm It is not causing issues, but I was investigating a bottleneck and this looked very strange.

@Christian_Dahlqvist The shards are distributed pretty evenly across the nodes:

shards disk.indices disk.used disk.avail disk.total disk.percent host           ip             node
    51        7.2gb    12.1gb       27gb     39.2gb           30 10.195.3.69    10.195.3.69    elk-poc-es2
    50        6.8gb    11.2gb       28gb     39.2gb           28 10.195.134.187 10.195.134.187 elk-poc-es
    51        8.8gb    13.4gb     25.7gb     39.2gb           34 10.83.31.213   10.83.31.213   elk-poc-es1
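The shard counts come from `_cat/allocation`. A quick sanity check (my own sketch, not from the thread) that allocation is balanced within a small tolerance:

```python
# Hypothetical check: given per-node shard counts from `GET _cat/allocation`,
# consider allocation balanced if the spread is within `tolerance` shards.

def is_balanced(shard_counts, tolerance=1):
    return max(shard_counts) - min(shard_counts) <= tolerance

print(is_balanced([51, 50, 51]))  # counts from the table above -> True
```

Note that even with balanced shard counts, per-node bulk activity can differ if the indices actively being written to have their shards concentrated on some nodes, which is what Christian's question is getting at.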

(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.