Thread_pool [management]

(Néstor R Bolívar E ) #1

Could anyone explain what this thread pool does? It seems this thread pool alone is consuming most of the resources on my cluster, yet I can't find how to tune it.

(Mark Walkom) #2

What version are you on?

(Néstor R Bolívar E ) #3

Hi mate,

The 5.5 Docker image. I saw somewhere there was a bug in a previous version related to this thread_pool, but I'm using the current stable.


(Mark Walkom) #4

Can you show the output from _cat/thread_pool?

(Néstor R Bolívar E ) #6

Will do, mate @warkolm. It won't be useful right now as I'm recovering the cluster.

In general all the values are at 0; the only one that increases is management, with a max value of 5. The search thread pool increases as well when searching something. Part of the config file:

    size: 40
    queue_size: 10000
    size: 2
    queue_size: 300
    size: 2
    queue_size: 200
    core: 1
    max: 3
    keep_alive: 5m
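The keys naming each pool were lost in the fragment above; for reference, thread pool overrides in elasticsearch.yml are grouped per pool. A sketch in 5.x syntax follows, where the pool names are illustrative assumptions (the sizes are copied from the fragment, but which pool each belongs to is not shown in the original):

    # Illustrative thread_pool overrides (Elasticsearch 5.x syntax);
    # pool names are assumptions, values taken from the fragment above.
    thread_pool:
      bulk:                # fixed pool: size + queue_size
        size: 40
        queue_size: 10000
      search:
        size: 2
        queue_size: 300
      get:
        size: 2
        queue_size: 200
      management:          # scaling pool: core/max/keep_alive
        core: 1
        max: 3
        keep_alive: 5m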

I'm disabling the X-Pack module, as the issue was introduced when upgrading from 2.4 to 5.5 (both on Docker, and the 5.x Docker image includes X-Pack):

    enabled: false
        type: local
    enabled: false
    enabled: false
    enabled: false
    enabled: false
        direct_access: true
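The stripped keys above look like the per-feature X-Pack toggles. For reference, the standard flags in elasticsearch.yml for 5.x would typically be the following (a sketch of the documented settings, not necessarily the exact keys the poster used; the `type: local` and `direct_access: true` lines in the fragment belong to other, unrecoverable keys and are left out):

    # Standard X-Pack feature toggles for Elasticsearch 5.x
    xpack.security.enabled: false
    xpack.monitoring.enabled: false
    xpack.ml.enabled: false
    xpack.watcher.enabled: false
    xpack.graph.enabled: false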

This is only a shot in the dark to see if it works... It takes a while to recover the cluster; I'll upload more info in a few hours.

(Néstor R Bolívar E ) #7

At this stage my cluster has routing allocation set to none:

"cluster.routing.allocation.enable": "none"

And I can see this output (the other values are at 0):

[root@prd-log-dnode-1-6 ~]$ curl -sL  -XGET 'localhost:9200/_cat/thread_pool?v'|egrep management|sort
es-prd-log-knode-1-7 management               1     0        0
es-prd-log-knode-2-7 management               2     0        0
es-prd-log-knode-3-7 management               1     0        0
prd-log-dnode-1-6    management               1     0        0
prd-log-dnode-2-6    management               1     0        0
prd-log-dnode-3-6    management               1     0        0
prd-log-node-1-5     management               1     0        0
prd-log-node-2-5     management               1     0        0
prd-log-node-3-5     management               2     0        0
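Since only the active column matters here, a small variation of the pipeline above can sort saved _cat/thread_pool output by active threads. A sketch (the node lines below are sample data, not live output):

```shell
# Sort _cat/thread_pool lines by the 3rd column (active threads),
# highest first, to spot the busiest management pools.
printf '%s\n' \
  'prd-log-dnode-1-6 management 1 0 0' \
  'prd-log-node-3-5 management 2 0 0' \
  'es-prd-log-knode-1-7 management 1 0 0' |
  sort -k3,3nr
```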


(Néstor R Bolívar E ) #8

The [management] thread pool was consuming a high percentage of resources on the data nodes. The issue seems to disappear after migrating the data node file systems from LVM + XFS to a plain ext4 file system.

Infrastructure: ES running on AWS.
Instance type: i3.2xlarge instances with NVMe disks.
DB size: 2.4 TB across 3 data nodes.
Docker version: 17.05.0-ce, build 89658be
ES version: 5.5 (Docker image)
KV store: Consul v0.8.5
OS: CentOS Linux release 7.3.1611

(system) #9

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.