Could anyone explain what this thread does? It seems that this thread alone is consuming most of the resources on my cluster, yet I can't find how to tune it.
What version are you on?
Hi mate,
5.5 Docker image - I saw somewhere there was a bug in a previous version related to this thread_pool, but I'm using the current stable.
Cheers
Can you show the output from _cat/thread_pool?
Will do mate @warkolm. It won't be useful right now, as I'm recovering the cluster.
In general all the values are at 0; the only one that increases is management, with a max value of 5. The search thread pool increases as well when searching something. Here's part of the config file:
thread_pool:
  search:
    size: 40
    queue_size: 10000
  bulk:
    size: 2
    queue_size: 300
  index:
    size: 2
    queue_size: 200
  warmer:
    core: 1
    max: 3
    keep_alive: 5m
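In case it's useful, the effective per-node thread pool settings can also be checked at runtime rather than from the config file (a sketch, assuming a 5.x cluster reachable on localhost:9200):

```shell
# Show the configured size/queue_size for every thread pool on each node
curl -sL -XGET 'localhost:9200/_nodes/thread_pool?pretty'

# Watch active/queue/rejected counts for the management pool only
curl -sL -XGET 'localhost:9200/_cat/thread_pool/management?v'
```

The second form avoids the grep-based filtering, since _cat/thread_pool accepts a pool name directly in 5.x.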
I'm disabling the X-Pack module, since the issue was introduced when upgrading from 2.4 to 5.5 (both on Docker, and the 5.5 Docker image includes X-Pack):
xpack:
  monitoring:
    enabled: false
    exporters:
      my_local:
        type: local
  security:
    enabled: false
  ml:
    enabled: false
  graph:
    enabled: false
  watcher:
    enabled: false
index:
  rest:
    direct_access: true
This is just a shot in the dark to see if it works. It takes a while to recover the cluster; I'll upload more info in a few hours.
Cheers
At this stage my cluster has routing allocation set to none:
"cluster.routing.allocation.enable": "none"
And I can see this output (the other values are all 0):
[root@prd-log-dnode-1-6 ~]$ curl -sL -XGET 'localhost:9200/_cat/thread_pool?v'|egrep management|sort
es-prd-log-knode-1-7 management 1 0 0
es-prd-log-knode-2-7 management 2 0 0
es-prd-log-knode-3-7 management 1 0 0
prd-log-dnode-1-6 management 1 0 0
prd-log-dnode-2-6 management 1 0 0
prd-log-dnode-3-6 management 1 0 0
prd-log-node-1-5 management 1 0 0
prd-log-node-2-5 management 1 0 0
prd-log-node-3-5 management 2 0 0
Cheers
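For anyone following along: once the nodes are back, the allocation setting above has to be reverted or shards will stay unassigned. A sketch of the revert, assuming the setting was applied transiently on a 5.x cluster listening on localhost:9200:

```shell
# Re-enable shard allocation after the cluster has recovered
curl -sL -XPUT 'localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' -d '
{
  "transient": { "cluster.routing.allocation.enable": "all" }
}'
```

Note that 5.x rejects requests without an explicit Content-Type header, so the `-H` flag is required.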
The [management] thread pool was consuming a high percentage of resources on the data nodes. The issue seems to have disappeared after migrating the data nodes' file systems from LVM-backed XFS to a plain file system (ext4).
Infrastructure: ES running on AWS.
Instance type: i3.2xlarge instances with NVME disks.
DB size: 2.4TB across 3 data nodes.
Docker version 17.05.0-ce, build 89658be
ES version: FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.1
KV Store: Consul v0.8.5
OS: CentOS Linux release 7.3.1611
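Since the fix here was moving from LVM-XFS to ext4, a quick way to confirm which filesystem backs each data path (a sketch; the data path is an assumption, the official 5.x Docker image keeps data in /usr/share/elasticsearch/data):

```shell
# Hypothetical data path; override DATA_DIR to match your deployment
DATA_DIR="${DATA_DIR:-/}"

# Print the filesystem type backing the data directory
# (the Type column distinguishes xfs from ext4)
df -T "$DATA_DIR"

# If available, show block devices with filesystems and any LVM layering
command -v lsblk >/dev/null && lsblk -f || true
```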
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.