ES 6.x - Timeouts on REST API operations

@Christian_Dahlqvist I've just read about the _nodes/hot_threads API and thought it could help me find out where the resources are being used and pinpoint the bottleneck. However, I have difficulty interpreting the results. Here is the summary of the threads on the main node (the call I used is sketched after the output):

74.5% (372.4ms out of 500ms) cpu usage by thread 'elasticsearch[es1][masterService#updateTask][T#1]'
21.7% (108.3ms out of 500ms) cpu usage by thread 'elasticsearch[es1][write][T#3]'
17.4% (87.1ms out of 500ms) cpu usage by thread 'elasticsearch[es1][write][T#4]'
15.5% (77.5ms out of 500ms) cpu usage by thread 'elasticsearch[es1][http_server_worker][T#6]'
15.5% (77.4ms out of 500ms) cpu usage by thread 'elasticsearch[es1][write][T#2]'
13.4% (67.2ms out of 500ms) cpu usage by thread 'elasticsearch[es1][write][T#5]'
10.0% (50ms out of 500ms) cpu usage by thread 'elasticsearch[es1][write][T#6]'
7.8% (38.8ms out of 500ms) cpu usage by thread 'elasticsearch[es1][write][T#1]'
5.7% (28.7ms out of 500ms) cpu usage by thread 'elasticsearch[es1][refresh][T#2]'
5.6% (28.2ms out of 500ms) cpu usage by thread 'elasticsearch[es1][write][T#8]'
4.8% (23.8ms out of 500ms) cpu usage by thread 'elasticsearch[es1][write][T#7]'
4.4% (21.9ms out of 500ms) cpu usage by thread 'elasticsearch[es1][http_server_worker][T#10]'
1.1% (5.2ms out of 500ms) cpu usage by thread 'elasticsearch[es1][management][T#2]'
0.0% (117.5micros out of 500ms) cpu usage by thread 'ticker-schedule-trigger-engine'
0.0% (0s out of 500ms) cpu usage by thread 'ml-cpp-log-tail-thread'
0.0% (0s out of 500ms) cpu usage by thread 'elasticsearch[keepAlive/6.7.2]'
0.0% (0s out of 500ms) cpu usage by thread 'DestroyJavaVM'
0.0% (0s out of 500ms) cpu usage by thread 'process reaper'
0.0% (0s out of 500ms) cpu usage by thread 'Connection evictor'
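
For completeness, this is roughly the call I used to capture the dump above (I don't remember the exact parameters; the 500ms interval should be the default, and the thread count is just a guess at what I raised it to):

```
# Hot threads for all nodes, sampled over 500ms, showing more than the default 3 threads
curl -X GET "http://localhost:9200/_nodes/hot_threads?threads=25&interval=500ms"
```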

So there seem to be a lot of write threads. After I try to update a pipeline (which times out), a new thread named 'elasticsearch[es1][clusterApplierService#updateTask][T#1]' appears. After trying to add an ILM policy, the 'elasticsearch[es1][http_server_worker][T#6]' thread takes most of the time.
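
To give an idea of what times out, the pipeline update is essentially a plain PUT like the one below (the pipeline name and body are placeholders, and the explicit master_timeout is just something I have been experimenting with, not necessarily the right fix):

```
# Minimal pipeline update with an explicit master_timeout (name, body and value are placeholders)
curl -X PUT "http://localhost:9200/_ingest/pipeline/my-pipeline?master_timeout=30s" \
  -H 'Content-Type: application/json' -d'
{
  "description": "placeholder pipeline",
  "processors": [
    { "set": { "field": "ingested", "value": true } }
  ]
}'
```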

I've saved the full stack trace in case details for specific threads are needed.

Anyway, since I suspect the bulk and ingest operations are hogging the resources (without any concrete proof, relying only on my observation that data seems to be imported into Elasticsearch just fine), I plan to schedule a maintenance window and disable the data and ingest roles on the cluster for a limited time while I try a few operations which I hope will help (especially when it comes to the absent failure handling).
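
If I understand the docs correctly, the roles are static settings, so they have to be changed in elasticsearch.yml on each node and need a restart; something like this is what I have in mind (please correct me if there is a better way):

```
# elasticsearch.yml - temporarily run the node without the data and ingest roles
node.master: true
node.data: false
node.ingest: false
```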