Unexpected error while indexing monitoring document, lots of queued tasks

Hello,

My cluster seems to have been having issues for the last two days. It appears that no new documents have been indexed since Oct 18th 21:30, even though I have multiple pipelines up and running in Logstash. I have 202 indices and 404 shards in total, and the cluster status is green. I'm trying to find a way to fix it, but I haven't found one so far.
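For reference, this is roughly how I checked that nothing new is arriving (just a sketch, assuming curl access to one of the nodes on the default port 9200; index_test is one of the indices Logstash writes to):

curl -s 'http://localhost:9200/_cat/count/index_test?v'
curl -s 'http://localhost:9200/_cat/indices/index_test?v&h=index,docs.count,store.size'

Running the same commands a few minutes apart shows the doc count staying flat.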
Here's one of the errors that pops up in Elasticsearch:

[2019-10-21T09:28:49,841][WARN ][o.e.x.m.e.l.LocalExporter] [elastic3] unexpected error while indexing monitoring document
org.elasticsearch.xpack.monitoring.exporter.ExportException: RemoteTransportException[[elastic1][elastic1:9300][indices:data/write/bulk[s]]]; nested: RemoteTransportException[[elastic1][elastic1:9300][indices:data/write/bulk[s][p]]]; nested: EsRejectedExecutionException[rejected execution of processing of [117106864][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[.monitoring-kibana-7-2019.10.21][0]] containing [index {[.monitoring-kibana-7-2019.10.21][_doc][tuA37W0BOl_5I5On7uIK], source[n/a, actual length: [2.2kb], max length: 2kb]}], target allocation id: -dv2nr_0RVCgSlswPIspaA, primary term: 1 on EsThreadPoolExecutor[name = elastic1/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@22bc5015[Running, pool size = 24, active threads = 24, queued tasks = 41600, completed tasks = 9053723]]];
    at org.elasticsearch.xpack.monitoring.exporter.local.LocalBulk.lambda$throwExportException$2(LocalBulk.java:128) ~[x-pack-monitoring-7.0.0.jar:7.0.0]
    .......
Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution of processing of [117106864][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[.monitoring-kibana-7-2019.10.21][0]] containing [index {[.monitoring-kibana-7-2019.10.21][_doc][tuA37W0BOl_5I5On7uIK], source[n/a, actual length: [2.2kb], max length: 2kb]}], target allocation id: -dv2nr_0RVCgSlswPIspaA, primary term: 1 on EsThreadPoolExecutor[name = elastic1/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@22bc5015[Running, pool size = 24, active threads = 24, queued tasks = 41600, completed tasks = 9053723]]
    ....

Full error text pasted here due to character limit: https://paste.ofcode.org/HGgnWzUWzfh2Kdg6MdJkYw
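The key part seems to be the write thread pool on elastic1: queue capacity = 200, yet queued tasks = 41600, so bulk requests are being rejected. A quick way to watch the write pool on every node is the _cat thread pool API (just a sketch, assuming curl access to any node on port 9200):

curl -s 'http://localhost:9200/_cat/thread_pool/write?v&h=node_name,name,active,queue,rejected,completed'

If "rejected" keeps climbing on elastic1 while the other nodes stay quiet, the load looks concentrated on that one node.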

One of the errors I got in Logstash:

Oct 21 09:17:34 logstash1 logstash[4254]: [2019-10-21T09:17:34,524][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [117067800][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[index_test][0]] containing [71] requests, target allocation id: YG2Kdnz3QsqEzXc-nZ3-vw, primary term: 5 on EsThreadPoolExecutor[name = elastic1/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@22bc5015[Running, pool size = 24, active threads = 24, queued tasks = 41552, completed tasks = 9053723]]"})
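The 429 means Elasticsearch is pushing back because the write queue on elastic1 is full, and Logstash keeps retrying the bulk request. To see what the 24 busy write threads on elastic1 are actually doing, the hot threads API can help (again only a sketch, using the node name from the log and the default port):

curl -s 'http://localhost:9200/_nodes/elastic1/hot_threads?threads=5'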

Hi,
I am in the same situation and with no solution so far :confused:
