ES Logstash config problem

Hi,

where can I set the pool size, active threads, and queued tasks?

I get this error:

[2017-03-13T12:08:29,434][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of org.elasticsearch.transport.TransportService$7@5ed07841 on EsThreadPoolExecutor[bulk, queue capacity = 50, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60e1eefe[Running, pool size = 16, active threads = 16, queued tasks = 50, completed tasks = 2319302]]"})
[2017-03-13T12:08:29,434][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of org.elasticsearch.transport.TransportService$7@61426041 on EsThreadPoolExecutor[bulk, queue capacity = 50, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@60e1eefe[Running, pool size = 16, active threads = 16, queued tasks = 50, completed tasks = 2319302]]"})
[2017-03-13T12:08:29,434][ERROR][logstash.outputs.elasticsearch] Retrying individual actions
[2017-03-13T12:08:29,434][ERROR][logstash.outputs.elasticsearch] Action
[2017-03-13T12:08:29,434][ERROR][logstash.outputs.elasticsearch] Action

My config in elasticsearch.yml:

thread_pool.index.size: 15
thread_pool.index.queue_size: 1000
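Note that the 429 rejection above comes from the bulk thread pool (queue capacity = 50), while the settings shown target the index pool. A minimal sketch of the corresponding bulk-pool settings in elasticsearch.yml, assuming Elasticsearch 5.x setting names (the values are illustrative, not a recommendation):

```yaml
# elasticsearch.yml -- bulk thread pool (the pool reporting the rejections)
# Raising the queue only buffers more work; it does not increase throughput.
thread_pool.bulk.queue_size: 200
```

Bear in mind that sustained 429s usually indicate the cluster cannot keep up with the indexing load, so reducing the load (fewer Logstash workers, smaller batches, fewer shards) is often more effective than enlarging the queue.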

My config in logstash.yml (as reported at startup):
[2017-03-13T12:02:15,480][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>600, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>4800}
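For reference, the pipeline values in that startup line would correspond to settings like these in logstash.yml (a sketch based on the log above; pipeline.max_inflight is derived as workers × batch size, i.e. 8 × 600 = 4800, not set directly):

```yaml
# logstash.yml -- pipeline settings matching the startup log
pipeline.workers: 8
pipeline.batch.size: 600
pipeline.batch.delay: 5
```

Lowering pipeline.workers or pipeline.batch.size reduces the number of in-flight events and therefore the pressure on the Elasticsearch bulk queue.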

Before making any setting modifications: what does the stability of your Elasticsearch cluster look like?
How many nodes do you have, and how many indices and shards per node?

Hi Jymit,

thanks for your answer and for your help. The cluster is still running; I only see that many messages are not making it into the system. There are 3 virtual machines, each with 16 cores and 128 GB RAM, with 2.4 TB, 1 TB, and 1 TB of disk. They run 3 Elasticsearch nodes, 5 Logstash instances, and 1 Kibana. Each Elasticsearch node uses 30 GB of RAM, and each Logstash instance 12 GB. 3 Logstash instances start with -w 8 and the 2 on node2 start with -w 7.
The ES cluster creates 1 index daily with ES_STACK_SIZE="1m" and ES_SHARDS_NUMBER="24", and after 10 days Curator starts a cleanup.

cheers
Krisz

Hi,

any ideas?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.