Error in Logstash logs: "retrying failed action with response code: 429"

Hi,
I keep getting the error below in my Logstash logs. I'm assuming the problem is with Elasticsearch rather than Logstash itself; my understanding is that Elasticsearch cannot cope with the amount of data being sent from Logstash.
Could someone assist me regarding this issue please?

[2019-09-11T00:03:09,966][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of processing of [1643093099][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[logstash-2019.09.11][2]] containing [26] requests, target allocation id: Hp2yfFaZQ2qEHULsOkHpig, primary term: 1 on EsThreadPoolExecutor[name = elastic2/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@411ee1d1[Running, pool size = 7, active threads = 7, queued tasks = 334, completed tasks = 907271405]]"})
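The 429 itself already tells you what is saturated: the `es_rejected_execution_exception` reason embeds the state of the `write` thread pool on node `elastic2` at the moment of rejection. All 7 threads were busy and 334 tasks were queued against a capacity of 200, so new bulk shard requests get rejected. A minimal sketch to pull those numbers out of a reason string like the one above (the regex patterns are assumptions based on this specific log format):

```python
import re

# Abridged copy of the rejection reason from the log line above.
reason = (
    "rejected execution of processing of [1643093099]"
    "[indices:data/write/bulk[s][p]]: ... on EsThreadPoolExecutor"
    "[name = elastic2/write, queue capacity = 200, "
    "org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@411ee1d1"
    "[Running, pool size = 7, active threads = 7, queued tasks = 334, "
    "completed tasks = 907271405]]"
)

# Extract the thread pool state embedded in the reason string.
capacity = int(re.search(r"queue capacity = (\d+)", reason).group(1))
queued = int(re.search(r"queued tasks = (\d+)", reason).group(1))
active = int(re.search(r"active threads = (\d+)", reason).group(1))

print(f"active={active} queued={queued}/{capacity} overflow={queued - capacity}")
```

Whenever `queued` is at or above `capacity`, Elasticsearch answers new bulk sub-requests with 429 and Logstash retries them, which is exactly the INFO message you are seeing.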

Your understanding is correct: your Elasticsearch cluster is unable to cope with the current load. You need to investigate why the cluster is overloaded: not enough capacity, too much indexing, or uneven load?


How many indices and shards are you actively indexing into?


Currently I have 3 nodes, 330 indices, and 1,640 shards, to be specific.
As a solution to this problem we are considering adding an extra Elasticsearch node, since the load on the nodes always seems quite high.
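For scale: 1,640 shards across 3 nodes is roughly 547 shards per node. A common rule of thumb from Elastic (a guideline, not a hard limit) is to stay below about 20 shards per GB of JVM heap. A quick back-of-the-envelope check — the 8 GB heap figure here is purely an assumption, substitute your actual heap size:

```python
nodes = 3
shards = 1640
heap_gb = 8               # assumption: adjust to your real JVM heap per node
shards_per_gb_limit = 20  # common rule-of-thumb guideline

per_node = shards / nodes
suggested_max = heap_gb * shards_per_gb_limit

print(f"shards per node: {per_node:.0f} (rule-of-thumb max: {suggested_max})")
```

If the actual figure is several times the rule-of-thumb maximum, reducing shard count (fewer primaries per index, ILM/rollover, shrinking or deleting old indices) may help as much as adding a node.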

Is there any problem with having 4 Elasticsearch nodes with only 1 master node?

@Ilayda_Akinalan,

Yes, it's not good practice to have only a single master-eligible node in a cluster. What will you do if your master node goes down? Your cluster will be down. So you should have at least 3 master-eligible nodes to keep the cluster up and running and to avoid the split-brain problem.
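As a sketch of what that looks like in `elasticsearch.yml` — assuming Elasticsearch 7.x, where node names `es1`/`es2`/`es3` are placeholders for your own nodes:

```yaml
# On each of the three master-eligible nodes (7.x; names are placeholders)
node.master: true
node.data: true
cluster.initial_master_nodes: ["es1", "es2", "es3"]
```

On 6.x and earlier you would instead set `discovery.zen.minimum_master_nodes: 2` for three master-eligible nodes, so a majority is required to elect a master and split brain is avoided.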

Thanks.

Are you actively indexing into all of these shards? I would recommend reading this blog post.