Response code 429

I have Logstash 2.2.2 installed on my server.
I am getting the error "Retrying failed action with response code: 429" on the Logstash node. This means the ES cluster is not able to keep up with the incoming requests, is that correct?

What configuration changes are needed on the ES nodes to remediate this error?

Yes.

Well, your cluster is overloaded, so more nodes would help :slight_smile:

So here is the complete picture: we have a 5-node ES cluster with 1 dedicated master node and one client node (on which Kibana is installed). We have 3 Logstash nodes pointing at the cluster.
Each of the nodes has 398 GB of RAM and 40 cores; for testing purposes we are doing a 1:1 mapping between Logstash and ES nodes. I see the 429 error once we cross 8K EPS.

Things we changed from the defaults (see the sketch below for where these live):
ES version: 2.2
ES_HEAP_SIZE: 31G
MAX_OPEN_FILES: 65535
index.refresh_interval: 30s
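
For reference, a minimal sketch of where those settings usually live on an ES 2.x package install (paths are the typical ones; adjust for your distro):

```
# /etc/sysconfig/elasticsearch (or /etc/default/elasticsearch on Debian/Ubuntu):
ES_HEAP_SIZE=31g
MAX_OPEN_FILES=65535

# elasticsearch.yml (index-level settings in the yml still work on 2.x),
# or set it per index through the index settings API:
index.refresh_interval: 30s
```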

Shouldn't these systems be able to handle 15-20K EPS without breaking a sweat?
I'm sure I'm missing something basic here :slight_smile:

I would suggest reading this page on configuration settings that will help optimize performance in your cluster:
https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html

Specifically the notes about disabling swap and setting vm.max_map_count; these make an ENORMOUS difference. It might also be pertinent to increase your number of file descriptors at some point; my system has crashed after hitting that limit before.
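
A quick sketch of those OS-level changes (the values are the ones the docs recommend; verify them for your version and distro):

```
# Disable swap entirely (or set bootstrap.mlockall: true in elasticsearch.yml on 2.x):
sudo swapoff -a

# Raise the mmap count used by the store:
sudo sysctl -w vm.max_map_count=262144

# Raise the open-file limit for the elasticsearch user, e.g. in /etc/security/limits.conf:
elasticsearch  -  nofile  65536
```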

Some other good ideas to improve performance are outlined in this article:

Hope that helps!

Oh, I just saw your configs; those look correct, so I hope the second article helps!

I'm using Logstash's elasticsearch output plugin to write logs to Elasticsearch.
Does it use the bulk API for this?

Yes it does.
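
For reference, a minimal sketch of that output block (the host is a placeholder; `flush_size` and `idle_flush_time` are the Logstash 2.x-era options that control how many events go into each bulk request):

```
output {
  elasticsearch {
    hosts => ["es-node-1:9200"]   # placeholder: your ES node(s)
    flush_size      => 500        # max events per _bulk request (2.x option)
    idle_flush_time => 1          # flush at least once a second even if not full
  }
}
```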

I would like to jump in and ask if there is any solution for this.

We notice this error in the new Logstash 5.x logs. It does not happen every day, but it does happen a lot. It seems like it is not caused by Elasticsearch overload: heap size and the other health metrics all look good on the Elasticsearch nodes at the time.

Any other reason this happens?
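
One thing worth checking: a 429 usually maps to an `es_rejected_execution_exception` when the bulk thread pool queue fills up, which can happen in short bursts even when overall heap and CPU look fine. A quick (5.x-era) way to see whether that is happening:

```
# Shows per-node bulk thread pool activity; a growing "rejected" count
# means ES is pushing back on bursts of bulk requests.
curl -s 'localhost:9200/_cat/thread_pool/bulk?v&h=node_name,active,queue,rejected'
```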


We are experiencing the same issue. Our environment does not seem to be overloaded, but this line keeps appearing in the Logstash logs. The only real problem I have noticed is the space occupied by those logs :slight_smile:
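
If the disk usage is the main annoyance, the log growth can be capped in Logstash 5.x via `config/log4j2.properties`; a hedged sketch using standard Log4j2 rolling settings (the size limit and archive count are illustrative, not the shipped defaults):

```
# Roll the plain log at 100 MB and keep at most 5 compressed archives.
appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-plain.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-plain-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 5
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
```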