I am trying to benchmark my standalone ES (5.1.1) node. I am pushing 5000 docs from each of 3 servers (rsyslog to Kafka), with 3 Logstash instances reading from Kafka into ES. But I am getting this:
retrying failed action with response code: 429
I am monitoring ES with Kibana: CPU sits at 50 to 60%, heap is fine, everything looks normal. So why am I getting this error?
My changes from the default ES config:
Switched off the swap (sudo swapoff -a)
refresh_interval: 30s
replicas: 0
indices.memory.index_buffer_size: 30%
index.store.type: mmapfs
bootstrap.memory_lock: true
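For reference, in 5.x the node-level settings above go in elasticsearch.yml, while the index-level ones (refresh interval, replicas) are applied per index via the settings API. A sketch of how they split, assuming defaults on localhost and a hypothetical index name:

```shell
# Node-level settings (elasticsearch.yml), require a restart:
#   bootstrap.memory_lock: true
#   indices.memory.index_buffer_size: 30%

# Index-level settings are set per index at runtime
# ("benchmark-index" is a placeholder -- use your own index)
curl -XPUT 'localhost:9200/benchmark-index/_settings' -d '{
  "index": {
    "refresh_interval": "30s",
    "number_of_replicas": 0
  }
}'
```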
It is hard to say without looking at it, but there are usual bits of advice:
HTTP 429 is the signal that you are pushing harder than some node can handle. If you back off and retry, everything should be fine. If you want to know why the cluster can't handle that load, then keep going:
Check the logs for messages about throttling.
Check the I/O statistics.
Check for uneven load. With the default number of shards (5), a three-node cluster is going to end up uneven.
Make sure you are importing with _bulk (Logstash does this for you, so this should be a no-op).
Make sure your mapping makes sense and doesn't contain anything really expensive.
Make sure the node stats report the right number of CPUs.
Have a look at the hot_threads API and make educated guesses about what it is doing.
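Most of these checks are one-liners against the REST API or the shell. A sketch, assuming a cluster on localhost:9200 and a default-ish log path (adjust both to your install):

```shell
# Throttling messages in the ES log (log path is an assumption)
grep -i throttl /var/log/elasticsearch/elasticsearch.log

# Disk I/O per device at 5-second intervals -- look for saturated disks
iostat -x 5

# Shard distribution -- uneven doc counts or store sizes point at hot nodes
curl -s 'localhost:9200/_cat/shards?v'

# Node info: check os.available_processors matches the real CPU count
curl -s 'localhost:9200/_nodes/os?pretty'

# What the busy threads are actually doing right now
curl -s 'localhost:9200/_nodes/hot_threads'
```

If hot_threads shows most time in bulk/indexing threads and iostat shows the disks pegged, the 429s are simply the node shedding load it cannot persist fast enough.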