Bulk API queue becomes full

Hi everyone,

We are facing some strange behavior with bulk indexing at the moment. We are trying to index 300 million documents (0.6 kB each) in bulks of 10,000 documents from 6 clients. This works fine at first. However, after some time the bulk queue is full and the bulk requests fail.

The setup:

  • cluster: 3 nodes (4 cores, 24 GB RAM, of which 12 GB is used for the heap)
  • the index has 12 shards; while indexing, index refresh is disabled and the number of replicas is set to 0
  • we use the Java API BulkRequestBuilder and re-initialize it after every get()
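For reference, the two index settings from the list above can be applied like this (the index name my_index and the host are placeholders for this sketch):

```shell
# Placeholder index name and host; applies the two settings from the list above.
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '
{
  "index": {
    "refresh_interval": "-1",
    "number_of_replicas": 0
  }
}'
```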

We don't understand why the queue becomes full. Does anyone have a suggestion? Is the cluster too small? Are there any important settings we missed?

We also evaluated the cluster using Rally. Indexing 10 million documents with 6 clients works without any problems and results in a throughput of 30,000 documents per second, while with our own application we only reach 10,000 documents per second. Any suggestions as to why we see such a huge difference in throughput? Additionally, we only see problems once we increase the number of clients to 40!

Hi @lars.steinbrueck,

So you've already set index.refresh_interval to -1. That's definitely a good idea.

One further reason for the slowdown could be that your cluster starts to throttle indexing. You might want to look at the indexing performance tips in the Definitive Guide, especially the ones about segment merging.
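As a sketch of what such tuning can look like: on Elasticsearch 1.x/2.x, store-level merge throttling can be disabled for the duration of a bulk load (this setting was removed in later versions, where merge I/O is regulated automatically):

```yaml
# elasticsearch.yml (Elasticsearch 1.x/2.x only) -- disable store-level
# merge throttling during a bulk load; re-enable it afterwards.
indices.store.throttle.type: none
```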

As the main author of Rally, I am glad to hear that. :slight_smile:

It's a bit hard to tell remotely. First of all, which version of Rally did you use? Which command line flags did you specify? (Specifically, did you run Rally on a dedicated host and use --pipeline=benchmark-only?)

My best bet is that if you run the benchmark on the full corpus of 300 million documents, you will see a similar pattern.

I don't get that part. Where do you increase the number of clients to 40 (in the benchmark)? Which problems did you see then? If you mean that you ran the benchmark with 40 clients, you need to check whether you created a bottleneck on the load-driver machine (if you have very beefy hardware this might still be OK, though).


Hi Daniel,

Thanks for the fast reply.

We are using Rally version 0.4.3. The command we use is esrally --target-hosts=LIST_OF_HOSTS --pipeline=benchmark-only --track=OUR_OWN_TRAK --challenge=append-no-conflicts-index-only. We used our own data and adapted the geopoint track to our needs. (As a side note, you should state in the Rally README that git >= 1.9 is required.)

We ran into the full-queue problem when we increased the number of clients to 40 in Rally. We just wanted to see how we would have to alter the configuration to run into the same problems as with our application.

Is our setup reasonable for handling bulk requests from 6 clients? Or should we increase the cluster size (more is always better, but we have limited resources)?

Is it true that if a bulk request fails, some of the documents are indexed and some are not (namely those items that failed)?


Hi Lars,

That's great! If you have some feedback on where you struggled, or any oddities or missing documentation, I am always eager to hear it so I can improve the user experience in this area.

It's fixed now. Thanks for the hint.

Ok, understood. 40 is a really high number of clients for one machine, and I'd assume you are already experiencing all sorts of contention. With some patience you can easily check that: start with, e.g., 1 client and measure the achieved throughput, then keep doubling the number of clients and check whether throughput also doubles. At some point Elasticsearch might become your bottleneck, so ideally you replace it with a mock (e.g. an nginx instance that just accepts the data and returns a static response).
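Such a mock can be as simple as an nginx server block that answers every request with a static response (the response body shown here is a hypothetical minimal stand-in, not a real bulk response for your items):

```nginx
# Mock "Elasticsearch" for driver load-testing: accepts anything on port 9200
# and returns a static JSON body claiming success.
server {
    listen 9200;
    location / {
        default_type application/json;
        return 200 '{"took":1,"errors":false,"items":[]}';
    }
}
```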

It depends. If this is a one-time import, I wouldn't bother too much (a back-of-the-envelope calculation shows that it should be done in less than 3 hours, if I didn't make a mistake). If it's a regular batch job, it depends on your time budget and your estimate of the growth rate of your data.
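For transparency, here is that back-of-the-envelope calculation, assuming the 30,000 documents per second you measured with Rally:

```java
public class ImportTimeEstimate {
    public static void main(String[] args) {
        long docs = 300_000_000L;   // total corpus size
        long docsPerSec = 30_000L;  // throughput measured in the Rally run
        double hours = (double) docs / docsPerSec / 3600.0;
        System.out.printf("~%.1f hours%n", hours); // prints "~2.8 hours"
    }
}
```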

Sure, you can add more nodes, but you could also play with the configuration, e.g. increase the size of the bulk queue (although this will only defer the problem, so I'd think hard about that).
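For completeness, the relevant setting (shown with the Elasticsearch 2.x name; newer versions call it thread_pool.bulk.queue_size, and the default queue size is 50):

```yaml
# elasticsearch.yml -- increases the bulk queue from its default of 50.
# Again: this only defers the problem, it does not fix a capacity issue.
threadpool.bulk.queue_size: 100
```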

Nevertheless, your application should be prepared for bulk rejections and should simply retry the items that got rejected. Rejections are basically Elasticsearch's way of applying back pressure, telling you that the system is operating at capacity and that you (as a client) should slow down. If you are on a recent version of Elasticsearch, you should evaluate whether you want to use the BackOffPolicy feature of BulkProcessor, which automatically retries all items that failed due to an EsRejectedExecutionException after a configurable waiting period.
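To illustrate the underlying pattern (independently of the BulkProcessor API), here is a self-contained sketch of retrying only the rejected items with exponential backoff; all class and method names in it are made up for this example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class BulkRetrySketch {

    // sendBulk stands in for the actual bulk call; it returns the sublist of
    // items that were rejected (an empty list means everything was indexed).
    static List<String> indexWithRetry(List<String> items,
                                       Function<List<String>, List<String>> sendBulk,
                                       int maxRetries, long initialDelayMillis)
            throws InterruptedException {
        List<String> pending = new ArrayList<>(items);
        long delay = initialDelayMillis;
        for (int attempt = 0; attempt <= maxRetries && !pending.isEmpty(); attempt++) {
            if (attempt > 0) {
                Thread.sleep(delay); // back off before retrying rejected items
                delay *= 2;          // exponential growth between attempts
            }
            pending = sendBulk.apply(pending);
        }
        return pending; // items still rejected after exhausting all retries
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated server that rejects everything once, then accepts everything.
        int[] calls = {0};
        Function<List<String>, List<String>> flakyBulk = batch ->
                calls[0]++ == 0 ? new ArrayList<String>(batch) : new ArrayList<String>();

        List<String> failed = indexWithRetry(List.of("doc1", "doc2"), flakyBulk, 3, 10);
        System.out.println("still rejected: " + failed.size()); // prints "still rejected: 0"
    }
}
```

BulkProcessor with an exponential BackOffPolicy implements essentially this loop for you, resubmitting only the rejected items of each bulk.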

Correct. Each item has an associated status that tells you that (see BulkItemResponse).