Bulk indexing: not all documents inserted, but no errors

Using Elasticsearch 1.6.0

Hi, I'm running a bulk process with the following Java/Vert.x code. I tried bulk-indexing 192,000 documents but only 20,000 got indexed. This used to work fine; I have indexed over 1.3 billion documents with it.

I'm looking at the Bulk Thread Pool stats in Marvel:
Bulk Thread Pool Count: 30 per node
Bulk Thread Pool Reject: 0 per node
Bulk Thread Pool Ops/sec: 3 per node
Bulk Thread Pool Largest Count: 30 per node
Bulk Thread Pool Queue Size: 0 per node

None of the logs report any throttling.

I have 20TB of disk storage with 13TB used (including replicas), so I'm well below the disk watermark.
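For reference, a sketch of the disk-based allocation settings in elasticsearch.yml (values shown are what I believe the 1.x defaults to be, so treat the exact percentages as an assumption):

```yaml
# Disk-based shard allocation (ES 1.x); these are the assumed defaults.
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: "85%"   # stop allocating new shards to the node
cluster.routing.allocation.disk.watermark.high: "90%"  # start relocating shards off the node
```

At 13TB of 20TB (65%), neither watermark should be in play.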

What else can I check?

Index settings:

"index" : {
  "refresh_interval" : "30s",
  "translog" : {
    "flush_threshold_size" : "1000mb"
  },
  "number_of_shards" : "8",
  "creation_date" : "1435262728426",
  "analysis" : {
    "analyzer" : {
      "default" : {
        "type" : "custom",
        "tokenizer" : "keyword",
        "filter" : [ ... ]
      }
    }
  },
  "number_of_replicas" : "1",
  "version" : {
    "created" : "1060099"
  },
  "uuid" : "KtNOhb4qS6eFjM8BUxc9HA"
}

Java Bulk Code:

    JsonObject body = message.body();
    JsonArray documents = body.getArray("documents");
    BulkRequestBuilder bulkRequest = client.prepareBulk();
    final Context ctx = getVertx().currentContext();
    for (int i = 0; i < documents.size(); i++) {
        final JsonObject obj = documents.get(i);
        final JsonObject indexable = new JsonObject()
                .putString("action", "index")
                .putString("_index", obj.getString("index"))
                .putString("_type", obj.getString("type"))
                .putString("_id", obj.getString("id"))
                .putString("_route", obj.getString("routing"))
                .putObject("_source", obj);
        final String index = getRequiredIndex(indexable, message);
        if (index == null) {
            // getRequiredIndex has already reported the error
            return;
        }
        // type is optional
        String type = indexable.getString(CONST_TYPE);
        JsonObject source = indexable.getObject(CONST_SOURCE);
        if (source == null) {
            sendError(message, CONST_SOURCE + " is required");
            return;
        }
        // id is optional
        String id = indexable.getString(CONST_ID);
        String route = indexable.getString(CONST_ROUTE);
        IndexRequestBuilder builder = client.prepareIndex(index, type, id).setSource(source.encode());
        if (route != null) {
            builder.setRouting(route);
        }
        bulkRequest.add(builder);
    }
    bulkRequest.execute(new ActionListener<BulkResponse>() {
        public void onResponse(BulkResponse resp) {
            message.reply(new JsonObject().putString("status",
                    "Took: " + resp.getTookInMillis()
                    + ", Indexed: " + documents.size() + "/" + resp.getItems().length
                    + ", Failed: " + resp.hasFailures()));
        }
        public void onFailure(final Throwable t) {
            ctx.runOnContext(new Handler<Void>() {
                public void handle(Void event) {
                    sendError(message,
                            "Index error: " + t.getMessage(),
                            new RuntimeException(t));
                }
            });
        }
    });

Basically I collect a bunch of Vert.x JsonObjects into an array and then finally bulk them to Elasticsearch.

bulkRequest.execute() does not return any error, resp.hasFailures() is always false, and my documents.size() matches resp.getItems().length.

I see the index disk size growing and shrinking while bulking, as if there's merge activity, but the document count stays consistent. It indexes a few docs and then nothing.

Again, there's no error reported in the logs or on the client side. I also tried the same operation on an existing index that has over 200 million documents; only a couple of thousand got inserted...

Have I reached some kind of threshold where ES is no longer accepting documents?

Some logs as well.

I rebooted one node to mark a clean event. You will see there are no errors. I also added a screenshot of Marvel for the last hour, plus the output of my job. I have done more than a few thousand records with the code posted above, and hasFailures() returns false.

What do you have your threadpool.bulk_queue_size set at? If it's still at the default, you might want to increase it.


See above. All the stats are there, pulled off Marvel :slight_smile:
I even posted a screen shot in dropbox.

30 per node and no rejections, no threads queued.

Even Marvel Index Request Rate on the main page is reporting 3000 inserts per second...

Try increasing your threadpool.bulk_queue_size to 1000 (it defaults to 50).
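For anyone following along, a sketch of where that setting lives, assuming a 1.x elasticsearch.yml (changing it requires a node restart on 1.x):

```yaml
# elasticsearch.yml (ES 1.x): raise the bulk queue from the default of 50
threadpool:
    bulk:
        queue_size: 1000
```

Note that with zero rejections and an empty queue reported, a larger queue would not be expected to change anything here.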

No difference.

Anyway, nothing is being queued. At least Marvel is not reporting anything queued.

Would you be willing to get on a join.me session? I swear I'm going bonkers over this...

So I'm going through all the Marvel stats.

Under INDICES STORE DELETED DOCUMENTS there are as many deletions as there are inserts from my bulk job...

Is ES evicting my documents? Have I reached some threshold?

The only other thing I can think of is to look closely at your IDs. If your document IDs are colliding, Elasticsearch treats each collision as an update, which counts the old version as a deletion. Make sure all of your IDs are unique.
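A quick way to sanity-check a batch before sending it is to count distinct IDs client-side. The helper below is a hypothetical sketch (not part of the original verticle): each duplicate ID becomes an in-place update in Elasticsearch, so the old version shows up in the "deleted documents" stats until a merge reclaims it, and the index doc count only grows by the number of distinct IDs.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class IdCollisionCheck {

    // Returns the IDs that appear more than once in the batch. Every repeat
    // is an overwrite: Elasticsearch flags the previous version as deleted,
    // which is exactly the "as many deletions as inserts" pattern in Marvel.
    static Set<String> findDuplicateIds(List<String> ids) {
        Set<String> seen = new HashSet<>();
        Set<String> duplicates = new HashSet<>();
        for (String id : ids) {
            if (!seen.add(id)) {
                duplicates.add(id);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        // Hypothetical batch where a generator recycled IDs per thread.
        List<String> ids = Arrays.asList("t1-1", "t1-2", "t2-1", "t1-1", "t2-1");
        System.out.println("distinct=" + new HashSet<>(ids).size()
                + " duplicates=" + findDuplicateIds(ids));
    }
}
```

Running this against a slice of the 192,000-document batch would immediately show whether the data generator is reusing IDs.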


Yep. I swear I didn't change anything in my bulk logic, but who knows hehe! :stuck_out_tongue:

Yep! Nothing to see here. My stupidity. I had accidentally ticked something in my JMeter script that generates the data, causing it to recycle the IDs per thread :stuck_out_tongue: