Getting ProcessClusterEventTimeoutException while bulk indexing: "failed to put mappings on indices"

Hey everyone,

I'm indexing about 8 million documents using a bulk processor with concurrent requests set to 2 and max bulk size set to 5 MB. The cluster has 2 nodes with 2 GB of memory each.

I'm getting the following error:

[2015-05-22 12:40:06,000][DEBUG][action.admin.indices.mapping.put] failed to put mappings on indices [[index_1432298356915]], type [stuff]
org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-mapping [stuff]) within 30s
    at org.elasticsearch.cluster.service.InternalClusterService$2$1.run(InternalClusterService.java:270)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

These 8 million documents are spread across 1000 indices. The interesting thing is that previously I was indexing 13 million documents across 50 indices, and that did not result in the above exception.

Could anyone shed some light on what exactly is going on here? Is there a way to configure this timeout value? I've read some threads on Google Groups about a master_timeout that can be set on every request to fix this; could someone give an example of how that can be done?
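From those threads, my guess is that it would be a query parameter on the put-mapping request over the REST API, something like the sketch below. The index/type names, the field, and the 60s value are just placeholders I made up; please correct me if this is wrong:

```shell
# Guess: pass master_timeout as a query parameter on the put-mapping call
# so the master has longer than the default 30s to apply the mapping change.
# Index name, type, field, and the 60s value are illustrative placeholders.
curl -XPUT 'http://localhost:9200/index_1432298356915/_mapping/stuff?master_timeout=60s' -d '
{
  "stuff": {
    "properties": {
      "title": { "type": "string" }
    }
  }
}'
```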

Any help would be greatly appreciated

Thanks

Are you monitoring your cluster? Given that number of indices, I'd say you are seeing resource pressure, which can manifest in this way.

Hi, may I ask how many indices a cluster of this size could handle well?

This thread is really, really old. Please start a new one with your question.