Hi,
I am running Elasticsearch 1.5.2. My indexing strategy is to create a new index, push all the data into it, and then apply settings to the index (for example number_of_replicas and an alias).
Sometimes I get a weird exception in the server logs during the optimize call:
[action.admin.indices.optimize] [node] [my_index][2], node[DDoIG8BySM6AIWrmiZDwyw], [P], s[STARTED]: failed to execute [OptimizeRequest{maxNumSegments=5, onlyExpungeDeletes=false, flush=true, upgrade=false}]
org.elasticsearch.index.engine.OptimizeFailedEngineException: [my_index][2] force merge failed
at org.elasticsearch.index.engine.InternalEngine.forceMerge(InternalEngine.java:791)
at org.elasticsearch.index.shard.IndexShard.optimize(IndexShard.java:684)
at org.elasticsearch.action.admin.indices.optimize.TransportOptimizeAction.shardOperation(TransportOptimizeAction.java:110)
at org.elasticsearch.action.admin.indices.optimize.TransportOptimizeAction.shardOperation(TransportOptimizeAction.java:49)
at org.elasticsearch.action.support.broadcast.TransportBroadcastOperationAction$AsyncBroadcastAction$1.run(TransportBroadcastOperationAction.java:171)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.index.engine.FlushNotAllowedEngineException: [my_index][2] recovery is in progress, flush with committing translog is not allowed
at org.elasticsearch.index.engine.InternalEngine.flush(InternalEngine.java:601)
at org.elasticsearch.index.engine.InternalEngine.forceMerge(InternalEngine.java:782)
The code which reproduces this is:
// creating index with optimized settings for indexing
Settings indexSettings = ImmutableSettings.settingsBuilder()
.put("number_of_shards", 5)
.put("number_of_replicas", 0)
.put("refresh_interval", "-1")
.put("merge.policy.merge_factor", "50")
.put("index.merge.scheduler.max_thread_count", "1")
.build();
client.admin().indices()
.prepareCreate(newIndexName)
.setSettings(indexSettings)
.addMapping(mappingName, getFieldMapping())
.execute().actionGet();
// push data to elastic server
bulkProcessor.add(new IndexRequest(esIndexName, esMappingName, id).source(source)); // many times ~10mln
// switch the index to search settings (add replicas, re-enable refresh)
Settings updatedSettings = ImmutableSettings.settingsBuilder()
.put("number_of_replicas", 2)
.put("refresh_interval", "5s")
.build();
client.admin().indices().prepareUpdateSettings(newIndexName).setSettings(updatedSettings).execute().actionGet();
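// kick off a force merge of the new index down to at most 5 segments (fire and forget)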
client.admin().indices().prepareOptimize(newIndexName).setMaxNumSegments(5).execute();
I guess the problem is that I set number_of_replicas and then, while Elasticsearch is still replicating the data, I call the optimize command. Could that be the problem? If so, is there some way to know when the index is ready to be optimized?
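For example, would waiting for the index (and its new replicas) to reach green health before calling optimize be the right way to do it? This is just a sketch of what I have in mind; the 30 minute timeout is an arbitrary guess:

// wait until all replicas are allocated and recovery has finished
client.admin().cluster().prepareHealth(newIndexName)
.setWaitForGreenStatus()
.setTimeout(TimeValue.timeValueMinutes(30))
.execute().actionGet();
// only then force merge down to 5 segments
client.admin().indices().prepareOptimize(newIndexName).setMaxNumSegments(5).execute().actionGet();

Is this the recommended approach, or is there a better signal to wait for?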