Not enough active copies to meet write consistency of [ALL]

I have set the consistency level to "WriteConsistencyLevel.ALL".
But when trying to write, I get the below exception:

```
! org.elasticsearch.action.UnavailableShardsException: [app-names][0] Not enough active copies to meet write consistency of [ALL] (have 1, needed 2). Timeout: [1m], request: index
! at$PrimaryPhase.retryBecauseUnavailable( ~
! at$PrimaryPhase.performOnPrimary( ~
! at$PrimaryPhase$1.doRun(
! at ~
! at java.util.concurrent.ThreadPoolExecutor.runWorker( ~[na:1.8.0_51]
! at java.util.concurrent.ThreadPoolExecutor$ ~[na:1.8.0_51]
! at [na:1.8.0_51]
```

Let me know how this can be fixed.


It means that one of the shards was not available for writing, and the indexing operation therefore failed as per your setting. You either have to wait until all shards are available or relax the consistency level.
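If relaxing it is acceptable, note that in 1.x the consistency level can also be set per request via the `consistency` URL parameter, so you don't have to change it globally. A minimal sketch (index name taken from your stack trace; the document type, id, and body are made up):

```shell
# Index one document with write consistency relaxed to "one"
# for this request only (ES 1.x `consistency` URL parameter).
curl -XPUT 'http://localhost:9200/app-names/app/1?consistency=one' -d '{
  "name": "example-app"
}'
```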

I am using the default elasticsearch.yml; all the values are default apart from the cluster name, and my ES is running locally. Can you suggest what changes to make, or point out my mistake?

As far as I know, ALL is not the default consistency level, and it is what is causing Elasticsearch to reject the write.

If you are running on a single node cluster changing this setting makes no sense as Elasticsearch will never allocate a primary and replica shard to the same node. The replica will therefore remain unassigned until you add another node, and this setting will fail all writes until that happens. You can get around this by changing the number of replicas for the index to 0.
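For an existing index, that change can be made through the update-settings API; a sketch assuming the app-names index from your stack trace:

```shell
# Set the replica count to 0 so all shard copies can be active
# on a single node.
curl -XPUT 'http://localhost:9200/app-names/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'
```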

Setting the index.number_of_replicas to 0 doesn't help.

How are you setting it? Are you setting it in elasticsearch.yml or updating the setting for each existing index?

Setting it in elasticsearch.yml

That setting is the default setting and only applied as new indices are created. You will need to update the settings of the existing indices.
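To update all existing indices at once rather than one by one, the same setting can be applied through the update-settings API without naming an index (a sketch, assuming Elasticsearch is reachable on localhost:9200):

```shell
# Apply number_of_replicas=0 to every existing index;
# the line in elasticsearch.yml only affects indices created afterwards.
curl -XPUT 'http://localhost:9200/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'
```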

I am actually dropping all indices, then making the settings changes and bringing Elasticsearch back up, and then creating the indices. So the changes should be effective, but it still doesn't work. Let me know if there is anything wrong.

What does your configuration look like? When you have recreated indices, what does the output from _cat/indices look like?

```
health status index     pri rep docs.count docs.deleted store.size pri.store.size
green  open   app-names   5   0          1            0        3kb            3kb
```

@Christian_Dahlqvist Any suggestions? Still facing the issue. Please help.

Are you still seeing exactly the same error message when trying to index into the app-names index?

No. The document gets indexed, but the refresh doesn't happen.

What is your refresh interval set to? Are you able to retrieve the indexed record through the GET API?

Yes, I am able to get the indexed record through the GET API. I don't want to force an explicit index refresh.

The default refresh interval is 1 second, so unless you have modified this I do not see why the data should not be made available for search.
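To rule this out, you can inspect the index settings and, if needed, set the interval explicitly; a sketch using the app-names index:

```shell
# Show current settings; refresh_interval only appears here if it
# was changed from the 1s default.
curl -XGET 'http://localhost:9200/app-names/_settings?pretty'

# Explicitly set the refresh interval back to the default.
curl -XPUT 'http://localhost:9200/app-names/_settings' -d '{
  "index": { "refresh_interval": "1s" }
}'
```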

I am facing a similar issue in my test cases. I have set the replicas to 0 and number_of_shards to 1, but I get this error: "Not enough active copies to meet write consistency of [ALL] (have 1, needed 2)." Please suggest what could be the issue. The version used is 1.7.1.

"have 1, needed 2" means that the document should go to two shards, but only one is available. That tells us that the index you are indexing into has 1 replica, not 0, so for some reason setting replicas to 0 didn't take effect. Also, the default consistency is QUORUM, not ALL. Using consistency ALL in a single-node cluster with an index with default settings would definitely cause this, as the default settings are 5 shards and 1 replica, and replicas cannot be assigned on a single-node cluster.

Let me know if this helps

This is how I am starting my server:

```java
// Node-level settings; these only act as defaults for indices
// created on this node.
final Map<String, String> settingsMap = new HashMap<>();
settingsMap.put("index.number_of_shards", "1");
settingsMap.put("index.number_of_replicas", "0");
Settings settings = ImmutableSettings.settingsBuilder().put(settingsMap).build();

node = nodeBuilder()
        .settings(settings)
        .node();
```

Then I put a template mapping on this node and just send data from the test case. Is there any other way to handle this?
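One alternative worth noting (a sketch, not necessarily what your test harness requires): create the index explicitly with the desired settings before sending data, so the shard and replica counts do not depend on node-level defaults or templates:

```shell
# Create the index with explicit per-index settings up front.
curl -XPUT 'http://localhost:9200/app-names' -d '{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'
```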