Deleting an index setting

Hi All,

I added an index setting using the curl command below:

curl -XPUT 'localhost:9200/<myindex>/_settings' \
    -d '{"index.routing.allocation.disable_allocation": false}'

Now I want to remove this setting and bring the index back to its default state. I tried the command below:

curl -XDELETE 'localhost:9200/<myindex>/_settings' -d '{"index.routing.allocation.disable_allocation"}'  

but I am getting the exception below:

{"error":"TypeMissingException[[_all] type[[_settings]] missing: No index has the type.]","status":404}

How can I remove this setting safely and revert to the default?

Regards,

By using true instead of false:

 curl -XPUT 'localhost:9200/<myindex>/_settings' \
    -d '{"index.routing.allocation.disable_allocation": true}'

What is the default value of index.routing.allocation.disable_allocation? Is it true?

It's false, because you want things to be allocated.

Basically the issue was that one of my indices went into a bad state and all of its shards were UNASSIGNED. So I executed the command below to start the shards:

curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
     "commands": [
        {
            "allocate": {
                "index": "<Myindex>",
                "shard": '4',
                "node": "MyNode",
                "allow_primary": true
          }
        }
    ]
  }'

So what other things do I need to take care of before executing this command? What I did was set index.routing.allocation.disable_allocation to false and then run the reroute command.
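
Before running a reroute like the one above, it helps to see exactly which shards are unassigned. A quick sketch using the cat shards API (available on 1.x):

    # list every shard and keep only the unassigned ones
    curl -s -XGET 'localhost:9200/_cat/shards' | grep UNASSIGNED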

See https://www.elastic.co/guide/en/elasticsearch/reference/1.6/indices-update-settings.html:

index.routing.allocation.disable_allocation
Disable allocation. Defaults to false. Deprecated in favour for index.routing.allocation.enable

Not sure which version you're using. This documentation is from 1.6.
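
If your version already supports the newer setting mentioned there, re-enabling allocation on the index would look roughly like this (a sketch; index.routing.allocation.enable accepts all, primaries, new_primaries or none):

    curl -XPUT 'localhost:9200/<myindex>/_settings' \
        -d '{"index.routing.allocation.enable": "all"}'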

I am using 1.4.4.
Is it safe to run the reroute command while the index is open?

Yes. Elasticsearch will keep using the local copy until re-routing is complete.
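
While the reroute and recovery are in flight, progress can be watched with the cluster health API, for example:

    # relocating_shards and unassigned_shards should drop back to 0
    curl -XGET 'localhost:9200/_cluster/health?pretty'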

Thank you very much, Aaron. I have asked this question in another thread, but I would like to ask it again here.
Why do primary shards go to the UNASSIGNED state after starting Elasticsearch? This has happened a couple of times in our production environment, and it is scary.

Any help will be greatly appreciated.

Thanks.

There should be something in your logs about why.
Could it be a disk space issue?
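
A quick way to rule disk space in or out, assuming the cat allocation API is available on your version:

    # shows shard counts and disk usage per node
    curl -XGET 'localhost:9200/_cat/allocation?v'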

That question should stay in its own thread. Let's please try to keep topics uniform.

Hi Mark,
It is a 2 TB disk and only 3.1% of the space has been used. I have gone through all the logs, but I am unable to find any information.

Below is the only log entry I got:

Failed to save log: org.elasticsearch.action.UnavailableShardsException: [logindex][4] Primary shard is not active or isn't assigned is a known node. Timeout: [1m], request: index .....
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.retryBecauseUnavailable(TransportShardReplicationOperationAction.java:785) [elasticsearch-1.4.4.jar:]
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.doStart(TransportShardReplicationOperationAction.java:402) [elasticsearch-1.4.4.jar:]
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$3.onTimeout(TransportShardReplicationOperationAction.java:501) [elasticsearch-1.4.4.jar:]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:239) [elasticsearch-1.4.4.jar:]
    at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:520) [elasticsearch-1.4.4.jar:]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [rt.jar:1.8.0_25]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [rt.jar:1.8.0_25]
    at java.lang.Thread.run(Thread.java:745) [rt.jar:1.8.0_25]

I notice that you changed your earlier response from using 1.7.1 to 1.4.4. Are all of the nodes in your cluster running 1.4.4, or are some running 1.7.1?
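
One way to double-check the version on every node is the cat nodes API; a sketch, assuming the name and version columns are available:

    # print each node's name and Elasticsearch version
    curl -XGET 'localhost:9200/_cat/nodes?v&h=name,version'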

Sorry for the typo. By mistake I took the version from a separate cluster.
It is actually 1.4.4. I have cross-verified both the client and the server; both are running 1.4.4.

Please start a new topic for the new question, or use the other thread you mentioned. Thanks.