Unable to update index settings

I am trying to update "read_only_allow_delete" to false, but it is not working; it always comes back as true no matter how many attempts I have made.

bash-4.1$ date
Tue Oct 29 23:07:01 EDT 2019
bash-4.1$ curl -X PUT "localhost:9200/env_alerts/_settings?preserve_existing=false" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": false }'
{"acknowledged":true}
bash-4.1$ date
Tue Oct 29 23:07:30 EDT 2019
bash-4.1$ curl -X GET "localhost:9200/env_alerts/_settings?pretty" -H 'Content-Type: application/json'
{
  "env_alerts" : {
    "settings" : {
      "index" : {
        "number_of_shards" : "5",
        "blocks" : {
          "read_only_allow_delete" : "true"
        },
        "provided_name" : "env_alerts",
        "creation_date" : "1564035494469",
        "number_of_replicas" : "1",
        "uuid" : "YM4QbjxVQbiUbWeec-SxGw",
        "version" : {
          "created" : "6050099"
        }
      }
    }
  }
}

Hi @luok0,

the preserve_existing option means that Elasticsearch will not override existing settings, thus leaving the setting untouched. Please try removing that parameter and see if it helps.
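For example, the same call without the query parameter would look roughly like this (env_alerts is your index from the output above):

curl -X PUT "localhost:9200/env_alerts/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": false }'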

Hi @luok0,

I stand corrected: you set preserve_existing=false, which is the default and does not prevent overriding the setting.

My guess is that you have nearly run out of disk space on one of the nodes holding the index and that the situation has not been fixed yet. If so, ES will re-add the index block the next time cluster info is collected, which by default happens every 30 seconds.
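You can see per-node disk usage as Elasticsearch sees it with the cat allocation API, for example:

curl -X GET "localhost:9200/_cat/allocation?v"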

If a node's disk usage exceeds the flood stage watermark, all indices on that node will be marked read_only_allow_delete.
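To inspect the configured watermarks (including defaults), something along these lines should work:

curl -X GET "localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true&pretty" | grep watermark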

Since 7.4, ES will also remove the index block once disk usage falls below the high disk usage watermark.
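You can check which version you are actually running with a plain request against the root endpoint, which reports version.number:

curl -X GET "localhost:9200/"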

If other nodes have disk space available (disk usage below the low watermark), ES should relocate shards off nodes with high disk usage (above the high watermark).
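To see where the shards of the affected index currently sit, something like this works:

curl -X GET "localhost:9200/_cat/shards/env_alerts?v"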

I recommend checking disk usage and the log files on your nodes.
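In the logs, look for messages mentioning the flood stage disk watermark. The path below is just a guess based on a default archive install; adjust it for your setup:

grep -i "flood stage" /path/to/elasticsearch/logs/elasticsearch.log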

Thanks @HenningAndersen, I cleaned up the disk and now there is 72 GB available, but it is still reporting the errors below; it seems ES is not removing the index block automatically:

[2019-10-30T12:00:23,150][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
[2019-10-30T12:01:27,154][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-10-30T12:01:27,154][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}


[Dev root @ localhost /opt/elk/elasticsearch-6.5.0/data/nodes/0/indices]
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-lv_root
                         97G   20G   72G  22% /
tmpfs                    1.9G  0  1.9G 0% /dev/shm
/dev/sda1             240M  121M  107M  54% /boot

Hi @luok0,

your folder path contains 6.5 in it, so you are presumably on 6.5. Prior to 7.4, it is necessary to remove the block manually using the index settings API (the call you attempted earlier).
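Assuming the disk issue is resolved, a call like the following should clear the block on all indices (setting it to null resets it to the default); you can also target just env_alerts instead of _all:

curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": null }'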

Did you try removing the index block after sorting out the disk space issue?

Also, is this a one-node cluster? If not, the problem could be with other nodes in the cluster.
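You can list the nodes Elasticsearch knows about with, for example:

curl -X GET "localhost:9200/_cat/nodes?v"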

Hi @HenningAndersen, it's only a one-node cluster and the issue is fixed by removing the block manually.
Many thanks for your help :)
