The option preserve_existing means that Elasticsearch will not override existing settings, leaving them untouched. Please try removing that option and see if it helps.
I stand corrected: you set preserve_existing=false, which is the default and does not prevent overriding the setting.
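For reference, a minimal sketch of the update index settings call, assuming the cluster listens on localhost:9200 and the index is called my-index (both placeholders):

curl -X PUT "localhost:9200/my-index/_settings" -H 'Content-Type: application/json' -d'
{
  "index.blocks.read_only_allow_delete": false
}
'

# adding ?preserve_existing=true to the URL tells ES to skip settings that already have a value
curl -X PUT "localhost:9200/my-index/_settings?preserve_existing=true" -H 'Content-Type: application/json' -d'
{
  "index.blocks.read_only_allow_delete": false
}
'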
My guess is that one of the nodes holding the index has nearly run out of disk space and the situation has not been fixed. If so, ES will re-add the index block the next time cluster info is collected, which by default happens every 30 seconds.
If a node's disk usage exceeds the flood stage watermark, all indices with a shard on that node will be marked read-only-allow-delete.
If other nodes have disk space available (disk usage below the low watermark), ES should relocate shards off nodes with high disk usage (above the high watermark).
I recommend checking disk usage and the log files on your nodes.
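To check this from the cluster itself, the cat allocation API shows per-node disk usage, and the cluster settings API (with defaults included) shows the configured watermarks; localhost:9200 is a placeholder:

# per-node disk usage and shard counts
curl -s "localhost:9200/_cat/allocation?v"

# current watermark settings, including defaults
curl -s "localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true" | grep watermark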
Thanks @HenningAndersen, I cleaned up the disk and there are now 72GB free, but it is still reporting the errors below; it seems ES is not removing the index block automatically:
[2019-10-30T12:00:23,150][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
[2019-10-30T12:01:27,154][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-10-30T12:01:27,154][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>1}
[Dev root @ localhost /opt/elk/elasticsearch-6.5.0/data/nodes/0/indices]
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rootvg-lv_root
97G 20G 72G 22% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
/dev/sda1 240M 121M 107M 54% /boot
The path name of your folder contains 6.5 in it, so I assume you are running Elasticsearch 6.5.0. Prior to 7.4, the block is not removed automatically once disk space is freed; it is necessary to remove it manually using the index settings API (which you attempted earlier).
Did you try removing the index block after sorting out the disk space issue?
Also, is this a one-node cluster? If not, the block could be caused by disk usage on one of the other nodes in the cluster.
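If disk space is indeed fine on every node, a sketch of the manual reset on 6.x, here applied to all indices via _all (narrow the target if you prefer; localhost:9200 is a placeholder):

curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'
{
  "index.blocks.read_only_allow_delete": null
}
'

Setting the value to null removes the block setting entirely rather than just switching it to false.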