When I refresh the field list, it fails with "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]; [cluster_block_exception]"

Hi,
When I refreshed the field list in Management --> Kibana --> Index Patterns, it failed with the error "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]; [cluster_block_exception]". The detailed error information is as follows.
Error: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];
at http://xxx:xxx/bundles/commons.bundle.js:3:428829
at processQueue (http://xxx:5601/bundles/vendors.bundle.js:133:134252)
at http://xxx:xxx/bundles/vendors.bundle.js:133:135201
at Scope.$digest (http://xxx:xxx/bundles/vendors.bundle.js:133:146077)
at Scope.$apply (http://xxx:xxx/bundles/vendors.bundle.js:133:148856)
at done (http://xxx:xxx/bundles/vendors.bundle.js:133:101124)
at completeRequest (http://xxx:xxx/bundles/vendors.bundle.js:133:106024)
at XMLHttpRequest.xhr.onload (http://xxx:xxx/bundles/vendors.bundle.js:133:106783).

To solve the above issue, I executed the following API call via Dev Tools:
PUT /zipkin*/_settings
{
  "index.blocks.read_only_allow_delete": null
}
But the issue still exists. How can I solve it?
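
For reference, the current value of that setting on the matching indices can be checked from Dev Tools, to see whether the update actually took effect (a minimal sketch; zipkin* is just the pattern from my attempt above and may not cover every blocked index):
GET /zipkin*/_settings/index.blocks.*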

I set up Elasticsearch on 3 servers, and Kibana on one of those 3 servers. The following is the disk space information for the server that runs both Elasticsearch and Kibana.


This often means that you are running out of disk space and that the data path assigned to Elasticsearch is over 95% full, which means that the flood stage watermark has triggered and made all indices read-only.
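
A quick way to double-check is to ask Elasticsearch itself how full each node's data path is, for example from Dev Tools (a minimal sketch):
GET _cat/allocation?v
The disk.percent column shows, per node, how much of the disk holding the data path is in use; if any node is at or above 95%, the flood stage block will have been applied.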

I set up Elasticsearch on 3 servers, and Kibana on one of those 3 servers. The data path assigned to Elasticsearch is /xxx/data.

The following is the disk space information under /xxx/data on the server that runs Elasticsearch and Kibana.


The above picture shows that the disk is not over 95% full. So how can I solve my issue?

Is this the case on all 3 servers?

The version of Elasticsearch on all 3 servers is 6.3.0.
The data path assigned to Elasticsearch on each of the 3 servers is /xxx/data.

Below is the disk space information under /xxx/data on the 2nd server, which runs the 2nd Elasticsearch node.

Below is the disk space information under /xxx/data on the 3rd server, which runs the 3rd Elasticsearch node.

I replied to you about 16 hours ago. Sorry to rush you; I look forward to your reply.

Is there anything in the logs that indicates what happened?

Where are the logs you are referring to? Are they in the folder pointed to by path.logs in the elasticsearch.yml config file?

Which log file names should I provide?

I would recommend unblocking the indices as outlined in the documentation I linked to earlier. If they then go back to read-only status, you should probably see something in the my-elasticsearch.log file on one of the hosts.
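
For example, from Dev Tools you can first list which indices still carry the block (a sketch; indices where the setting is unset simply come back empty):
GET /_all/_settings/index.blocks.read_only_allow_delete
The same settings update you ran on zipkin* can also be applied to _all to reset the block on every index at once.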

This is the log content from my-elasticsearch.log on the 1st Elasticsearch server:

This is the log content from my-elasticsearch.log on the 2nd Elasticsearch server:

This is the log content from my-elasticsearch.log on the 3rd Elasticsearch server:
[2019-02-14T01:00:00,001][INFO ][o.e.x.m.e.l.LocalExporter] cleaning up [2] old indices
[2019-02-14T01:00:00,031][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-3] [.monitoring-kibana-6-2019.02.06/4xwQZb8fTU69ZTrrIy4q1w] deleting index
[2019-02-14T01:00:00,031][INFO ][o.e.c.m.MetaDataDeleteIndexService] [node-3] [.monitoring-es-6-2019.02.06/Hfv2tQDZQWakhdQ1k5HQiQ] deleting index
[2019-02-14T01:23:00,001][INFO ][o.e.x.m.MlDailyMaintenanceService] triggering scheduled [ML] maintenance tasks
[2019-02-14T01:23:00,002][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-3] Deleting expired data
[2019-02-14T01:23:00,009][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [node-3] Completed deletion of expired data
[2019-02-14T01:23:00,009][INFO ][o.e.x.m.MlDailyMaintenanceService] Successfully completed [ML] maintenance tasks
[2019-02-14T08:00:01,094][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-3] [.monitoring-es-6-2019.02.14] creating index, cause [auto(bulk api)], templates [.monitoring-es], shards [1]/[0], mappings [doc]
[2019-02-14T08:00:01,144][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [node-3] updating number_of_replicas to [1] for indices [.monitoring-es-6-2019.02.14]
[2019-02-14T08:00:01,152][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [node-3] [.monitoring-es-6-2019.02.14/28TgxEaoSwqJ7wLZ1he_Bw] auto expanded replicas to [1]
[2019-02-14T08:00:01,258][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-3] [idt7-sit-mcs-2019.02.14] creating index, cause [auto(bulk api)], templates , shards [5]/[1], mappings
[2019-02-14T08:00:01,589][INFO ][o.e.c.m.MetaDataMappingService] [node-3] [idt7-sit-mcs-2019.02.14/3a1otDYlSW-Ab81aHNe-mQ] create_mapping [doc]
[2019-02-14T08:00:01,659][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-3] [log-idt7-sit-msc-2019.02.14] creating index, cause [auto(bulk api)], templates [idt_logs], shards [5]/[1], mappings [default]
[2019-02-14T08:00:02,096][INFO ][o.e.c.m.MetaDataMappingService] [node-3] [log-idt7-sit-msc-2019.02.14/57N99nsfR86sikjDNAvZTg] create_mapping [doc]
[2019-02-14T08:00:02,273][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-3] [idt7-mcs-2019.02.14] creating index, cause [auto(bulk api)], templates , shards [5]/[1], mappings
[2019-02-14T08:00:02,540][INFO ][o.e.c.m.MetaDataMappingService] [node-3] [idt7-mcs-2019.02.14/1QbKGdo8Sgm9h8pyYFScmQ] create_mapping [doc]
[2019-02-14T08:00:02,611][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-3] [log-idt7-msc-2019.02.14] creating index, cause [auto(bulk api)], templates [idt_logs], shards [5]/[1], mappings [default]
[2019-02-14T08:00:03,069][INFO ][o.e.c.m.MetaDataMappingService] [node-3] [log-idt7-msc-2019.02.14/ljbejLbfQFqgY6pYHv6gOQ] create_mapping [doc]
[2019-02-14T08:00:03,620][INFO ][o.e.c.r.a.AllocationService] [node-3] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[idt7-mcs-2019.02.14][2]] ...]).
[2019-02-14T08:00:05,446][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-3] [.monitoring-kibana-6-2019.02.14] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0], mappings [doc]
[2019-02-14T08:00:05,510][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [node-3] updating number_of_replicas to [1] for indices [.monitoring-kibana-6-2019.02.14]
[2019-02-14T08:00:05,523][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [node-3] [.monitoring-kibana-6-2019.02.14/HxpVMtmPR9e8BluU1K7b4A] auto expanded replicas to [1]
[2019-02-14T08:00:06,244][INFO ][o.e.c.r.a.AllocationService] [node-3] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-kibana-6-2019.02.14][0]] ...]).
[2019-02-14T09:10:21,315][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-02-14T09:13:51,349][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-3] [zipkin:span-2019-02-14] creating index, cause [auto(bulk api)], templates [zipkin:span_template], shards [5]/[1], mappings [default, span]
[2019-02-14T09:13:51,623][INFO ][o.e.c.m.MetaDataMappingService] [node-3] [zipkin:span-2019-02-14/YFEQZka_QJOK7_qdKTxBnA] update_mapping [span]
[2019-02-14T09:13:52,179][INFO ][o.e.c.r.a.AllocationService] [node-3] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[zipkin:span-2019-02-14][4]] ...]).
[2019-02-14T09:17:09,728][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-02-14T09:18:23,530][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-02-14T09:26:15,334][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-02-14T10:35:41,289][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-02-14T10:39:39,658][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-02-14T10:52:13,570][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-02-14T10:52:37,376][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-02-14T13:48:53,053][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-02-14T14:13:00,946][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node-3] adding template [kibana_index_template:.kibana] for index patterns [.kibana]

After reading the above logs, what should I do next?

I set index.blocks.read_only_allow_delete to false according to https://www.elastic.co/guide/en/elasticsearch/reference/6.6/disk-allocator.html, which you recommended. However, it had no effect.

Then I am not sure what is going on. You might need to wait for someone else to help out.
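
If you want to dig a bit further, one thing that may be worth checking is whether the disk watermark settings that trigger this block (the flood stage watermark defaults to 95%) have been overridden anywhere, for example (a minimal sketch):
GET _cluster/settings
Any cluster.routing.allocation.disk.watermark.* values in the persistent or transient sections would take precedence over the defaults and over elasticsearch.yml.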

I am not sure whether someone else will show up. Could you please keep thinking about how to solve this issue?

I executed the following command, and then the issue was solved.
PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
But I do not understand why this issue appears when there is enough disk space.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.