
Good afternoon everyone,

Today I was running my config with Logstash and everything was working great, and then all of a sudden I got this error message related to Kibana. It just repeats over and over again.

[2019-01-15T17:12:11,362][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-01-15T17:12:11,363][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-01-15T17:12:11,363][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2019-01-15T17:12:11,363][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>125}

I have plenty of disk space, about 1 TB still free. I am unsure what I need to do to get it working again. I have tried rebooting, restarting the services, and so on. I even found a solution someone had posted:

PUT .kibana/_settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}

I tried that, but it didn't work and I still get the same error. I have about 500 more gigabytes of CSV files that need to be indexed.

Are you indexing into the .kibana index? If not, then that request only clears the block on .kibana; it won't affect any other index that you may be indexing into.
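
If Logstash is writing to other indices, those need the same treatment. As a sketch (the _all target hits every index on the cluster, and setting the value to null resets it to its default):

PUT _all/_settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": null
    }
  }
}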

I am using Kibana.

This typically indicates that you have exceeded the flood stage disk watermark and that your storage is almost full. Can you please show the full output of df -k (assuming Linux), as well as how your Elasticsearch data path is configured (this may depend on how it was installed)? My guess is that your data path might not be what you expect it to be.
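
You can also check what Elasticsearch itself sees. A quick sketch using the standard cat and cluster settings APIs:

# Per-node disk usage as Elasticsearch sees it (disk.used, disk.avail, disk.percent)
GET _cat/allocation?v

# The disk watermark settings in effect (flood_stage defaults to 95%)
GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk

Note that on 6.x, freeing disk space does not remove the read_only_allow_delete block automatically; you still have to clear it with a PUT _settings call like the one shown earlier. (7.4 and later release the block on their own once usage drops below the high watermark.)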

How can I check this?

Which operating system are you using? How did you install Elasticsearch?

I am using CentOS 7 and I installed it with the packages from the site. Everything is pretty much the default setup except for my config file.

I should also add that it's in an ESXi 6.0 container.

What is the output of df -k on the host? Have you made any changes to the data path in the elasticsearch.yml file?
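
If it helps, you can also ask Elasticsearch directly where its data path is and how much space it sees there. A sketch using the standard node APIs (the default data path for an RPM package install is /var/lib/elasticsearch):

# Effective path settings for each node
GET _nodes/settings?filter_path=nodes.*.settings.path

# Filesystem stats per node: total, free, and available bytes for each data path
GET _nodes/stats/fs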
