Error [FORBIDDEN/8/index write (api)] and Elasticsearch stops receiving Logstash bulk requests

I have a server running Logstash, Kibana and Elasticsearch 6.8, all on the same machine.

I was setting up ILM for the indices, just to set a retention policy and execute force_merge on old indices (older than 1 day), and "suddenly" I started receiving the error below in the Logstash log.

[2020-01-14T13:20:55,327][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/8/index write (api)];"})
[2020-01-14T13:20:55,327][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>80}

I've already removed the ILM policy from all the indices, but I'm still receiving this error.

POST _all/_ilm/remove
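To double-check that the removal took effect, one option (a sketch, assuming Elasticsearch is reachable on localhost:9200) is to ask the ILM explain API which indices it still manages:

```shell
# Confirm no index is still managed by ILM after the removal
# (each index should report "managed": false).
curl -s 'http://localhost:9200/*/_ilm/explain?filter_path=indices.*.managed&pretty'
```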

I'm not running out of disk space:

[2020-01-14T13:23:54,864][INFO ][o.e.e.NodeEnvironment ] [I8x9STN] using [1] data paths, mounts [[/var (/dev/mapper/vg00-var)]], net usable_space [196.4gb], net total_space [399.9gb], types [ext4]

See the _cluster/health output below:

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 3583,
  "active_shards" : 3583,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 115,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 96.89021092482423
}

The server has 64GB of RAM; Logstash is set up with a 2GB heap and Elasticsearch with a 16GB heap.

I'm able to insert data into a test index with Kibana.

The server has 751 indices and 3698 shards, but it used to be many more; I was setting up ILM and templates for each index prefix to have the proper number_of_shards and retention, to keep the server organized and avoid disk space issues.

Below is the JSON of one of the ILM policies. I'm not using rollover since I have just one server.

"policy": {
  "phases": {
    "hot": {
      "min_age": "0ms",
      "actions": {
        "set_priority": {
          "priority": 100
        }
      }
    },
    "warm": {
      "min_age": "2d",
      "actions": {
        "forcemerge": {
          "max_num_segments": 1
        },
        "set_priority": {
          "priority": 50
        }
      }
    },
    "delete": {
      "min_age": "30d",
      "actions": {
        "delete": {}
      }
    }
  }
}
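For reference, a policy like this is normally attached to new indices through an index template; a minimal sketch (the template name, index pattern and policy name here are hypothetical, adjust them to your prefixes):

```shell
# Hypothetical legacy template for 6.8: every new logstash-* index is
# created with this ILM policy already assigned.
curl -X PUT -H 'Content-Type: application/json' \
  'http://localhost:9200/_template/logstash_ilm_example' -d '
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.lifecycle.name": "my_example_policy"
  }
}'
```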


As a workaround I've executed the command below.

curl -X PUT -H "Content-Type: application/json" 'http://localhost:9200/_all/_settings' -d '{ "index": { "blocks": { "write": "false" } } }'
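To verify the block is actually cleared, a follow-up check could look like this (a sketch, assuming the same localhost:9200 endpoint):

```shell
# Each index should now report index.blocks.write=false (or no entry at all);
# any remaining "true" means that index still rejects writes.
curl -s 'http://localhost:9200/_all/_settings/index.blocks.write?flat_settings&pretty'
```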

I found this workaround in another topic.

The doubt that I have is: why did it happen? Was it because of the configuration in the warm phase? Without rollover, I set up force merge with a "Timing for warm phase" of 1 day from index creation.




How old are those indices? If they are all older than 2 days, maybe Elasticsearch tried to merge them all at once. While a forcemerge is active, the index size rises; if there is nothing to merge, it can double.

As far as I know from experience.

If all of them start at once, the total size of your indices may reach the limit.

net usable_space [196.4gb], net total_space [399.9gb]

Past a certain disk-usage threshold, Elasticsearch locks all indices.
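Worth noting, as far as I know: the flood-stage disk watermark produces a different block, [FORBIDDEN/12/index read-only / allow delete (api)], while FORBIDDEN/8 is the plain index.blocks.write setting, so it's worth checking both the block reason and the watermarks. The watermarks in effect can be inspected like this (a sketch, assuming localhost:9200):

```shell
# Show the disk watermark settings (6.x defaults: low 85%, high 90%,
# flood_stage 95%), including defaults that were not explicitly overridden.
curl -s 'http://localhost:9200/_cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk&pretty'
```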

Maybe someone can confirm my theory.

Good afternoon

There are indices up to 90 days old. Would ILM perform a force merge on an index where it has already performed this action?

Or is the best approach in my case to set the warm phase's "Timing for warm phase" to a greater threshold and gradually decrease it until I reach the desired value?


Oh, there is maybe another, more likely explanation.

Forcemerge should only be executed on read-only indices (see the forcemerge documentation).

Maybe your Logstash is trying to write to a "forcemerged", and therefore "read-only", index.

Maybe you should check whether those indices are still being written to.
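One way to check (a sketch, assuming localhost:9200, with "logstash-*" as an example pattern for the daily indices) is to look at which ILM phase and action each index is currently in:

```shell
# Show the ILM phase/action/step per index; an index sitting in the warm
# phase around the forcemerge step may have been marked read-only by ILM.
curl -s 'http://localhost:9200/logstash-*/_ilm/explain?pretty'
```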

I didn't use forcemerge directly; I set it up through ILM.

My point is how to avoid having this problem again.


You can avoid this by not writing into a forcemerged index.

I don't know what your indices look like.

It looks like Logstash is trying to write into an "old" index. Do you have daily indices like "logstash-2020.01.01"? Or do you use aliases?

I have only daily indices, and Logstash receives data from Filebeat instances... That's what seems weird: after setting up ILM for all indices, new indices stopped receiving writes too.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.