I have a single server running Logstash, Kibana and Elasticsearch 6.8, all on the same machine.
I was setting up ILM for the indices, just to set a retention policy and run a force_merge on old indices (older than 1 day), and "suddenly" I started receiving the error below in the Logstash log.
[2020-01-14T13:20:55,327][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/8/index write (api)];"})
[2020-01-14T13:20:55,327][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>80}
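From what I've read, FORBIDDEN/8/index write (api) means a write block was set on the index through the API (index.blocks.write: true), as opposed to the disk-watermark read_only_allow_delete block. If it helps, something like this should list which indices still carry a block (the filter_path is just to cut the noise):

```
GET _all/_settings?filter_path=*.settings.index.blocks
```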
I've already removed ILM from all the indices, but I'm still receiving this error:
POST _all/_ilm/remove
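In case it's relevant, my understanding is that removing the policy does not clear a block that was already applied; the block itself would have to be cleared manually with something like:

```
PUT _all/_settings
{
  "index.blocks.write": null
}
```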
I'm not running out of disk space:
[2020-01-14T13:23:54,864][INFO ][o.e.e.NodeEnvironment ] [I8x9STN] using [1] data paths, mounts [[/var (/dev/mapper/vg00-var)]], net usable_space [196.4gb], net total_space [399.9gb], types [ext4]
Below is the _cluster/health output:
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 3583,
  "active_shards" : 3583,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 115,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 96.89021092482423
}
The server has 64 GB of RAM; Logstash is configured with 2 GB of heap and Elasticsearch with 16 GB.
I'm able to insert data into a test index with Kibana.
The server currently has 751 indices and 3698 shards, down from many more: I was setting up ILM and index templates for each index prefix, with the proper number_of_shards and retention, to keep the server organized and avoid disk space issues.
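(For reference, those counts are from the cat APIs:

```
GET _cat/indices?v
GET _cat/shards?v
```
)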
Below is the JSON of one of the ILM policies. I'm not using rollover, since I have just one server.
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {
          "set_priority": {
            "priority": 100
          }
        }
      },
      "warm": {
        "min_age": "2d",
        "actions": {
          "forcemerge": {
            "max_num_segments": 1
          },
          "set_priority": {
            "priority": 50
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
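My best guess so far (and I may be wrong) is the forcemerge action in the warm phase: as far as I can tell, ILM in 6.x marks an index read-only before force merging, so removing the policy mid-step could leave index.blocks.write: true behind. While a policy is still attached, something like the following should show which step an index is on (myindex-2020.01.13 is just an example name):

```
GET myindex-2020.01.13/_ilm/explain
```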
Thanks