Removing a node from the cluster and replacing it with a new one

Logstash logs:
[2018-10-18T09:49:18,297][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})

This is all I have. No other information in logs on nodes or masters.
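For what it's worth, that `FORBIDDEN/12/index read-only / allow delete (api)` block is what Elasticsearch applies when a node's disk usage crosses the flood-stage watermark (`cluster.routing.allocation.disk.watermark.flood_stage`). Depending on the Elasticsearch version, the block may not be released automatically even after disk space is freed, so it has to be cleared manually. A sketch, assuming the block was applied to all indices:

```
PUT _all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
```

After clearing it, Logstash should be able to index again.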

@loren This looks like the best solution.
@DavidTurner In this scenario I would have to do a lot of manual work in the AWS console, which I would like to avoid.

Also, I have a question regarding the number of nodes.
Let's assume one of the EBS volumes fails and the same situation happens again; that would mean logs are inaccessible for some period of time.
Currently the 9 nodes are c4.4xlarge instances. Would it be a better idea to have more, weaker nodes instead, e.g. 24 c5.xlarge instances? That would speed up cluster recovery, since less data would be stored on each node.

What is your opinion?