Error: Index Open

Hi Team,
I am trying to open an old Filebeat index, but as soon as I open it my Elasticsearch cluster turns "red" due to unassigned shards, and Logstash ends up blocked as well.

The index I am trying to open is from 1-Sep-2016. Back then I was using the default shard and replica settings, which I have since changed to number_of_shards=2 and number_of_replicas=1. Am I getting this error because of that change? What else can cause unassigned shards? Please suggest.
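If it matters, the shards=2 / replicas=1 change would look roughly like this when applied through an index template (a sketch only; the template name and pattern below are placeholders, and the real Filebeat template also carries mappings, which I have omitted):

# placeholder template name and pattern, settings only
PUT _template/filebeat
{
  "template": "filebeat-*",
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}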

============================================
I checked the Elasticsearch master node logs and couldn't find any errors:
[2016-10-14 10:52:59,214][INFO ][cluster.metadata ] [es-master-node] opening indices [[filebeat-2016.09.01]]
[2016-10-14 10:53:15,016][INFO ][cluster.metadata ] [es-master-node] closing indices [[filebeat-2016.09.01]]
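In case it helps, the next time I open the index I can also capture the recovery progress of its shards, e.g. (a sketch; I have not captured this output yet):

# run while the index is open
GET _cat/recovery/filebeat-2016.09.01?v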

More Information:

GET _cat/indices/filebeat-2016.09.01 => close filebeat-2016.09.01

GET filebeat-2016.09.01 =>

"settings": {
"index": {
"creation_date": "1472688048187",
"uuid": "F1OO5S6jSyqbSviMkn4HAw",
"number_of_replicas": "1",
"number_of_shards": "5",
"version": {
"created": "2030199"
}
}
}
,

Opening the index immediately turns the cluster red:

POST filebeat-2016.09.01/_open
GET /_cluster/health?pretty=true =>
{
  "cluster_name": "prod-elk",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 6,
  "number_of_data_nodes": 4,
  "active_primary_shards": 130,
  "active_shards": 262,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 10,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 96.32352941176471
}
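The 10 unassigned shards match the 5 primaries plus 5 replicas of filebeat-2016.09.01, so it looks as if none of this index's shard copies get allocated when it is opened. While it is open I can also list them together with the reason they are unassigned, e.g. (a sketch; this assumes the unassigned.reason column is available in _cat/shards on 2.3.x, which is what version.created 2030199 suggests we are running):

# assumes unassigned.reason is a supported _cat/shards column in this version
GET _cat/shards/filebeat-2016.09.01?v&h=index,shard,prirep,state,unassigned.reason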

Closing the index again brings the cluster straight back to green:

POST filebeat-2016.09.01/_close

GET /_cluster/health?pretty=true =>

{
  "cluster_name": "prod-elk",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 6,
  "number_of_data_nodes": 4,
  "active_primary_shards": 130,
  "active_shards": 262,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}

Regards...