How to solve red indices

How do I diagnose and fix red indices? Here is the output of _cluster/health, followed by _cat/indices?v:

{
  "cluster_name": "clus_1",
  "status": "red",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 3,
  "active_shards": 3,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 3,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 50.0
}

health status index                 pri rep docs.count docs.deleted store.size pri.store.size 
red    open   l-2016.02.06            1   0       
red    open   l-2016.03.15            1   0       
yellow open   .kibana                 1   1         11            1     49.7kb         49.7kb 
green  open   l-2016.03.14            1   0       3929            0      1.4mb          1.4mb 
green  open   l-2016.02.05            1   0       1701            0      1.1mb          1.1mb

Check your ES logs; they should say something.
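You can also narrow it down from the API side — a quick sketch, assuming the default localhost:9200 endpoint:

# Per-index health, to confirm which indices are red
curl -XGET 'http://localhost:9200/_cluster/health?level=indices&pretty'

# Every shard with its state; the red indices will show UNASSIGNED primaries
curl -XGET 'http://localhost:9200/_cat/shards?v'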

The ES log does not reveal much.

[2016-03-23 17:49:20,284][WARN ][bootstrap                ] unable to install syscall filter: syscall filtering not supported for OS: 'Windows 10'
[2016-03-23 17:49:21,125][INFO ][node                     ] [node_1] version[2.1.1], pid[5084], build[40e2c53/2015-12-15T13:05:55Z]
[2016-03-23 17:49:21,126][INFO ][node                     ] [node_1] initializing ...
[2016-03-23 17:49:21,197][INFO ][plugins                  ] [node_1] loaded [], sites []
[2016-03-23 17:49:21,224][INFO ][env                      ] [node_1] using [1] data paths, mounts [[OS (C:)]], net usable_space [377.5gb], net total_space [448.2gb], spins? [unknown], types [NTFS]
[2016-03-23 17:49:26,257][INFO ][node                     ] [node_1] initialized
[2016-03-23 17:49:26,257][INFO ][node                     ] [node_1] starting ...
[2016-03-23 17:49:26,674][INFO ][transport                ] [node_1] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-03-23 17:49:26,691][INFO ][discovery                ] [node_1] clus_1/m01X78DnTX2iztJDALuTXA
[2016-03-23 17:49:30,735][INFO ][cluster.service          ] [node_1] new_master {node_1}{m01X78DnTX2iztJDALuTXA}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-03-23 17:49:30,799][INFO ][http                     ] [node_1] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-03-23 17:49:30,800][INFO ][node                     ] [node_1] started
[2016-03-23 17:49:31,408][INFO ][gateway                  ] [node_1] recovered [5] indices into cluster_state

That's unusual.
Did you have a node problem that forced a shutdown or something similar?

Hi @warkolm

I did not have a node problem.
However, I may have forced a shutdown by killing the window that was running Elasticsearch.

I found this entry in an old log; it may be related to the problem.

[2016-03-19 19:13:17,015][INFO ][node                     ] [node_1] stopping ...
[2016-03-19 19:13:17,031][WARN ][netty.channel.DefaultChannelPipeline] An exception was thrown by an exception handler.
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown 
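
That exception fits a forced kill: closing the console window gives the process only a moment to stop before Windows terminates it. For future runs, a clean shutdown (Ctrl+C in the console, or running Elasticsearch as a Windows service via the 2.x service wrapper) should avoid stopping the node mid-write — a sketch of the usual wrapper commands, run from the Elasticsearch directory:

rem Run Elasticsearch as a Windows service so it can be stopped cleanly
bin\service.bat install
bin\service.bat start
bin\service.bat stop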

Nevertheless, how can I solve this problem?
The only fix I can think of is to delete those indices from ES and then tell Logstash to re-read the source data for those days by editing the config file (see the sketch below).
Please advise me if there is a better way.
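
Assuming the events come in through the Logstash file input (the path below is a placeholder for the actual source files), forcing a full re-read would look roughly like this:

input {
  file {
    path => "C:/logs/app-*.log"     # placeholder: the actual source files
    start_position => "beginning"   # read each file from the start
    sincedb_path => "NUL"           # forget previous read positions (Windows; use /dev/null elsewhere)
  }
}

Note that start_position only applies to files Logstash has not seen before, which is why the sincedb state has to be discarded as well.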

You will probably need to delete those indices and then reindex.
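
Deleting them by name should be enough — a minimal sketch using the two red indices from your _cat output, assuming the default localhost:9200 endpoint:

# Remove the two red indices; Logstash will re-create them on the next run
curl -XDELETE 'http://localhost:9200/l-2016.02.06,l-2016.03.15'

Once Logstash replays the source data for those days, the indices should come back green with fresh primaries.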