health status index pri rep docs.count docs.deleted store.size pri.store.size
red open l-2016.02.06 1 0
red open l-2016.03.15 1 0
yellow open .kibana 1 1 11 1 49.7kb 49.7kb
green open l-2016.03.14 1 0 3929 0 1.4mb 1.4mb
green open l-2016.02.05 1 0 1701 0 1.1mb 1.1mb
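For reference, a listing like the one above comes from the `_cat/indices` API; on the 2.x releases it also accepts a `health` filter, which narrows the output to just the problem indices (`localhost:9200` is an assumption about where the node listens):

```shell
# List all indices with a header row (assumes Elasticsearch on localhost:9200)
curl -s 'localhost:9200/_cat/indices?v'

# Show only the red indices from the listing above
curl -s 'localhost:9200/_cat/indices?v&health=red'
```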
I did not have a node problem.
However, I may have forced a shutdown by killing the window that was running Elasticsearch.
I found this entry in an old log; it may be related to the problem.
[2016-03-19 19:13:17,015][INFO ][node ] [node_1] stopping ...
[2016-03-19 19:13:17,031][WARN ][netty.channel.DefaultChannelPipeline] An exception was thrown by an exception handler.
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
Nevertheless, how can I solve this problem?
Please advise me if there is a better way.
I can delete the indices from ES, and then tell Logstash to re-read the data for those particular indices by editing the config file.
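A minimal sketch of that recovery path, assuming the node listens on `localhost:9200` and using the two red index names from the listing above:

```shell
# Delete the two red indices so Logstash can rebuild them from the source data
curl -s -XDELETE 'localhost:9200/l-2016.02.06'
curl -s -XDELETE 'localhost:9200/l-2016.03.15'

# Check cluster health afterwards; with the red indices gone it should
# report yellow or green again
curl -s 'localhost:9200/_cat/health?v'
```

One caveat: if the Logstash pipeline uses the file input, it remembers how far it has read via its sincedb file, so editing the config alone may not be enough; you may also need to remove the sincedb entry (or set `sincedb_path` together with `start_position => "beginning"`) so the old files are re-read.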