Elasticsearch red indices on upgrade

I recently reinstalled Elasticsearch, upgrading from our previous 1.x version to 2.1.1.

I didn't set up the initial install, so I'm not 100% sure about the configuration, but my cluster has now gone red.

Looking at the cluster and indices I've found the following:

elastic@elk:/apps/elasticsearch-2.1.1/logs$ curl 'localhost:9200/_cat/nodes?v'
host      ip        heap.percent ram.percent load node.role master name
127.0.0.1 127.0.0.1            5          28 0.09 d         *      elasticsearch

elastic@elk:/apps/elasticsearch-2.1.1/logs$ curl 'localhost:9200/_cat/indices?v'
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   logstash-2016.01.13  10   1     732832            0    145.3mb        145.3mb
yellow open   logstash-2016.01.14  10   1    1758432            0    320.8mb        320.8mb
red    open   logstash-2016.01.15  10   1     381676            0    147.4mb        147.4mb
yellow open   .kibana               1   1          3            0     10.1kb         10.1kb
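
If it helps, the shard-level view should show which shards of the red index are unassigned. I haven't pasted that output here, but the command is:

curl 'localhost:9200/_cat/shards?v'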

There are also two Elasticsearch processes running; the top one is the current process and seems to be erroring. The second process looks like a zombie.

elastic@elk:/apps/elasticsearch-2.1.1/logs$ ps aux | grep elasticsearch
elastic    2317 36.7  3.4 5940116 420564 pts/1  Sl   10:29   2:31 /usr/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/apps/elasticsearch-2.1.1 -cp /apps/elasticsearch-2.1.1/lib/elasticsearch-2.1.1.jar:/apps/elasticsearch-2.1.1/lib/* org.elasticsearch.bootstrap.Elasticsearch start -d -p PID
elastic    2706  0.0  0.0   8216  2228 pts/1    R+   10:36   0:00 grep --color=auto elasticsearch

Looking at the startup logs, there are a few warnings:

[2016-01-15 10:29:43,609][WARN ][bootstrap                ] Unable to lock JVM Memory: error=12,reason=Cannot allocate memory
[2016-01-15 10:29:43,609][WARN ][bootstrap                ] This can result in part of the JVM being swapped out.
[2016-01-15 10:29:43,609][WARN ][bootstrap                ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-01-15 10:29:43,609][WARN ][bootstrap                ] These can be adjusted by modifying /etc/security/limits.conf, for example:
        # allow user 'elastic' mlockall
        elastic soft memlock unlimited
        elastic hard memlock unlimited
[2016-01-15 10:29:43,609][WARN ][bootstrap                ] If you are logged in interactively, you will have to re-login for the new limits to take effect.
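
Presumably memory locking is enabled in our elasticsearch.yml, since the node is trying to lock memory at all. I didn't set this up, so I'm guessing, but I assume it's something like this (config path assumed from the install location):

# /apps/elasticsearch-2.1.1/config/elasticsearch.yml (assumed location)
bootstrap.mlockall: true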

This is an ELK stack. Does anyone know how I can get this to green?

Setting the default number of replicas to 0 turned the yellow indices green. Then I deleted the red index, which was recreated afterwards as green.
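
For reference, the equivalent API calls would be something like this (I'm going from memory on exactly how I applied the replica change):

# drop replicas to 0 on all existing indices
curl -XPUT 'localhost:9200/_settings' -d '{"index": {"number_of_replicas": 0}}'
# delete the red index; Logstash recreated it on the next event
curl -XDELETE 'localhost:9200/logstash-2016.01.15'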

Fixed now.

Your output:

elastic@elk:/apps/elasticsearch-2.1.1/logs$ ps aux | grep elasticsearch
elastic    2317 36.7  3.4 5940116 420564 pts/1  Sl   10:29   2:31 /usr/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/apps/elasticsearch-2.1.1 -cp /apps/elasticsearch-2.1.1/lib/elasticsearch-2.1.1.jar:/apps/elasticsearch-2.1.1/lib/* org.elasticsearch.bootstrap.Elasticsearch start -d -p PID
elastic    2706  0.0  0.0   8216  2228 pts/1    R+   10:36   0:00 grep --color=auto elasticsearch

The second line in that output is not an Elasticsearch process; it's the grep command that you're piping the output of ps to!
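
A common trick to keep grep from matching itself, if that's useful:

ps aux | grep [e]lasticsearch
# or: pgrep -fl elasticsearch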

They're just warnings, but you should address them so the JVM memory cannot be swapped out. However, they are not the cause of your issue. Is there anything else in the logs?
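
On top of the limits.conf entries the warning already shows, you can confirm after a restart whether memory locking actually took effect; something along these lines (the exact JSON layout may differ slightly by version):

curl 'localhost:9200/_nodes/process?pretty'
# look for "mlockall": true in the response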

It looks like you're running a single node, with daily indices that each have ten primary shards, one replica, and not a very high document count. The replica setting explains why most of your indices are yellow (with the exception of the one red index): there is no second node to hold the replica copies. If you're only going to be running a single node, you should drop the replica count to zero, and you should definitely reduce the number of shards per index. If those index sizes are representative, one shard per index strikes me as perfectly reasonable.
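
One way to apply that to future daily indices is an index template; a sketch along these lines (the template name is just an example, and note that Logstash ships its own template too, so you may prefer to change the shard/replica counts there instead):

curl -XPUT 'localhost:9200/_template/logstash_single_node' -d '
{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}'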

Hi Jason, thanks for the response!

Yeah, looking at the grep line, it's obvious now!

I am running a single node, so I've dropped the replica count to zero and they all turned green, which is great.

I deleted the red index, which was then recreated and turned green. So it all works now.

On the shard count, the index sizes aren't representative right now, as we haven't started routing most of the data in yet; we're testing how this all works first. If we run into performance issues down the line, I'll look at changing the count.

Thanks!