I created an Elasticsearch cluster with 4 instances, all running Elasticsearch 0.90.10.
Each instance has a 6 GB heap, so the total heap size is 24 GB. Each index has 5 shards,
and each shard has 1 replica. A new index is created every day, so all indices are
nearly the same size.
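
For reference, each daily index is created with the equivalent of the request below. The index name pattern, the host/port, and the use of a Python script here are only placeholders to show the settings; the important part is the 5 shards and 1 replica:

    # Placeholder sketch of how a daily index is created (5 shards, 1 replica).
    # "logs-YYYY.MM.DD" and "localhost:9200" are stand-ins for my real names/hosts.
    import json
    import urllib.request
    from datetime import date

    settings = {"settings": {"number_of_shards": 5, "number_of_replicas": 1}}
    index_name = "logs-%s" % date.today().strftime("%Y.%m.%d")

    req = urllib.request.Request(
        "http://localhost:9200/%s" % index_name,
        data=json.dumps(settings).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    print(urllib.request.urlopen(req).read())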
When the total data size reaches around 100 GB (replicas included), my cluster begins
to fail to allocate some of the shards (status yellow).
After I delete some old indices and restart all the nodes, everything is
fine again (status green). If I do not delete any data, the status eventually turns red.
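
When this happens, I check the unassigned shards with something like the following (again, host and port are placeholders for my setup):

    # Minimal check of cluster health and which daily indices are not green,
    # assuming the cluster answers on localhost:9200.
    import json
    import urllib.request

    health = json.load(urllib.request.urlopen(
        "http://localhost:9200/_cluster/health"))
    print("status:", health["status"])
    print("unassigned shards:", health["unassigned_shards"])

    # Per-index view, to see which indices hold the unassigned shards.
    per_index = json.load(urllib.request.urlopen(
        "http://localhost:9200/_cluster/health?level=indices"))
    for name, idx in per_index["indices"].items():
        if idx["status"] != "green":
            print(name, idx["status"], idx["unassigned_shards"])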
So, I am wondering: is there any relationship between heap size and
total data size? Is there any formula to determine heap size based on
total data size?
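
For a sense of scale, here is roughly how my own numbers work out. The 30-day retention figure is only an illustrative assumption; everything else comes from the setup described above:

    # Back-of-the-envelope numbers: 4 nodes with 6 GB heap each, daily indices
    # with 5 primaries + 5 replicas, and ~100 GB of total data when trouble starts.
    nodes = 4
    heap_per_node_gb = 6
    shards_per_index = 5 * 2           # 5 primaries + 5 replicas
    total_heap_gb = nodes * heap_per_node_gb

    days = 30                          # hypothetical retention, for illustration only
    total_shards = days * shards_per_index
    data_gb = 100

    print("total heap:         %d GB" % total_heap_gb)              # 24 GB
    print("total shards:       %d" % total_shards)                  # 300
    print("shards per node:    %.0f" % (total_shards / nodes))      # 75
    print("data-to-heap ratio: %.1f : 1" % (data_gb / total_heap_gb))  # ~4.2 : 1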