Last week, I indexed 1.2 TB of data into a one-shard, zero-replica index on a
single node, and Elasticsearch worked fine.
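For reference, that single-shard, zero-replica baseline corresponds to index settings along these lines (a sketch of the settings body only; the index name and exact API call are not given in the post):

```json
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}
```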
Later, I tried to index the same amount of data with 24 shards on one node.
But when the index size reached 480 GB (20 GB per shard), Elasticsearch
crashed with the error "There is insufficient memory for the Java
Runtime Environment to continue."
After that, every time I restarted Elasticsearch, it soon crashed again
with the same error message.
My ES_HEAP_SIZE is 64g (on a machine with 96 GB of RAM). I assumed a 480 GB
index split across 24 shards would not be a huge load for Elasticsearch. So
why does ES run out of memory with this configuration? What should I do to
get rid of this problem and run more than 100 shards on a single node, as
this thread http://goo.gl/pAmKzb did? Thank you.
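One thing worth double-checking in this setup: the Elasticsearch documentation generally recommends keeping the heap at or below roughly 32 GB, so the JVM can use compressed object pointers, and at no more than half of physical RAM, leaving the rest to the OS filesystem cache that Lucene relies on. A minimal sketch of that configuration for a 96 GB machine (the 31g value is my assumption, not something from the original post):

```shell
# Sketch (assumed value): keep the heap under the ~32 GB compressed-oops
# threshold and well under half of the 96 GB of RAM, leaving the remainder
# for the OS filesystem cache that Lucene depends on.
export ES_HEAP_SIZE=31g
echo "$ES_HEAP_SIZE"
```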
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/07b9d5d7-64d0-413f-9cad-911ad1664233%40googlegroups.com.