I have an Elasticsearch cluster with 2 nodes. We persist e-commerce customer behavior events (product_view, add_to_basket, etc.) in an index named events. We ingest about 30 GB of data per day and currently create hourly indices (1 primary shard, 1 replica each).
Each server has 32 GB of RAM, 16 GB of which is assigned to the Elasticsearch heap.
The problem is that the Elasticsearch nodes occasionally produce a heap dump and stop. I don't know how to troubleshoot this or find the root cause.
We have some nested type fields in the mapping, and I suspect these mappings are somehow affecting the system.
How should I troubleshoot this, and where should I start? Can you give me some clues?
We want to keep 3 months of data. We originally used daily indices, but we thought that might be causing the problem and switched to hourly ones. We are still getting heap dumps, though.
Hourly indices seem like overkill. A daily index with 2 primary shards would likely work better: at 30 GB/day that puts each shard around 15 GB, whereas hourly indices create 24× as many shards per day. Having lots of small shards is inefficient, since each shard carries fixed heap and file-handle overhead regardless of its size. Do you have X-Pack monitoring installed?
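As a first troubleshooting step, the cat APIs can show how many shards the cluster is carrying and how close each node is to its heap limit. A minimal sketch, assuming the cluster listens on localhost:9200 without authentication:

```shell
# List every shard with its size. With hourly indices retained for
# 3 months (1 primary + 1 replica each), expect thousands of rows here.
curl -s "localhost:9200/_cat/shards?v&s=index"

# Per-node heap usage. Heap pressure that stays high for long periods
# typically precedes the OutOfMemoryError that triggers a heap dump.
curl -s "localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max,ram.percent"

# Cluster-wide index count and shard totals at a glance.
curl -s "localhost:9200/_cat/indices?v&s=index" | wc -l
```

Comparing the shard count against the per-node heap figures should make it clear whether shard overhead is the culprit before you dig into the nested mappings.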