How to troubleshoot memory problems

Hi,

I have an Elasticsearch cluster with 2 nodes. We persist e-commerce customer behavior events such as product_view and add_to_basket in an index named events. We receive about 30 GB of data per day, and we are using hourly indices (1 shard, 1 replica each).

Each server has 32 GB of RAM, of which 16 GB is assigned to the Elasticsearch heap.

The problem is that the Elasticsearch nodes somehow produce a memory (heap) dump and stop. I don't know how to troubleshoot this and find the root cause.

We have some nested type fields in the mapping, and I suspect these mappings are somehow affecting the system.

How should I troubleshoot this, and where should I start? Can you give me some clues?


{
  "name" : "node-3",
  "cluster_name" : "dh-elastic",
  "cluster_uuid" : "BIbNjAr9Rs28mWjVOLbYbA",
  "version" : {
    "number" : "5.4.0",
    "build_hash" : "780f8c4",
    "build_date" : "2017-04-28T17:43:27.229Z",
    "build_snapshot" : false,
    "lucene_version" : "6.5.0"
  },
  "tagline" : "You Know, for Search"
}
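
One concrete place to start is confirming that the nodes really die from heap exhaustion. In 5.x the bundled config/jvm.options enables a heap dump on OutOfMemoryError by default; a minimal sketch of the relevant lines (the dump path is an assumption, adjust it to your install):

# config/jvm.options
# generate a heap dump when an allocation from the Java heap fails (5.x default)
-XX:+HeapDumpOnOutOfMemoryError
# write dumps to a disk with enough free space (path is an assumption)
-XX:HeapDumpPath=/var/lib/elasticsearch/heapdumps

The resulting .hprof file can be opened in a heap analyzer such as Eclipse MAT to see which objects dominate the heap, and the node log (logs/dh-elastic.log by default, named after the cluster) should contain a java.lang.OutOfMemoryError stack trace from around the time of a crash.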

Why are you using hourly indices? How long do you keep your data?

We want to keep 3 months of data. It was a daily index, but we thought maybe that was causing the problem and changed it to hourly. We are still getting heap dumps, though.

Hourly indices seem like overkill. A daily index with 2 primary shards might be better. Having lots of small shards is inefficient as each shard has overhead. Do you have X-Pack monitoring installed?
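
As a rough sketch of the arithmetic (assuming 3 months of retention and 1 replica): hourly indices mean about 90 × 24 × 2 = 4,320 shards spread over just 2 nodes, while daily indices with 2 primaries mean about 90 × 4 = 360; each shard is a Lucene index with its own fixed heap overhead. The current counts can be checked with the _cat APIs, and the daily layout can be enforced with an index template; a sketch using the 5.x template syntax (the events-* naming pattern is an assumption):

GET _cat/indices?v&h=index,pri,rep,docs.count,store.size
GET _cat/shards?v

PUT _template/events_daily
{
  "template": "events-*",
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}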

OK, I will change to a daily index with 2 shards.

We don't have X-Pack, but I am using ElasticHQ for monitoring. I can install X-Pack; what should I check after installing it?

@Christian_Dahlqvist I have X-Pack now.

Elasticsearch "hangs" without any pattern, so where should I start for trouble shooting,
can u guide or direct me
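
A reasonable first pass, as a sketch (these are standard 5.x APIs; which heap consumer is actually to blame here is unknown): watch heap pressure and the usual heap consumers while the indexing is running:

# current heap pressure per node
GET _cat/nodes?v&h=name,heap.percent,heap.max,ram.percent

# GC counts and heap pool details
GET _nodes/stats/jvm

# common heap consumers: segment memory and fielddata
GET _nodes/stats/indices/segments,fielddata
GET _cat/fielddata?v

If heap.percent climbs steadily and the old-generation GC in the jvm stats stops reclaiming memory, a node is heading for the same OutOfMemoryError; the segments and fielddata numbers then indicate whether per-shard overhead or field data is what fills the heap.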
