Elasticsearch heap alert

Hi there,

I don't have much knowledge of Elasticsearch memory management. Currently I have 445 indices in total, my data size is 6.8 GB, and my heap size is 4.6 GB. I am using Elastic Cloud to store my data, but I am getting the following alert from Elastic Cloud:

Elastic cloud cluster Heap Memory usage is more than 70%.

How does memory management work in Elasticsearch?
What does this alert mean?
How can I solve the heap issue?

How many shards do you have? You probably have too many shards per node.

May I suggest you look at the following resources about sizing:

https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing

And https://www.elastic.co/webinars/using-rally-to-get-your-elasticsearch-cluster-size-right

I have 562 primary and replica shards, across 3 nodes.

562 shards for 6.8 GB of data is way too much; a single shard would probably be able to hold your entire dataset.

Each shard consumes resources. That's why you should reduce that number drastically IMO.
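A rough back-of-the-envelope check, using only the numbers from this thread, shows how undersized each shard is:

```python
# Back-of-the-envelope check using the numbers from this thread.
total_data_gb = 6.8
total_shards = 562

avg_shard_mb = total_data_gb * 1024 / total_shards
print(f"average shard size: {avg_shard_mb:.1f} MB")
# Each shard holds only ~12 MB, while shards are generally sized in
# the tens-of-GB range -- so this dataset fits easily in one shard.
```

Every one of those 562 shards is a full Lucene index with its own fixed heap overhead, which is why the count, not the data size, is what drives the memory alert here.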


I have added a mapping for my indices that sets number_of_shards: 2 and number_of_replicas: 1 per index. So how can I reduce the number of shards if I have more than 100 indices?
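Those per-index settings multiply out quickly. A sketch of the arithmetic (the 140-index figure is hypothetical, chosen only to land near the 562 shards reported earlier in this thread):

```python
def total_shards(indices: int, primaries: int, replicas: int) -> int:
    # Every index carries `primaries` primary shards plus `replicas`
    # full copies of each primary.
    return indices * primaries * (1 + replicas)

# With number_of_shards: 2 and number_of_replicas: 1, each index costs
# 4 shards, so the cluster total grows linearly with the index count.
print(total_shards(140, 2, 1))  # 140 hypothetical indices -> 560 shards
print(total_shards(1, 1, 1))    # a single consolidated index -> 2 shards
```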

Why do you have so many indices?
Why are you setting number of shards = 2 ?

I have documents belonging to various spaces, and I have created an index for each space so I can add the related documents to it; that helps me with querying. My document and index counts change frequently, which is why I have so many indices. As for shards, I don't have much idea.

So how do I reduce the number of shards across all my indices, and will that solve my heap alert issue?


Depending on your version, you can look at the Shrink API: Shrink Index | Elasticsearch Guide [6.4] | Elastic
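As a sketch of what a shrink involves (request bodies only; the index name "my_index", the target "my_index_shrunk", and the node name "shrink_node" are placeholders — see the linked guide for the exact steps on your version):

```python
# Step 1: PUT /my_index/_settings
# Prepare the source index: block writes and move a copy of every
# shard onto one node, which the Shrink API requires.
prepare_settings = {
    "settings": {
        "index.routing.allocation.require._name": "shrink_node",
        "index.blocks.write": True,
    }
}

# Step 2: POST /my_index/_shrink/my_index_shrunk
# The target's primary count must be a factor of the source's
# primary count (shrinking 2 primaries down to 1 is valid).
shrink_body = {
    "settings": {
        "index.number_of_shards": 1,
        "index.number_of_replicas": 1,
    }
}
```

Shrinking has to be done index by index, so with 100+ indices it may be simpler to reindex everything into one consolidated index instead.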

I have documents belonging to various spaces, and I have created an index for each space.

Not sure what a "space" means. If it's only an attribute, then everything can go into the same index, which you can filter by space when querying.
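A sketch of that single-index alternative (the index name, field names, and the "marketing" value are illustrative, not from the thread): store the space as a keyword field and filter on it at query time.

```python
# PUT /documents -- one index for all spaces, with "space" as a
# keyword field so it can be filtered exactly and cheaply.
mapping = {
    "mappings": {
        "properties": {
            "space":   {"type": "keyword"},
            "content": {"type": "text"},
        }
    }
}

# GET /documents/_search -- return only documents from one space.
# A bool filter is cached and not scored, which suits this use case.
query = {
    "query": {
        "bool": {
            "filter": [{"term": {"space": "marketing"}}]
        }
    }
}
```

This keeps the shard count fixed no matter how many spaces come and go, instead of growing by 4 shards per space.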

also will that solve my heap alert issue?

I don't know. Maybe. You can also increase the heap size if you don't want to change your architecture.

Hi,

I have reduced the number of shards. Currently I have 1 index with 5 shards on 3 nodes, and my data size is 11.2 GB. I am not doing any CRUD operations on Elasticsearch (my instance is idle), yet my JVM heap usage is still increasing.

Can you please let me know why my JVM heap usage is changing continuously?

Do you have the Elasticsearch monitoring feature turned on?
Can you share the memory chart?

Here is my Elasticsearch performance graph:

Is this fine, or do you want more details? Please let me know.

That kind of saw-tooth pattern is perfectly normal and healthy. Garbage collection is triggered when usage goes above 75%, and then drops.

That's right. I wonder, though, why he is receiving alerts like:

Elastic cloud cluster Heap Memory usage is more than 70%.

Maybe this is not happening anymore, @surajdalvi?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.