I don't have much knowledge of Elasticsearch memory management. Currently I have 445 indices in total, my data size is 6.8 GB, and my heap size is 4.6 GB. I am using Elastic Cloud to store my data, but I am getting the following alert from Elastic Cloud:
Elastic cloud cluster Heap Memory usage is more than 70%.
How does memory management work in Elasticsearch?
What is the meaning of this alert?
How can I solve the heap issue?
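To put those numbers in shard terms, something like the following should show how many shard copies the 4.6 GB heap is carrying (a quick sketch against the _cat/shards API; the endpoint and credentials are placeholders):

```python
import requests

ES_URL = "https://my-deployment.es.io:9243"  # placeholder Elastic Cloud endpoint
AUTH = ("elastic", "changeme")               # placeholder credentials

# One line per shard copy: index name, shard number, p(rimary)/r(eplica), state.
shards = requests.get(
    f"{ES_URL}/_cat/shards?h=index,shard,prirep,state", auth=AUTH
).text.splitlines()

indices = {line.split()[0] for line in shards}
print(f"{len(indices)} indices, {len(shards)} shard copies on the cluster")
```

From what I understand, each shard copy keeps some fixed overhead on the heap, so is the shard count itself part of the problem, independently of the 6.8 GB of data?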
I have configured each index with number_of_shards set to 2 and number_of_replicas set to 1. So how do I reduce the number of shards when I have more than 100 indices?
I have documents belonging to various spaces, and I have created an index for each space so that I can add the related documents to it, which helps me with querying. My document and index counts change frequently, which is why I have so many indices. As for shards, I don't have much idea.
So how do I reduce the number of shards across all my indices, and will that solve my heap alert issue?
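Something like the shrink API looks like it could take an index from 2 primary shards to 1; below is a rough, untested sketch for a single index (the endpoint, credentials, index names, and node name are all placeholders), please correct me if this is the wrong approach:

```python
import requests

ES_URL = "https://my-deployment.es.io:9243"  # placeholder endpoint
AUTH = ("elastic", "changeme")               # placeholder credentials
SOURCE = "my-space-index"                    # placeholder source index
TARGET = "my-space-index-1shard"             # placeholder target index

# 1. Put a copy of every shard on one node and block writes,
#    which _shrink requires before it can run.
requests.put(
    f"{ES_URL}/{SOURCE}/_settings",
    auth=AUTH,
    json={
        "settings": {
            "index.routing.allocation.require._name": "instance-0000000000",
            "index.blocks.write": True,
        }
    },
)

# ...wait here until relocation finishes and the index health is green...

# 2. Shrink into a new index with a single primary shard.
requests.post(
    f"{ES_URL}/{SOURCE}/_shrink/{TARGET}",
    auth=AUTH,
    json={
        "settings": {
            "index.number_of_shards": 1,
            "index.number_of_replicas": 1,
        }
    },
)
```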
I have documents belonging to various spaces, and I have created an index for each space.
Not sure what a "space" means here. If it's only an attribute, then everything can go into the same index, which you can filter by space when querying.
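Something like this is what I mean, assuming a single shared index with a keyword field that stores the space (the index name, field names, endpoint, and credentials are placeholders):

```python
import requests

ES_URL = "https://my-deployment.es.io:9243"  # placeholder endpoint
AUTH = ("elastic", "changeme")               # placeholder credentials

# Search only the documents of one "space" inside the shared index.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"content": "quarterly report"}}],
            "filter": [{"term": {"space": "marketing"}}],
        }
    }
}

resp = requests.post(f"{ES_URL}/documents/_search", auth=AUTH, json=query)
print(resp.json()["hits"]["total"])
```

A term filter in the filter context can be cached, so one shared index with a space field generally scales better than hundreds of small indices.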
will that solve my heap alert issue?
I don't know. Maybe. You can also increase the heap size if you don't want to change your architecture.
I have reduced the number of shards. Currently I have 1 index with 5 shards and 3 nodes, and my data size is 11.2 GB. Even though I am not doing any CRUD operations on Elasticsearch (my instance is idle), my JVM heap usage keeps increasing.
Can you please let me know why my JVM heap size is changing continuously?
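Something like the following should show whether the heap just rises until garbage collection and then drops back, i.e. the normal JVM sawtooth (a rough sketch; the endpoint, credentials, and polling interval are placeholders):

```python
import time
import requests

ES_URL = "https://my-deployment.es.io:9243"  # placeholder endpoint
AUTH = ("elastic", "changeme")               # placeholder credentials

# Poll per-node heap usage and young-generation GC counts; if the heap
# percentage drops whenever the GC count ticks up, the growth is just
# normal allocation between collections.
for _ in range(10):
    stats = requests.get(f"{ES_URL}/_nodes/stats/jvm", auth=AUTH).json()
    for node in stats["nodes"].values():
        jvm = node["jvm"]
        print(
            node["name"],
            f'heap {jvm["mem"]["heap_used_percent"]}%',
            f'young GCs {jvm["gc"]["collectors"]["young"]["collection_count"]}',
        )
    time.sleep(30)
```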