Initially the application started running on 1 GB of RAM; the next day we upgraded to 2 GB and then 4 GB, and it still continues to consume more and more RAM over time.
All it is indexing is just 32.4 MB of data altogether; I have shared a screenshot of the indices from the server.
Please don't post pictures of text or code. They are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them.
Then that's outside the scope of what Elasticsearch manages, and you will need to consult your OS documentation for how to limit that (if that is what you want to do).
But it's not a bad thing: it's the OS caching commonly accessed files, which are related to the Elasticsearch process. This is why we recommend leaving half of the system memory to the OS, so this caching can happen and improve the speed of Elasticsearch.
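As a rough illustration (a sketch only: it assumes Elasticsearch is reachable on localhost:9200, and the 1g heap value is just an example, not a sizing recommendation), you can pin the heap and then check how much of the process's memory is actually JVM heap, as opposed to page cache:

```sh
# Pin the JVM heap for this run via ES_JAVA_OPTS (jvm.options works too);
# whatever RAM is left over stays available for the OS page cache.
ES_JAVA_OPTS="-Xms1g -Xmx1g" ./bin/elasticsearch

# Ask the nodes stats API how much heap is in use vs. the configured max;
# resident memory beyond this in `top` is mostly OS-level file caching.
curl -s 'localhost:9200/_nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_*_in_bytes&pretty'
```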
We don't see any performance issues when it comes to Elasticsearch, but it takes almost 2 GB of memory to handle 140 MB worth of indices. Is this normal behavior, or is there any setting (other than heap memory) that we can tweak to optimize the memory usage? Please advise.