Hi
Is there any command or way to see the total RAM usage of the Elasticsearch client?
Thanks
Aneesh
GET _cat/nodes?v
For options
GET _cat/nodes?help
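For example, to show just the memory-related columns (the column names come from the help output above):
GET _cat/nodes?v&h=name,heap.percent,heap.max,ram.percent,ram.max
heap.* is the JVM heap, ram.* is the operating system's view of total RAM on each node. For a more detailed breakdown:
GET _nodes/stats/jvm,os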
For 1 GB of data, Elasticsearch uses 9 GB of RAM.
Why is it like that?
Don't know. It depends on what you are doing, what you are indexing, how....
Is it a problem?
Are you running into trouble?
Can you share the output of the commands you ran?
Hi David,
Thanks for your reply. Several of my questions basically come down to the same issue, disk and RAM usage, and we are more concerned about RAM.
Now I would like to know the best practice on the following points:
a) From the documentation I understand it is good to allocate half of the system RAM to the heap. I have a 32 GB system, allocated around 13 GB, and set the memory lock for the JVM (settings sketched after this list).
Will this be good enough if the Elasticsearch data on disk grows to 1 TB or even more?
We had the heap set to 2 GB earlier and it threw an out-of-memory exception. I don't want the system to crash as the data grows.
b) The default number of shards is 5, with 1 replica. What is the effect of increasing or decreasing those numbers in terms of memory (disk and RAM) and performance (query speed)?
c) What happens if I don't lock the memory for the JVM? Will the RAM usage in that case follow the amount of data present in the Elasticsearch data folder?
d) I want my whole data set preloaded into Elasticsearch. I can design my indices in two ways:
one, more indices with less data in each index, or
two, fewer indices with more data in each index.
For example, 100,000 indices with 1000 data rows in each index,
or 10,000 indices with 100,000 data rows in each index.
e) A max-timeout error appears, which creates an index but doesn't add any data to it.
What would be the ideal request timeout value to set?
f) In any case, what happens if Elasticsearch runs out of memory? Will there be I/O operations to fetch the data, which are obviously heavy?
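For reference, here is a sketch of what I mean by "allocated around 13 GB and locked" in (a) and (c); the exact values are from memory, so adjust to your install:

# jvm.options: heap sized to ~13 GB, with min and max set equal
-Xms13g
-Xmx13g

# elasticsearch.yml: lock the process memory so the heap is never swapped out
bootstrap.memory_lock: true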
I would be really thankful for answers to the above queries.
Thanks in advance.
Arvind
May I suggest you look at the following resources about sizing:
https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing
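For (b) and (d), a quick way to see the per-index footprint is the cat indices API, e.g.:
GET _cat/indices?v&h=index,pri,rep,docs.count,store.size
Keep in mind that every shard carries a fixed heap overhead, so 100,000 small indices (at least 100,000 primary shards) will usually cost far more RAM than 10,000 larger ones. For (f), Elasticsearch uses circuit breakers to reject requests before the heap is exhausted; you can inspect their current state with:
GET _nodes/stats/breaker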