Try Bigdesk, it gives you a graphical view of allocated / used memory.
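If you want the raw numbers rather than a graph, the node stats API that Bigdesk polls exposes JVM heap usage directly. A minimal sketch, assuming the 0.17.x endpoint at /_cluster/nodes/stats and the usual heap_used_in_bytes / heap_committed_in_bytes fields (exact field names vary a bit between versions, and some versions want the jvm section requested explicitly):

    import json
    import urllib.request

    def heap_usage(host="http://localhost:9200"):
        # Poll the per-node stats endpoint (Bigdesk reads the same data).
        with urllib.request.urlopen(host + "/_cluster/nodes/stats") as resp:
            stats = json.load(resp)
        for node_id, node in stats.get("nodes", {}).items():
            mem = node.get("jvm", {}).get("mem", {})
            used = mem.get("heap_used_in_bytes", 0)
            committed = mem.get("heap_committed_in_bytes", 0)
            if committed:
                print("%s: heap %.0f%% of %d MB" % (
                    node.get("name", node_id),
                    100.0 * used / committed,
                    committed // (1024 * 1024)))

    if __name__ == "__main__":
        heap_usage()

Watching heap_used creep toward heap_committed over time is a reasonable early warning that you are getting close to the 14GB ceiling.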
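On the date-slicing question further down: the usual win is that most queries only need to touch the newest indices while the older ones stay cold. A rough illustration, not from this thread, and the logs-YYYY.MM.DD naming is just an assumed convention, querying only the last N daily indices by listing them in the URL:

    import datetime
    import json
    import urllib.request

    def search_last_n_days(query, n=7, host="http://localhost:9200"):
        # Build a comma-separated list of the most recent daily indices.
        today = datetime.date.today()
        names = ",".join(
            "logs-" + (today - datetime.timedelta(days=i)).strftime("%Y.%m.%d")
            for i in range(n))
        body = json.dumps({"query": {"query_string": {"query": query}}}).encode()
        req = urllib.request.Request(
            "%s/%s/_search" % (host, names),
            data=body,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

If users consistently search the full date range, the request fans out to every daily index anyway, so most of the benefit disappears and you pay some per-index overhead compared to one large index.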
On Oct 9, 2011 7:59 PM, "Yooz" <youngmaeng@gmail.com> wrote:
Hello,
We are running ES 0.17.7 on four 16GB boxes, with 14GB of memory locked
(dedicated) for ES. There are several indices; the largest ones range from
50GB to 200GB of data at 50-125 million documents. Currently there is no
issue with memory, but the data size is continually growing. Because the
memory is locked, it is hard to tell from the OS level how much memory ES
actually needs. Is there a parameter exposed in the status API or a rule of
thumb based on the data that shows if ES is running close to the limit? I.e. because ES scales so well, we want to add more capacity ahead of time in order to avoid errors due to memory issues.

Also, Shay has mentioned in several posts that slicing the data up by date indices (daily, weekly) should minimize the amount of memory used by the Lucene indices. Is this optimization driven by the use case? I.e. users would be most interested in the recent data and would query the past N days rather than search the entire index? What happens if they consistently query the entire date range? Would this slicing scheme become inefficient compared to having one massive index?

Thanks!
--young