On Wed, Nov 2, 2011 at 6:15 PM, Michael Feingold <firstname.lastname@example.org> wrote:
How much memory should be available for the ELS process? I am running
ELS on a W2k8 server, the index has around 100M documents and the size
of the index is just under 50GB. It looks like the heap size of 2GB is
sufficient but mapped files take another 2.5 GB, so overall memory
allocated for the process is closer to 5GB.
My question is: how can I estimate the amount of memory needed for
mapped files based on the size of the indexes? Also, is there a way, or
a need, to control it?
On Windows, Lucene (and elasticsearch) will default to memory-mapped files
for better performance. You can disable that and use the simplefs index
store type if you want. The mapped files will take about the same amount of
address space as the actual index files end up taking on disk.

Regarding the actual heap, it's hard to answer. Lucene internally loads
data into memory to be able to search faster (basically intervals of
terms), and there is the field "cache", which is mainly used for sorting
(on something other than score) and for faceting (this is exposed through
the node stats and index stats APIs).
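For what it's worth, a minimal sketch of switching the store type, assuming the elasticsearch 0.x configuration format from this period, would be a setting in elasticsearch.yml like:

```yaml
# elasticsearch.yml -- override the platform default store
# (memory-mapped files on Windows) with the plain filesystem
# implementation, trading some read performance for a smaller
# mapped-file footprint in the process's address space
index:
  store:
    type: simplefs
```

The field cache and other per-node memory usage mentioned above should then be observable through the stats APIs, e.g. `curl localhost:9200/_cluster/nodes/stats` for node stats and `curl localhost:9200/_stats` for index stats (endpoint paths as of the 0.x releases; check the documentation for your version).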