Because most of the memory used is bound to the number of documents (because of the
multi-valued fields and the current way things are represented), you can
either increase the memory on each machine, or add more machines (and have
enough shards to span them).
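[The "enough shards to span them" point matters because the primary shard count is fixed at index creation time. A minimal sketch of the create-index request body; the index name `logs` and the counts are illustrative, not from the thread:]

```python
import json

# Primary shard count is fixed when an index is created; pick enough
# shards up front to span the machines you expect to add later.
# NOTE: the index name "logs" and the counts below are illustrative.
create_body = {
    "settings": {
        "index": {
            "number_of_shards": 20,   # enough primaries to spread over ~20 nodes
            "number_of_replicas": 1,
        }
    }
}

# This would be sent as:  PUT /logs  (with create_body as the JSON payload)
print(json.dumps(create_body))
```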
On Wed, May 9, 2012 at 3:36 PM, Andy Wick andywick@gmail.com wrote:
So if I reduce the maximum number of values in the multi-valued fields, that would help?
Assuming I'm not CPU bound (because of low query rate) and only memory
bound, is it better to add more memory to existing machines or add more
machines? Example: should I go from 10x16G machines to 20x16G machines, or to
10x32G machines? I assume the 10x32G, because of per-machine overhead?

I might be getting 10x64G machines. Should I run 2 nodes on each (maybe a 20G heap each)
so that I don't hit Java GC issues?

Thanks,
Andy

On Wednesday, May 9, 2012 4:35:39 AM UTC-4, kimchy wrote:
Fields that have multiple data values can contribute greatly to the memory
used. There is nothing really to be done about it in terms of improving it,
except for scaling out or up to increase the available memory (and having
enough shards to span the nodes).

On Sat, May 5, 2012 at 11:35 PM, Andy Wick andywick@gmail.com wrote:
I should have mentioned that two other things I did in the hope of
reducing memory were to turn off replicas for this index and to disable
the _all field.

Thanks,
Andy
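
[For reference, the two changes Andy describes can be sketched as request bodies for the REST API of that era; the index name `logs` and type name `event` are placeholders, not from the thread:]

```python
import json

# 1) Turn off replicas for the index (a dynamic index setting; saves the
#    memory the replica shards would otherwise use on other nodes).
# 2) Disable the _all field in the type mapping so its terms are never
#    indexed or loaded.
# NOTE: "logs" and "event" are placeholder names.
settings_body = {"index": {"number_of_replicas": 0}}
mapping_body = {"event": {"_all": {"enabled": False}}}

# These would be sent as:
#   PUT /logs/_settings        with settings_body
#   PUT /logs/event/_mapping   with mapping_body
print(json.dumps(settings_body))
print(json.dumps(mapping_body))
```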