Estimating index.cache.field.max_size

Hi,

I have a question: how do I estimate index.cache.field.max_size? Is there
some kind of formula based on available RAM, number of shards, and so on? I
guess that indices.cache.filter.size is quite simple to set because it
accepts a percentage value (the default of 20% is just fine), but as I
understand it I have to set index.cache.field.max_size explicitly to avoid
OutOfMemory errors, because it is unbounded by default, right?
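
For context, these are the settings I mean; in elasticsearch.yml they would look roughly like this (the values below are only placeholders, not what I actually run):

  # field cache: unbounded by default, and entries never expire on their own
  index.cache.field.max_size: 50000
  # filter cache: accepts a percentage of the heap, 20% by default
  indices.cache.filter.size: 20%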

Thank you.

Best regards.
Marcin Dojwa

--

as I understand it I have to set index.cache.field.max_size explicitly to
avoid OutOfMemory errors, because it is unbounded by default

The only way to combat OOM is to observe your memory usage and simply have
enough RAM to satisfy your requirements.
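
For example, you can watch the actual cache sizes with the node stats API, something along these lines (on the 0.19/0.20 releases the path is _cluster/nodes/stats; newer versions also accept _nodes/stats, and the exact field names in the response differ between versions):

  curl -s 'http://localhost:9200/_cluster/nodes/stats?pretty=true'

The cache section of the indices stats (field size, field evictions) shows how much memory the field cache is actually using over time.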

(I don't think it's easy to guess the optimal value for
index.cache.field.max_size unless you know your data and usage patterns
very well, and know exactly what you want to achieve.)

Karel


--

Thank you, but that does not sound very stable :) I hope that setting
index.cache.field.type: soft will prevent OOM. Am I wrong? I am just
testing it now :)
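
For reference, this is the setting I am testing in elasticsearch.yml:

  index.cache.field.type: soft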


--

Thank you, but that does not sound very stable :)

What do you mean?

I hope that setting index.cache.field.type: soft will prevent OOM. Am I
wrong? I am just testing it now :)

Yes :) It means you are throwing away data that was expensive to load into
memory, whenever the JVM needs to reclaim heap.

Karel

--

Yes :) It means you are throwing away data that was expensive to load into
memory, whenever the JVM needs to reclaim heap.

So here I can either throw the data out or get an OOM, right? :)

I think I understand it all now :) With the field cache type set to 'soft'
I no longer get OOM, but GC takes almost all of the processor time once
heap memory runs out. I have now set index.cache.field.expire to 10m and am
checking whether that is enough. As I understand it, I have to set the
expire and max_size parameters to keep the heap from filling up almost
completely, right?
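
In other words, something like this in elasticsearch.yml (the 10m expiry is what I am testing now; the max_size value below is only a placeholder, I have not settled on a number yet):

  index.cache.field.type: soft
  index.cache.field.expire: 10m
  index.cache.field.max_size: 50000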

Best regards.


--

So here I can either throw the data out or get an OOM, right? :)

I think I understand it all now :) With the field cache type set to 'soft' I no longer get OOM, but GC takes almost all of the processor time once heap memory runs out. I have now set index.cache.field.expire to 10m and am checking whether that is enough. As I understand it, I have to set the expire and max_size parameters to keep the heap from filling up almost completely, right?

It depends on what you want to achieve... The optimal way to use Elasticsearch is simply to have enough RAM to support your requirements (queries, sorting, facets, etc.). So, depending on your use case, you might want to set these limits and pay the cost of rebuilding your cache. (Most people, though, are balancing their requirements against the available resources, tuning the performance of their search requests, and/or limiting the number of documents.)
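
If you do go the "enough RAM" route, the usual first step is simply to give the JVM a bigger heap before starting the node, for example via the ES_HEAP_SIZE environment variable (available in the recent 0.19+/0.20 releases; the 4g value is only an example, size it for your machine and leave room for the operating system's file system cache):

  export ES_HEAP_SIZE=4g
  bin/elasticsearch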

Karel

--

OK, I understand, thank you Karel.
