When Elasticsearch sees the call doc['f1'], it caches all values of this
field to speed up future calls. If you don't have enough memory to cache
all these values, you might want to consider replacing document fields
with stored fields (_fields['f1']) or even the source (_source['f1']).
That will be slower than using document fields, especially if you run
this script on a large number of documents, but it has much lighter
memory requirements.
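As a sketch, the same script filter could be written either way (index name `myindex` and field `f1` are hypothetical; MVEL was the default script language in the 0.19 line):

```shell
# Script filter using document fields -- fast, but loads the whole
# field into the field-data cache:
curl -XGET 'localhost:9200/myindex/_search' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": { "script": { "script": "doc[\"f1\"].value >= 2" } }
    }
  }
}'

# Same filter reading from _source instead -- slower per document,
# but it does not populate the field-data cache:
curl -XGET 'localhost:9200/myindex/_search' -d '{
  "query": {
    "filtered": {
      "query": { "match_all": {} },
      "filter": { "script": { "script": "_source[\"f1\"] >= 2" } }
    }
  }
}'
```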
The index.cache.field.* settings should go in config/elasticsearch.yml.
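For example, the relevant block in config/elasticsearch.yml would look something like this (the values are illustrative, not recommendations):

```yaml
# config/elasticsearch.yml -- field cache tuning (0.19.x)
index.cache.field.type: resident   # "resident" honors max_size/expire; "soft" is GC-driven
index.cache.field.max_size: 50000  # max number of entries in the cache
index.cache.field.expire: 10m      # evict entries idle for 10 minutes
```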
On Monday, November 5, 2012 8:15:15 PM UTC-5, Logan wrote:
Should I be setting the index.cache.field.* settings in
bin/service/elasticsearch.conf or in config/elasticsearch.yml?
On Monday, November 5, 2012 6:00:22 PM UTC-7, Sushant Shankar wrote:
Yes, we are using several structures like this in custom scripts, e.g.
'doc['f1'] + doc['f2'] .. >= 2',
often in addition to a filter that can have up to 1000 terms.
On Monday, November 5, 2012 4:47:19 PM UTC-8, Igor Motov wrote:
Are you using document fields (structures like doc['my_field']) to
access the data?
On Monday, November 5, 2012 7:34:12 PM UTC-5, Sushant Shankar wrote:
It is possible that we do not have enough memory for this. The odd
thing is that we're not using facets. We are issuing CPU-intensive custom
script queries (as we need to compute operations like the sum of different
On Monday, November 5, 2012 4:19:38 PM UTC-8, Igor Motov wrote:
The index.cache.field.max_size and index.cache.field.expire settings
are applicable only to the "resident" cache type. The "soft" cache type is
garbage collected in response to memory demand. If all heap is getting used
even with soft cache it might be a good indication that you simply don't
have enough memory on the nodes to perform these queries. What type of
queries are causing OOM? Are you doing any faceted searches on multivalued
fields?
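One way to tell whether it is really the field cache filling the heap is the nodes stats API. A sketch, assuming a node listening on localhost:9200 (the endpoint path shown is the one used by the 0.19 line and may differ on other versions):

```shell
# Per-node index stats, including field cache size and evictions:
curl -XGET 'localhost:9200/_cluster/nodes/stats?indices=true&pretty=true'
# Look for the field cache size and eviction counts under the
# "indices" -> "cache" section of each node's entry.
```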
On Monday, November 5, 2012 5:58:44 PM UTC-5, Logan wrote:
I'm having problems controlling OOM errors on my 6-node CentOS 5.6
cluster. I am currently running elasticsearch-0.19.8 using the service
wrapper and Java 1.6.0_25. I have set the following index.cache.field
settings in bin/service/elasticsearch.conf and, when that failed, in
config/elasticsearch.yml, but they seem to be ignored: I never see any
field cache evictions in bigdesk, and the field cache will eventually eat
up all the heap memory when certain searches are performed.
I'm not entirely sure that field.type: soft isn't working, as
sometimes the field cache will drop to zero after a GC, but that seems to
happen only when the cluster is idle. field.expire and
field.max_size definitely seem to have no effect, though.
Am I going about this the right way? What config should I be setting
these values in? Is there a good way to verify what settings are currently
in effect on the cluster?
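On the last question, one way to check which settings the nodes actually loaded is the nodes info API. A sketch, assuming a node on localhost:9200 (path as used by the 0.19 line):

```shell
# Show the settings each node loaded at startup:
curl -XGET 'localhost:9200/_cluster/nodes?settings=true&pretty=true'
# If the index.cache.field.* entries are missing from the output,
# the node never picked them up from its configuration file.
```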