I'm using Kibana to search logs and occasionally run some facet queries for
testing. Kibana mostly filters on date fields to retrieve logs.
Logstash creates a new index every day. So I did some testing: I cleared the
cache for all indices except the latest 7 and executed a query spanning those
7 indices. The field cache then measured around 11 GB, the filter cache around
3 GB, and total heap usage was 22 GB.
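For reference, the clearing and measuring steps above can be driven over the REST API. A sketch, assuming a 0.90-era ES on localhost:9200 (the index name is a hypothetical logstash daily index, and the exact query parameters for the clear-cache endpoint changed between versions, so check your version's docs):

```shell
# Clear the caches of one old daily index (repeat per index to clear).
curl -XPOST 'http://localhost:9200/logstash-2013.04.01/_cache/clear'

# After running the spanning query, read cache sizes from node stats;
# look for the field cache and filter cache sizes in the output.
curl 'http://localhost:9200/_nodes/stats?indices=true&pretty=true'
```
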
@Jörg, does ES always rebuild the field/filter cache before processing a
query? If that is the case, I can use the method above to predict the amount
of RAM a single index will use.
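The back-of-envelope math for that prediction, using the numbers I measured above. Note the even split of field cache across indices is an assumption, since fielddata per index really depends on each index's field cardinalities:

```python
# Estimate per-index field cache from the 7-index measurement above.
field_cache_gb = 11.0   # measured field cache across 7 daily indices
filter_cache_gb = 3.0   # measured filter cache
indices = 7

per_index_gb = field_cache_gb / indices
print(round(per_index_gb, 2))  # ~1.57 GB per daily index

# Project how many days of indices fit in a heap budget, crudely
# assuming the filter cache stays flat and the remaining heap
# (22 GB total minus both caches) is overhead that does not grow
# with the number of indices.
heap_budget_gb = 30.0
other_heap_gb = 22.0 - field_cache_gb - filter_cache_gb  # 8 GB
days = (heap_budget_gb - other_heap_gb - filter_cache_gb) / per_index_gb
print(int(days))  # roughly 12 days of indices
```
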
On Tue, Apr 16, 2013 at 10:55 PM, Jörg Prante firstname.lastname@example.org wrote:
You don't describe the kind of query you execute, so it is hard to give
helpful advice. I assume you use facets or query filters. Heap usage depends
on the queries, the number of fields, and the cardinality of the values in
those fields, not necessarily on the raw data volume or the number of docs.
As a general note, you can't expect the standard CMS GC to scale well - it
was designed many years ago for heaps under 8 GB. If you want GC to run with
low latency on large heaps, consider switching to the more responsive G1 GC.
Nevertheless, using "soft" references for the field cache is a rather random
strategy: it is simply unpredictable how much of your heap will be used.
And of course you can always improve the situation by adding more nodes.
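Switching to G1 would be done via JVM flags. A sketch only: in 0.90-era ES these typically go into ES_JAVA_OPTS or bin/elasticsearch.in.sh, the exact pause target is a tunable you should adjust, and G1 requires a recent Java 7:

```shell
# Replace CMS with G1 before starting ES (adjust to your startup script;
# MaxGCPauseMillis is a soft target, not a guarantee).
export ES_JAVA_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=200"
```
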
On 15.04.13 19:20, Abhijeet Rastogi wrote:
I have a situation where ES fails to reduce its heap usage. When this
happens, the logs say something like http://pb.abhijeetr.com/OaaB
Indexing hangs, and the CPU, which generally sits at around 300%, gets stuck
around 100% with nothing happening.
To give you an idea of the setup: it's a 2-node cluster (2 GHz, 8 threads,
48 GB RAM per node) with 3 TB of data, each node containing 3.3 billion
documents. Each doc has around 10 fields.
This issue happens only when my data grows beyond a certain limit
(around 1.5 TB). Is that too much data for these two nodes to handle? When
I clear the caches for the indices, everything starts working again, so the
cache is clearly the cause. The real question is: why can't GC clear that
cache when ES really needs the memory for other things? I also have
"index.cache.field.type: soft" set in my elasticsearch.yml. What can I do
to fix the problem?
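One alternative to soft references is a bounded resident field cache with expiry, so eviction is deterministic instead of being left to the GC. A sketch; the setting names below are from the 0.90-era docs and the limits are placeholders to tune, so verify against your version:

```yaml
# elasticsearch.yml -- bound the field cache instead of using soft refs
index.cache.field.type: resident
index.cache.field.max_size: 50000   # max entries per segment (tune)
index.cache.field.expire: 10m       # evict entries unused for 10 minutes
```
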
Abhijeet Rastogi (shadyabhi)
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to email@example.com.
For more options, visit https://groups.google.com/groups/opt_out.