Hi,
I analyzed a heap dump taken from Elasticsearch, and I can see that a lot of
heap space is occupied by structures and references related to doc values. I
can see tons of hash maps with weak references pointing to objects
representing some values in DV. I was wondering whether this is somehow
cached on the ES side or whether it is a purely Lucene-internal mechanism.
Can we influence the size/number of instances of objects connected to field
data?
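In case it turns out to be the ES-side fielddata cache rather than Lucene internals, one knob I'm aware of (a sketch only, assuming an ES 1.x-era cluster; the value shown is illustrative, not a recommendation) is the fielddata cache cap in elasticsearch.yml:

```yaml
# elasticsearch.yml -- static node setting (assumption: ES 1.x configuration syntax)
# Caps the in-heap fielddata cache; once the cap is reached,
# least-recently-used entries are evicted instead of growing the heap.
indices.fielddata.cache.size: 30%
```

There is also a related circuit breaker (`indices.breaker.fielddata.limit`), but that only rejects requests that would load too much fielddata; it does not shrink what is already resident.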
Hi,
Thank you for your response. Unfortunately, I think we misunderstood each
other. I was NOT asking whether the described case can happen, because I can
see that it can; I was rather asking about ES internals and whether there is
any way to optimize such a case (including source code modifications).
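To make the question concrete: one thing that can be influenced without source changes is whether a field's values live on disk as doc values or in heap as fielddata at all. A hypothetical mapping fragment (field and type names are illustrative, assuming ES 1.x mapping syntax):

```json
{
  "mappings": {
    "logs": {
      "properties": {
        "status": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}
```

With `doc_values: true` on a not-analyzed field, aggregations and sorting read column data from disk (page cache) instead of building uninverted fielddata structures on the heap, which is exactly the kind of per-value object graph a heap dump would otherwise show.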
--
Paweł Róg
On Thursday, March 26, 2015 at 3:31:51 AM UTC+1, Mark Walkom wrote:
If you have a lot of unique values and you ask for aggregations looking
for unique values amongst those, then what you are seeing can happen.
On 26 March 2015 at 03:05, Paweł Róg <pro...@gmail.com>
wrote:
Hi,
I analyzed heap dump taken from Elasticsearch and I can see a lot of
space in heap is occupied by structures and references related to doc
values. I can see tons of hash maps with weak references pointing to
objects representing some values in DV. I was wondering if this is somehow
cached on ES side or it is totally Lucene internal mechanism. Can we
influence the size/number of instances of objects connected to field data?