Hi,
I have run the following test to analyze the effect of using frozen indices on JVM heap usage.
I have a single-node cluster with the following properties:
JVM Heap: 6 GB
Number of indices: 200
Number of shards: 200
Number of frozen indices: 200 (all indices are frozen!)
Size of each shard: 3 GB
Used JVM Heap: 3.1 GB
I ran a simple query against Elasticsearch:
{ "query": { "match_all": {} } }
Running this query resulted in very high JVM heap usage (reaching 5.99 GB) for 3-4 minutes. Elasticsearch was eventually able to return the result; however, based on the Elasticsearch documentation, I expected much lower heap usage:
Elasticsearch builds the transient data structures of each shard of a frozen index each time that shard is searched, and discards these data structures as soon as the search is complete. Because Elasticsearch does not maintain these transient data structures in memory, frozen indices consume much less heap than normal indices.
When a search targets frozen indices, each index is searched sequentially (instead of in parallel), and each index is loaded, searched, then unloaded again.
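For reference, on Elasticsearch 7.x frozen indices are search-throttled and are skipped by searches unless ignore_throttled=false is set. Below is a minimal sketch of the kind of request used here, assuming a local node at http://localhost:9200 and the Python requests library (both are assumptions, not details from the test setup):

```python
import requests

# Minimal sketch: run a match_all search that includes frozen indices.
# Assumptions: Elasticsearch 7.x, a local node at localhost:9200, `requests` installed.
resp = requests.post(
    "http://localhost:9200/_search",            # assumed node address
    params={"ignore_throttled": "false"},       # include frozen (search-throttled) indices
    json={"query": {"match_all": {}}},
)
resp.raise_for_status()
print(resp.json()["hits"]["total"])
```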
Thanks for sharing this here. Can you please explain your measurement approach in a bit more detail? Specifically, I'd be interested in:
How did you measure shard size?
Are these indices force-merged? Did you force-merge them to a single segment?
What do you mean by the metric "Used JVM Heap" that you present at the beginning of your post?
How did you measure JVM heap while running the query?
How many indices did you query? All of them or only a subset?
The documentation talks about memory usage relative to regular indices, so I suggest basing heap usage expectations not on absolute numbers but on a comparison of memory usage with regular indices vs. frozen indices.
I used the /_cat/indices API to measure shard size.
No, I have not run _forcemerge on them.
I wrote a Python script to measure used JVM heap by requesting /_nodes/stats/jvm and reading the "jvm->mem->heap_used_in_bytes" field (a sketch of this is included below).
The script reads the JVM heap periodically (e.g., every 5 seconds).
I ran the {"query": {"match_all": {}}} query on all indices.
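A minimal sketch of this kind of polling loop, assuming a single local node at http://localhost:9200 and the Python requests library (address and interval are illustrative, not the exact values used):

```python
import time
import requests

ES = "http://localhost:9200"  # assumed single local node

while True:
    # /_nodes/stats/jvm returns per-node JVM stats, including heap usage.
    stats = requests.get(f"{ES}/_nodes/stats/jvm").json()
    for node_id, node in stats["nodes"].items():
        heap_used = node["jvm"]["mem"]["heap_used_in_bytes"]
        print(f"{node_id}: heap_used = {heap_used / 1024**3:.2f} GB")
    time.sleep(5)  # poll every 5 seconds
```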
I will run some tests to compare the ratio of memory usage between regular indices and frozen indices. But based on my earlier experiments, I think Elasticsearch requires only 8 GB of heap to handle a similar query under the same conditions.
I had run a _forcemerge request on a frozen index. After unfreezing the index, the _forcemerge request is now being processed; I can also see it via the /_tasks API.
I will try to forcemerge all of my indices and then retry my experiment.
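A minimal sketch of that plan (assumptions: Elasticsearch 7.x APIs _unfreeze/_forcemerge/_freeze, a local node at localhost:9200, and the Python requests library):

```python
import requests

ES = "http://localhost:9200"  # assumed local node address

# List index names via the cat API (one name per line).
indices = requests.get(f"{ES}/_cat/indices?h=index").text.split()

for index in indices:
    requests.post(f"{ES}/{index}/_unfreeze").raise_for_status()
    # Force-merge down to a single segment; this can take a long time and is
    # only appropriate for indices that are no longer written to.
    requests.post(f"{ES}/{index}/_forcemerge?max_num_segments=1").raise_for_status()
    requests.post(f"{ES}/{index}/_freeze").raise_for_status()
```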
BTW, it would be better if running a _forcemerge request on a frozen index returned a warning or error!
The _forcemerge operation decreased the JVM heap usage of my indices dramatically (even when they are not frozen).
There is a rule of thumb for estimating the required JVM heap based on the number of shards:
A good rule-of-thumb is to ensure you keep the number of shards per node below 20 per GB heap it has configured
I just wonder whether there is a similar rule of thumb for force-merged shards. In other words, I think that for force-merged indices we could have, for example, 50 shards per GB of heap.
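For a sense of scale, applying the quoted rule to the node described above: 20 shards per GB × 6 GB of heap ≈ 120 shards, while this node holds 200 shards, so it already exceeds the general guideline before any adjustment for frozen or force-merged indices.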
It is cluster state size as well as heap usage that drives this recommendation, and it assumes best practices are being followed, which include force-merging indices that are no longer written to down to a single segment.