What is a search context, and how and when is it created? Is there a number of open search contexts beyond which it becomes alarming?
SearchContext is the state that is maintained for the duration of a search operation on a shard. The more concurrent search operations you have, the more SearchContext objects are active at any one time.
Is there a way to estimate the heap taken up by SearchContext objects, or an API that reports how much space these objects consume?
We have around 100-200 SearchContext objects as shown by the Nodes Stats API, and were wondering how much they contribute to our cluster's heap usage. During a peak we even saw around 1,500 open search contexts; we weren't sure whether that was too many or normal.
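For reference, here is a minimal sketch of how one might total those open contexts from a Nodes Stats response. The payload and node names below are invented for illustration; `indices.search.open_contexts` is the per-node field that `GET _nodes/stats/indices/search` reports.

```python
# Illustrative (made-up) slice of a Nodes Stats response.
sample_stats = {
    "nodes": {
        "node-1": {"indices": {"search": {"open_contexts": 120}}},
        "node-2": {"indices": {"search": {"open_contexts": 95}}},
    }
}

def total_open_contexts(stats: dict) -> int:
    """Sum open search contexts across all nodes in a stats payload."""
    return sum(
        node["indices"]["search"]["open_contexts"]
        for node in stats["nodes"].values()
    )

print(total_open_contexts(sample_stats))  # 215
```

In a real setup you would feed this the JSON body returned by the Nodes Stats endpoint rather than a hard-coded dict.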
These aren't normally a concern - other interim state, such as aggregation buckets, tends to be the main cause for worry.
By "aggregation buckets", are you referring to the field-data cache or something else?
The field-data cache is a fixed cost for all aggs (now largely superseded by doc values). It's where we look up values.
Aggregation buckets are a per-request cost containing the interim state, e.g. calculating the movie counts for all actors before returning the substantially trimmed final result of the "top 10 movie stars".
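To illustrate that "interim state vs. trimmed result" point with a toy sketch (the data is invented, and a `Counter` stands in for the aggregation's bucket structure):

```python
from collections import Counter

# Hypothetical (actor, movie) pairs standing in for documents on a shard:
# 10,000 appearances spread across 500 distinct actors.
appearances = [(f"Actor {i % 500}", f"Movie {i}") for i in range(10_000)]

# Interim state: one bucket per distinct actor -- 500 buckets held in
# memory for the duration of the request.
buckets = Counter(actor for actor, _movie in appearances)

# Final response: trimmed to the top 10, a small fraction of the
# interim state that had to exist to compute it.
top_10 = buckets.most_common(10)
print(len(buckets), len(top_10))  # 500 10
```

The point is that the memory cost is proportional to the number of buckets built during the request, not to the size of the trimmed response the client sees.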
Is there an API that shows the portion of the heap consumed by these buckets? We see a mismatch: the total JVM heap consumed is not the sum of field-data + filter cache + ID cache + shard query cache + segments in memory + merges in memory + everything else that shows up under "indices" in the Node Stats output.
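The mismatch described above can be sketched as a simple subtraction: sum the memory figures reported under "indices" and compare against the JVM's heap used. All numbers below are invented for illustration; the remainder is the part the breakdown doesn't attribute, which includes transient per-request state (aggregation buckets, search contexts) as well as garbage the JVM has not yet collected.

```python
# Made-up figures, in bytes, standing in for a Node Stats response.
heap_used = 8_000_000_000

indices_breakdown = {
    "fielddata":         1_200_000_000,
    "filter_cache":        400_000_000,
    "id_cache":             50_000_000,
    "shard_query_cache":   300_000_000,
    "segments_in_memory":  900_000_000,
}

# Heap accounted for by the "indices" breakdown.
accounted = sum(indices_breakdown.values())

# Heap used but not attributed to any cache: transient request state,
# uncollected garbage, and other JVM overhead.
unaccounted = heap_used - accounted
print(accounted, unaccounted)  # 2850000000 5150000000
```

So the two totals are not expected to match; there is no single stats field that isolates the bucket memory itself.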