We're using Elasticsearch for our search use case and have an index that serves both regular queries and autocompletion.
For autocompletion, I've enabled the completion suggester on it.
However, there is growing concern about memory usage as our data increases.
Here are a few questions I had in that regard:
- I am trying to compare the RAM usage of the FST against the overall RAM usage for a given index.
For the FST, @spinscale confirmed in a reply to my post here that the "completion" -> "size_in_bytes" metric is a heap metric.
However, for the overall RAM usage of the index, I can't find a suitable metric in index-stats.
The field in the stats response that seems closest to overall memory is "segments" -> "memory_in_bytes".
But if I go by that field, the FST accounts for 99.39% of it for our index, which is shockingly high:
"completion" -> "size_in_bytes" = 40.74 MB
"segments" -> "memory_in_bytes" = 40.99 MB
I know that node-stats gives a direct os -> mem indication, but since we have multiple indices in the cluster, it's hard to isolate measurements for any single index (see the sketch after this list).
- In case memory due to the completion suggester occupies a lot of heap, is there any emergency way to turn off the completion suggester for the entire index / cluster quickly through an API call?
- I know that the FST is loaded into memory on the first completion query. In case memory usage goes too high, can we rely on stopping queries to the suggester to bring memory usage down, assuming that Elasticsearch will remove the FST from memory? If yes, what would the reaction time be? (See the polling sketch after the stats response below.)
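To make that comparison per index, I'm pulling just those two metrics via the index-stats API (GET /<index>/_stats/completion,segments). Here is a minimal sketch in Python, assuming the elasticsearch-py client; the index name and host URL are placeholders:

# Minimal sketch: compare the completion (FST) heap usage against the overall
# "segments" memory for a single index, using the index-stats API filtered to
# just those two metrics. Assumes the elasticsearch-py client; "my_index" and
# the host URL below are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Equivalent to: GET /my_index/_stats/completion,segments
stats = es.indices.stats(index="my_index", metric=["completion", "segments"])

primaries = stats["_all"]["primaries"]
fst_bytes = primaries["completion"]["size_in_bytes"]
segment_bytes = primaries["segments"]["memory_in_bytes"]

print(f"completion (FST) : {fst_bytes / 1024 / 1024:.2f} MB")
print(f"segments memory  : {segment_bytes / 1024 / 1024:.2f} MB")
print(f"FST share        : {fst_bytes / segment_bytes:.2%}")

With the numbers from the response below (40744741 / 40996309), that ratio works out to roughly 99.39%, which is where the figure above comes from.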
Attaching my stats response here:
{
"_shards": {
"total": 10,
"successful": 10,
"failed": 0
},
"_all": {
"primaries": {
"docs": {
"count": 959842,
"deleted": 345746
},
"store": {
"size_in_bytes": 345891083
},
"indexing": {
"index_total": 2527469,
"index_time_in_millis": 941255,
"index_current": 0,
"index_failed": 0,
"delete_total": 1567625,
"delete_time_in_millis": 86087,
"delete_current": 0,
"noop_update_total": 0,
"is_throttled": false,
"throttle_time_in_millis": 0
},
"get": {
"total": 0,
"time_in_millis": 0,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 0,
"missing_time_in_millis": 0,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 5686,
"query_time_in_millis": 10233,
"query_current": 0,
"fetch_total": 3594,
"fetch_time_in_millis": 12492,
"fetch_current": 0,
"scroll_total": 1295,
"scroll_time_in_millis": 10085978,
"scroll_current": 0,
"suggest_total": 1052,
"suggest_time_in_millis": 607,
"suggest_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 257,
"total_time_in_millis": 774600,
"total_docs": 11568773,
"total_size_in_bytes": 3089162173,
"total_stopped_time_in_millis": 0,
"total_throttled_time_in_millis": 2057,
"total_auto_throttle_in_bytes": 99311412
},
"refresh": {
"total": 2383,
"total_time_in_millis": 340047,
"external_total": 2072,
"external_total_time_in_millis": 325971,
"listeners": 0
},
"flush": {
"total": 77,
"periodic": 0,
"total_time_in_millis": 1597
},
"warmer": {
"current": 0,
"total": 2067,
"total_time_in_millis": 34
},
"query_cache": {
"memory_size_in_bytes": 229708,
"total_count": 16211,
"hit_count": 7229,
"miss_count": 8982,
"cache_size": 23,
"cache_count": 628,
"evictions": 605
},
"fielddata": {
"memory_size_in_bytes": 0,
"evictions": 0
},
"completion": {
"size_in_bytes": 40744741
},
"segments": {
"count": 16,
"memory_in_bytes": 40996309,
"terms_memory_in_bytes": 40834469,
"stored_fields_memory_in_bytes": 148912,
"term_vectors_memory_in_bytes": 0,
"norms_memory_in_bytes": 10752,
"points_memory_in_bytes": 0,
"doc_values_memory_in_bytes": 2176,
"index_writer_memory_in_bytes": 0,
"version_map_memory_in_bytes": 0,
"fixed_bit_set_memory_in_bytes": 0,
"max_unsafe_auto_id_timestamp": -1,
"file_sizes": {}
},
"translog": {
"operations": 0,
"size_in_bytes": 275,
"uncommitted_operations": 0,
"uncommitted_size_in_bytes": 275,
"earliest_last_modified_age": 0
},
"request_cache": {
"memory_size_in_bytes": 0,
"evictions": 0,
"hit_count": 0,
"miss_count": 0
},
"recovery": {
"current_as_source": 0,
"current_as_target": 0,
"throttle_time_in_millis": 0
}
}
}
}
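Regarding the third question, this is the rough check I had in mind for watching whether (and how quickly) the completion memory drops once suggest traffic is stopped. It's only a polling sketch against the same stats API, again assuming the elasticsearch-py client with a placeholder index name; whether Elasticsearch actually releases the FST in this situation is exactly what I'm asking.

# Rough polling sketch: sample "completion" -> "size_in_bytes" periodically
# after suggest traffic has been stopped, to see whether and how fast it drops.
# Assumes the elasticsearch-py client; "my_index" and the host URL are placeholders.
import time

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

while True:
    stats = es.indices.stats(index="my_index", metric="completion")
    fst_bytes = stats["_all"]["primaries"]["completion"]["size_in_bytes"]
    print(f"{time.strftime('%H:%M:%S')}  completion size_in_bytes: {fst_bytes}")
    time.sleep(60)  # sample once a minute to gauge the reaction time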
Besides #1 and #2, if you have any other ideas for quickly decreasing heap usage in an emergency scenario, please let me know.