Completion suggester high memory usage

I am trying to add autocomplete to a field in Elasticsearch. The dataset is 260 million journal articles, and I want to autocomplete based on the article title. However, once I reach around 100 million indexed documents, memory usage grows so high that the cluster cannot handle it. The nodes have 60 GB of memory each, and the completion data alone is around 15 GB in memory.
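For reference, this is roughly how I am checking the completion size (Kibana console request; the index name is a placeholder, and I believe the completion_fields parameter narrows the stats to that sub-field):

GET /articles/_stats/completion?completion_fields=display_name.complete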

Is there a way to reduce the memory usage? Can I somehow start the autocomplete only after a few characters rather than after one? Or could I implement a simpler word suggestion with regular queries instead of the full completion suggester (a rough sketch of what I mean is below)? I'm trying to get something workable for this problem and would appreciate any help!
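This is the sort of query-based alternative I had in mind, just a sketch using match_phrase_prefix against the plain text field rather than the completion field (index name and search text are placeholders; I have not verified how it compares on memory or latency):

// Kibana console request; "articles" and the query text are placeholders
GET /articles/_search
{
  "size": 10,
  "_source": ["display_name"],
  "query": {
    "match_phrase_prefix": {
      "display_name": {
        "query": "quantum comput",
        "max_expansions": 50
      }
    }
  }
}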

Here are the field settings, which I created using Kibana index templates:

display_name": {
    "type": "text",
    "fields": {
      "complete": {
        "type": "completion",
        "analyzer": "simple",
        "preserve_separators": true,
        "preserve_position_increments": true,
        "max_input_length": 50
      },
      "keyword": {
        "type": "keyword"
      }
    }
  },
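For completeness, this is roughly how I query the completion sub-field today (index name, suggestion name, and prefix are placeholders):

// Kibana console request; "articles", "title_suggest", and the prefix are placeholders
GET /articles/_search
{
  "suggest": {
    "title_suggest": {
      "prefix": "quantum comp",
      "completion": {
        "field": "display_name.complete",
        "size": 10,
        "skip_duplicates": true
      }
    }
  }
}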
