Finding the search-as-you-type match for multi-value fields

I have a multi-value field called tags. Let's say my doc has the following values in the tags field:

["test trim", "yellow boat", "nice car"]

If I query for "boa", I want to get back only "yellow boat". However, the result returns all of the field's values.

I am trying to use this feature for autocomplete. I tried the completion suggester, but it only works with prefixes. I want it to work with infix matches as well.

Thanks for reaching out, @elasticfan1. Have you thought about using the n-gram tokenizer or highlighting?

Yeah, I've used search_as_you_type. While it matches the record correctly, it doesn't tell me which field value it matched on. It also returns duplicates.

My use case is autocomplete for titles and tags. It needs to return a fixed number of matches, including infix matches.

search_as_you_type will match the records, but it returns a list of records instead of a list of the field values it matched on. This causes duplicates, and problems when the field holds multiple values.

Are you saying I should use highlighting to find the exact value of the multi-value field it matched on? Is there a more elegant way?

Thanks for following up, @elasticfan1.

You're right about the limitations of search_as_you_type when working with multi-value fields. The highlighting approach is one way to identify which specific field values matched, but there are more elegant solutions.
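To sketch the highlighting approach: assuming tags is mapped as a search_as_you_type field (the index name and field names here are just placeholders), a bool_prefix multi_match combined with a highlight block returns, per hit, only the tag values that actually matched:

GET /my_index/_search
{
  "query": {
    "multi_match": {
      "query": "boa",
      "type": "bool_prefix",
      "fields": ["tags", "tags._2gram", "tags._3gram"]
    }
  },
  "highlight": {
    "fields": {
      "tags": {}
    }
  }
}

Each hit's highlight.tags array then contains fragments for the matching values only, so you can collect those fragments across hits and deduplicate them client-side instead of deduplicating whole records.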

Would setting up an index with edge n-grams work for you?

Wouldn't I get the same issue with edge n-grams? I thought search_as_you_type just creates those n-gram subfields for you automatically?

Are you saying I can use a different analyzer with the completion suggester? If so, what should the settings be?

Record 1:
{
  "title": "hello there",
  "tags": ["yellow boat", "blue hair"]
}

Record 2:
{
  "title": "hello there",
  "tags": ["hot peppers", "green frog"]
}

Thanks, @elasticfan1. I was thinking of creating an index similar to this:

PUT /autocomplete_index
{
  "settings": {
    "index.max_ngram_diff": 18,
    "analysis": {
      "tokenizer": {
        "ngram_tokenizer": {
          "type": "ngram",
          "min_gram": 2,
          "max_gram": 20,
          "token_chars": ["letter", "digit", "whitespace"]
        }
      },
      "analyzer": {
        "custom_ngram": {
          "type": "custom",
          "tokenizer": "ngram_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text"
      },
      "tags": {
        "type": "text",
        "analyzer": "custom_ngram",
        "search_analyzer": "standard"
      }
    }
  }
}
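To illustrate how you could query it (a sketch, not tested against your data): because tags is indexed with the n-gram analyzer but searched with the standard analyzer, the literal query term "boa" matches the stored 3-gram from "yellow boat", and highlighting tells you which of the multiple tag values produced the match:

GET /autocomplete_index/_search
{
  "size": 10,
  "query": {
    "match": { "tags": "boa" }
  },
  "highlight": {
    "fields": {
      "tags": {
        "number_of_fragments": 5
      }
    }
  }
}

The size parameter caps the number of records returned, and the highlight fragments give you the distinct matching field values to show in the suggestion list.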

Let me know if something like this would work for your needs.

Is this index configuration the same as using the search_as_you_type field type?

Sorry for the delay, @elasticfan1. I was traveling for work the past few days and just returned today. I don't believe it's the same, but I'm interested in seeing how your index is currently set up.