If there are only a few possible values for a field, let's say at most 20, is it a good idea to use the edge n-gram tokenizer for autocompletion?

Just curious whether the edge n-gram tokenizer has any advantage over other tokenizers, especially when the set of possible values is finite.

I assume that since the values are finite, the n-grams won't consume much disk space regardless of the number of documents, and the index can easily fit in memory (assuming a limited number of fields per document). Is this true?
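To make the reasoning concrete, here is a small sketch (plain Python, not Elasticsearch itself) of how many distinct edge n-grams a finite vocabulary produces. The `edge_ngrams` helper and the example vocabulary are illustrative assumptions, mimicking an `edge_ngram` tokenizer with `min_gram=1` and `max_gram` equal to the term length:

```python
def edge_ngrams(term, min_gram=1, max_gram=None):
    """Generate prefixes of `term`, like an edge n-gram tokenizer would."""
    if max_gram is None:
        max_gram = len(term)
    return [term[:i] for i in range(min_gram, min(max_gram, len(term)) + 1)]

# Stand-in for the ~20 possible field values.
vocabulary = ["red", "green", "blue"]

# The set of distinct grams across the whole vocabulary.
grams = {g for term in vocabulary for g in edge_ngrams(term)}
print(sorted(grams))
print(len(grams))  # bounded by the total length of the vocabulary terms
```

The number of distinct grams is bounded by the combined length of the unique values, so it does not grow with the number of documents; only the postings (which documents contain each gram) grow, as they would for any tokenizer.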