Elastic auto-complete vs normal querying

Hi,
I am maintaining a separate index for auto-complete suggestions,
While using Elasticsearch's "completion" field type, I ran into a limitation: it only matches on prefixes. I tried tokenizing the input with an n-gram analyzer, but the completion type doesn't seem to support that. So I ended up storing sequential tokens in the "input" property of the completion field, and used a separate field holding the entire text so I could return the full suggestion to the user.
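To make the workaround concrete, this is roughly what my current mapping and documents look like (index and field names are illustrative, and I generate the suffix tokens myself before indexing):

```json
PUT suggestions
{
  "mappings": {
    "properties": {
      "suggest":   { "type": "completion" },
      "full_text": { "type": "keyword" },
      "tenant":    { "type": "keyword" }
    }
  }
}

PUT suggestions/_doc/1
{
  "suggest":   { "input": ["quick brown fox", "brown fox", "fox"] },
  "full_text": "quick brown fox",
  "tenant":    "tenant-1"
}
```

Because the suggester only does prefix matching, I have to pre-compute every suffix ("brown fox", "fox", ...) and put each one into "input" so a query like "bro" can still find the document.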
This meant I had to tokenize the text manually. I'm now considering an alternative: drop the completion type, map the field as "text" with an n-gram or edge n-gram analyzer, and query it with a normal search/match query. This index will only have 3-4 fields: one for the text, and the others to filter documents by category/tenant.
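The alternative I have in mind would look something like this (a sketch only; the analyzer name, gram sizes, and field names are just assumptions I'd tune):

```json
PUT suggestions_v2
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "autocomplete_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 15,
          "token_chars": ["letter", "digit"]
        }
      },
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "text":   { "type": "text", "analyzer": "autocomplete", "search_analyzer": "standard" },
      "tenant": { "type": "keyword" }
    }
  }
}

GET suggestions_v2/_search
{
  "query": {
    "bool": {
      "must":   { "match": { "text": "bro" } },
      "filter": { "term": { "tenant": "tenant-1" } }
    }
  }
}
```

Here the edge n-grams are generated at index time, and "search_analyzer": "standard" keeps the user's query from being n-grammed again, so no manual tokenization is needed on my side.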
Which approach would be better, considering this index will scale to a large number of documents? And what are the advantages of the "completion" type? I'm aiming for better performance, but with the completion type, tokenizing the text is where I got stuck.