How can I achieve fuzzy (substring) matching with Elasticsearch?

I have text that may contain Chinese characters, letters, and numbers. What is the fastest way to search it in Elasticsearch?

An example of the text is as follows:

现在,“讲情不讲法”在农村吃不开、行不通了。群众自觉守法、遇事找法、解决问题靠法的意识不断增强,基层干部运用法治思维、法治方式解决问题的能力明显提升,农村法治建设步履坚实,正成为农业农村持续健康发展的强大支撑和坚固保障

I would like to avoid word segmentation so that any form of substring matching is possible.

For example, if I search for a substring such as "正成为农业", the document above should still be returned.

If the field is mapped as type: keyword and I use a wildcard or regexp query such as *正成为农业*, the query is very slow.
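For reference, the slow approach looks roughly like this; the index and field names are placeholders:

GET my_index/_search
{
  "query": {
    "wildcard": {
      "content": "*正成为农业*"
    }
  }
}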

So now I want to use an ngram tokenizer, which indexes substrings of every length up to n characters, so that I can achieve fuzzy matching:
"Tokenizer": {
"Ngram_tokenizer": {
"Type": "ngram",
"Min_gram": 1,
"Max_gram": 6000
}
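For context, a minimal sketch of how such a tokenizer could be wired into an index, assuming Elasticsearch 7.x and placeholder index and field names. I use single-character grams (min_gram = max_gram = 1) here purely as an illustration, because recent versions limit max_gram minus min_gram to the index.max_ngram_diff setting (default 1), so a 1-to-6000 range would also require raising that setting:

PUT my_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "ngram_tokenizer": {
          "type": "ngram",
          "min_gram": 1,
          "max_gram": 1
        }
      },
      "analyzer": {
        "ngram_analyzer": {
          "tokenizer": "ngram_tokenizer"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "ngram_analyzer"
      }
    }
  }
}

With single-character tokens, a substring such as "正成为农业" could then be found with a match_phrase query, since the characters keep consecutive positions:

GET my_index/_search
{
  "query": {
    "match_phrase": {
      "content": "正成为农业"
    }
  }
}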

But when I try to create the index with this ngram tokenizer, I get an error. Please follow the link below for the error details:
Error details

Any help would be appreciated.
