Context Suggester Input Generation: How to optimize _analyze calls over a large number of documents

We are using the Context Suggester on our documents to support autocompletion.

However, since the Context Suggester does not support a shingle analyzer out of the box, we are trying to generate the shingles ourselves. Our custom analyzer lowercases, stems, and generates shingles.
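For reference, an analyzer along those lines could be defined in the index settings roughly like this (a sketch; the analyzer/filter names, stemmer language, and shingle sizes are assumptions, not our exact config):

```
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "suggest_shingles": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "english_stemmer", "shingle_2_3"]
        }
      },
      "filter": {
        "english_stemmer": { "type": "stemmer", "language": "english" },
        "shingle_2_3": {
          "type": "shingle",
          "min_shingle_size": 2,
          "max_shingle_size": 3
        }
      }
    }
  }
}
```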

I can't find any way to call the _analyze API for a batch of documents in one go. This can potentially slow down our ingestion process, which previously could just bulk-ingest all the documents but now has to wait on an _analyze call for every single document.
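One partial workaround I'm aware of: the _analyze API accepts an array of strings in its `text` parameter, so text from several documents can at least be sent in a single request (sketch below; `my_index` and the analyzer name are placeholders). The catch is that the response is a single flat token list, so mapping tokens back to the individual inputs needs care:

```
GET my_index/_analyze
{
  "analyzer": "suggest_shingles",
  "text": ["first document body", "second document body"]
}
```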

Apart from that, can we use the "analyze" code as a library and analyze locally? Would that require writing a full analyzer plugin?
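For our simple case, another option we're considering is sidestepping _analyze entirely and generating shingles client-side before bulk ingestion. A minimal Python sketch (my own illustration, not Elasticsearch code; stemming is omitted here and would need a library such as Snowball to match the server-side analyzer exactly):

```python
def shingles(text, min_size=2, max_size=3):
    """Lowercase the text, split on whitespace, and emit word shingles
    (token n-grams) of min_size..max_size tokens, similar in spirit to
    Elasticsearch's shingle token filter. No stemming is applied, so
    the output will not match a stemming analyzer exactly."""
    tokens = text.lower().split()
    result = []
    for size in range(min_size, max_size + 1):
        for i in range(len(tokens) - size + 1):
            result.append(" ".join(tokens[i:i + size]))
    return result


print(shingles("Quick Brown Fox Jumps"))
# -> ['quick brown', 'brown fox', 'fox jumps',
#     'quick brown fox', 'brown fox jumps']
```

The generated shingles could then be written into the suggester input field as part of the normal _bulk request, avoiding the extra round trip per document.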

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.