Using a custom PyTorch model in ES8

I would like to understand what it takes to deploy and use a custom PyTorch model in Elasticsearch 8. The desired usage is to get text embeddings for input text from a custom SentenceTransformer model via an Elasticsearch API. I will probably deploy the model using eland.
Is this doable? If so, would it require a Platinum subscription on Elastic Cloud? I am currently trying this locally with a self-managed ES8 deployment.
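For reference, the workflow I have in mind looks roughly like this, sketched from the eland documentation. The cluster URL and model ID are placeholders, and the exact flags and the generated model name should be double-checked against the eland and Elasticsearch ML docs for the version in use:

```sh
# Install eland with PyTorch extras (assumption: a pip-based environment)
python -m pip install 'eland[pytorch]'

# Import a SentenceTransformer model from the Hugging Face Hub into the cluster
# (--url and --hub-model-id are placeholders for my setup)
eland_import_hub_model \
  --url https://localhost:9200 \
  --hub-model-id sentence-transformers/msmarco-MiniLM-L-12-v3 \
  --task-type text_embedding \
  --start
```

After the model is deployed, I would expect to get embeddings through the trained models inference API, along these lines (the model ID below is the name eland would derive from the Hub ID, which may differ):

```
POST _ml/trained_models/sentence-transformers__msmarco-minilm-l-12-v3/_infer
{
  "docs": [{ "text_field": "example sentence to embed" }]
}
```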
