Inference process generates PyTorch error

Hi,
I'm using the .multilingual-e5-small_linux-x86_64 model as part of a hybrid search query. Most of the time it works fine, but it randomly fails with the following notification:

 Inference process [.multilingual-e5-small_linux-x86_64] failed due to [[.multilingual-e5-small_linux-x86_64] pytorch_inference/2426 process stopped unexpectedly: ]. 
This is the [3] failure in 24 hours, and the process will be restarted.

And sometimes, when it happens too often, the process is not restarted:

[.multilingual-e5-small_linux-x86_64] inference process failed after [4] starts in 24 hours, not restarting again.

Is there any way to find more information about what is causing the error?

How are you using this model within the stack (pipeline, ingestion, eland)? On-premise or cloud version?

It's hosted on Azure cloud.
The model is used when importing documents through an ingest pipeline, fed by Logstash:

    "inference": {
      "model_id": ".multilingual-e5-small_linux-x86_64",
      "field_map": {
        "My.Description": "text_field"
      },
      "target_field": "text_embedding"
    }

It is also used when calculating the embeddings for the query text in the kNN part of the search, via an HTTP POST to "/_ml/trained_models/.multilingual-e5-small_linux-x86_64/deployment/_infer".
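That _infer call looks roughly like this sketch (the host `localhost:9200` and the document text are placeholders, not from the thread):

```shell
# Sketch of the _infer request used to embed the query text before the kNN search.
curl -X POST "localhost:9200/_ml/trained_models/.multilingual-e5-small_linux-x86_64/deployment/_infer" \
  -H 'Content-Type: application/json' -d'
{
  "docs": [
    { "text_field": "example query text" }
  ]
}'
```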

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.