Implementing a machine learning API in Rust using the elasticsearch 8.4.0-alpha.1 crate

Hey folks, I am trying to implement a machine learning API in Rust using the elasticsearch 8.4.0-alpha.1 crate, and I am stuck. I want to use an ML model that is already imported into Elasticsearch and converts text into a vector. Please help me out.

pub async fn vector_search(data: web::Json<Value>) -> Result<HttpResponse, CustomError> {
    let client = get_client().unwrap();
    let infer_response = client
        .ml()
        .get_trained_models(elasticsearch::ml::MlGetTrainedModelsParts::None)
        .send()
        .await
        .unwrap();
    let response_id_body = infer_response.json::<Value>().await.unwrap();
    Ok(HttpResponse::Ok().json(json!(response_id_body)))
}

I can't help you with the Rust programming, sorry, but you want to call the _infer API. GET trained models only returns the model configurations.

Here's a blog that describes using Text Embeddings in Elasticsearch: How to deploy NLP: Text Embeddings and Vector Search | Elastic Blog
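For reference, the _infer call is a plain REST request, so it can be tried with curl before wiring it into Rust. This is only a sketch: the model ID below is an example of a deployed text-embedding model, and the credentials are placeholders you must replace.

```shell
# Placeholder credentials and example model ID -- substitute your own.
curl -u elastic:changeme -X POST \
  "http://localhost:9200/_ml/trained_models/sentence-transformers__clip-vit-b-32-multilingual-v1/_infer" \
  -H "Content-Type: application/json" \
  -d '{"docs": [{"text_field": "some text to embed"}]}'
```

The response contains an `inference_results` array whose `predicted_value` field holds the embedding vector.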

Hello @dkyle, thanks for looking into this, but I found the solution. Actually, I found two solutions and one of them worked for me.
First solution: the following function returns the embedding vector for the text passed as an argument.

pub async fn get_vector(text: String) -> Value {
    let client = reqwest::Client::new();
    // Call the _infer endpoint of the deployed text-embedding model directly.
    let res = client
        .post("http://localhost:9200/_ml/trained_models/sentence-transformers__clip-vit-b-32-multilingual-v1/_infer")
        .basic_auth(
            env::var("elasticsearch_username").expect("Please set username in .env"),
            Some(env::var("elasticsearch_password").expect("Please set password in .env")),
        )
        .json(&json!({
            "docs": [{"text_field": text}]
        }))
        .send()
        .await
        .unwrap();
    let response_body = res.json::<Value>().await.unwrap();
    // The embedding is the `predicted_value` of the first inference result.
    response_body["inference_results"][0]["predicted_value"].clone()
}
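As a side note on what to do with the returned vector: text embeddings like these are usually compared by cosine similarity. A minimal stdlib-only Rust sketch (the vectors here are made-up examples, not real model output):

```rust
// Cosine similarity between two embedding vectors, e.g. the
// `predicted_value` arrays returned by the _infer API.
fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    assert_eq!(a.len(), b.len(), "vectors must have the same dimension");
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let norm_b: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    let a = [1.0, 0.0, 1.0];
    let b = [1.0, 0.0, 1.0];
    let c = [0.0, 1.0, 0.0];
    println!("{:.3}", cosine_similarity(&a, &b)); // identical vectors -> 1.000
    println!("{:.3}", cosine_similarity(&a, &c)); // orthogonal vectors -> 0.000
}
```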

Second solution: I found this solution online, but it never worked for me.

  • Define the get_client function to create an Elasticsearch client instance. Make sure to replace http://localhost:9200 with the appropriate Elasticsearch URL:
fn get_client() -> Result<Elasticsearch, Box<dyn std::error::Error>> {
    // Build a transport pointing at a single node, then create the client.
    let transport = Transport::single_node("http://localhost:9200")?;
    Ok(Elasticsearch::new(transport))
}
  • Implement the vector_search function:
pub async fn vector_search(data: web::Json<Value>) -> Result<HttpResponse, CustomError> {
    let client = get_client()?;

    // Retrieve the trained model configurations
    let trained_models_response = client
        .ml()
        .get_trained_models(MlGetTrainedModelsParts::None)
        .send()
        .await?;
    let trained_models = trained_models_response.json::<Value>().await?;

    // Choose the desired model ID
    let model_id = "your_model_id";

    // Perform the inference to get the embedding vector
    let inference_response = client
        .ml()
        .infer_trained_model(MlInferTrainedModelParts::ModelId(model_id))
        .body(json!({
            "docs": [
                {"your_text_field": "your_text_data"}
            ]
        }))
        .send()
        .await?;

    let inferred_vectors = inference_response.json::<Value>().await?;

    Ok(HttpResponse::Ok().json(inferred_vectors))
}

Replace "your_model_id" with the ID of the trained model you want to use. Replace "your_text_field" with the name of the field containing the text data in your Elasticsearch index. Replace "your_text_data" with the actual text you want to convert into a vector.
Note: ML features such as trained model inference require a Platinum (or trial) license of Elasticsearch.
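For completeness, once your documents are indexed with a dense_vector field, the embedding returned by _infer can be used in a kNN search. This is a hedged sketch: the index name (my-index), field name (text_embedding), and the query vector are all assumptions you would replace with your own values.

```
POST my-index/_search
{
  "knn": {
    "field": "text_embedding",
    "query_vector": [0.12, -0.34, 0.56],
    "k": 10,
    "num_candidates": 100
  }
}
```

The query_vector would be the (much longer) array returned in predicted_value by the _infer call above.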


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.