ELSER model loaded in ML node, but do I need it?

My questions:

  • Do ML nodes have to have a Trained Model assigned?
  • Does adding a Trained Model for other features (AI Assistant, KB functionality) cause it to automatically deploy to ML nodes?
  • Is there any way to remove it from an existing ML node it was assigned to? (See the sketch just below this list for what I was thinking of trying.)
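
On that last question: what I was hoping to do (but haven't tried yet) is just stop the deployment on the node rather than delete the model itself. My understanding, which may well be wrong, is that something like this in Dev Tools would do it:

```
# Stop the ELSER deployment so it stops occupying the ML node,
# without deleting the model or its associated pipeline
POST _ml/trained_models/.elser_model_2_linux-x86_64/deployment/_stop

# If it refuses because an ingest pipeline references the model,
# there is apparently a force option
POST _ml/trained_models/.elser_model_2_linux-x86_64/deployment/_stop?force=true
```

Is that the right approach, or does the AI Assistant / knowledge base need the deployment running at all times?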

Top-level issue: I have a small ML node (2 GB) that our MSP spun up for anomaly detection. The first time I tried setting up a job, I got error messages saying there wasn't enough RAM to launch it.

The details: While troubleshooting this, I noticed that ELSER v2 was deployed on the node and crashing out, which makes sense given the docs state it needs at least a 4 GB ML node. From what I've read, a trained model isn't even required for anomaly detection jobs (not sure whether that's correct).
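
For reference, this is roughly how I've been checking what's sitting on the node. I'm assuming the trained model stats show each deployment's node routing and memory requirement, and that the ML memory stats endpoint exists on our version:

```
# Show trained model deployments, which nodes they are routed to,
# and how much memory each deployment requires
GET _ml/trained_models/_stats

# Overall ML memory usage per node (newer versions)
GET _ml/memory/_stats
```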

I did attempt to remove the model, as the MSP suggested, and got the following dialog:

Delete .elser_model_2_linux-x86_64?

.elser_model_2_linux-x86_64 has associated pipelines.

Deleting the trained model and its associated pipeline will permanently remove these resources. Any process configured to send data to the pipeline will no longer be able to do so once you delete the pipeline. Deleting only the trained model will cause failures in the pipeline that depends on the model.

Delete pipeline

  • .kibana-elastic-ai-assistant-ingest-pipeline-knowledge-base
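
Before deleting anything, I also pulled up that pipeline to see what it actually does. I'm assuming the standard get ingest pipeline API will show the processor that references the model:

```
# Inspect the Kibana AI Assistant knowledge-base pipeline that the
# delete dialog says depends on the ELSER model
GET _ingest/pipeline/.kibana-elastic-ai-assistant-ingest-pipeline-knowledge-base
```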

My guess is that this is related to enabling some AI features when we hooked up to a local vLLM instance that we spun up so as not to use a public service, but I'm not placing much confidence in that guess.

I tried searching around a little bit and even went down incorrect rabbit holes thanks to AI, so now I’m here bugging my fellow humans for insight.

Thanks in advance!
