Inference Endpoint Conflict - Multi Agent with LangGraph lab

I’m trying to follow the recent Multi-Agent with LangGraph lab. I’m running into several problems, but the one I’m currently stuck on is an apparent conflict: an inference endpoint seems to be created both when deploying the model to the instance and by the setup script. The names don’t match, and I’m having difficulty cleaning it up. Claude is helping, but I’m still struggling. Here’s where things stand:

The issue is clear now: there's a mismatch between the inference endpoint ID and the model ID. The inference endpoint is looking for a model called "elser-incident-analysis", but the actual model is called ".elser_model_2_linux-x86_64". This is a configuration problem.
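One way to confirm this kind of mismatch is to compare the model ID the endpoint references against the models actually installed in the cluster. Below is a minimal sketch; the config shape assumes the response of `GET _inference/<id>` carries the model under `service_settings.model_id` (an assumption to verify against your cluster), and `find_model_mismatch` is a hypothetical helper, not part of the lab:

```python
def find_model_mismatch(endpoint_config: dict, installed_models: set) -> "str | None":
    """Return the model_id the endpoint references if it is NOT installed,
    or None if the endpoint's model is present in the cluster."""
    model_id = endpoint_config.get("service_settings", {}).get("model_id")
    if model_id and model_id not in installed_models:
        return model_id
    return None

# Example mirroring the situation above: the endpoint points at a model
# name that does not match the actually installed ELSER model.
endpoint = {
    "inference_id": "elser-incident-analysis",
    "service": "elser",
    "service_settings": {"model_id": "elser-incident-analysis"},
}
installed = {".elser_model_2_linux-x86_64"}
print(find_model_mismatch(endpoint, installed))  # → elser-incident-analysis
```

If the function returns a model ID, the endpoint is pointing at something that doesn't exist and needs to be deleted and recreated with the correct `model_id`.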

Run the DELETE command in Kibana Dev Tools as originally suggested:

DELETE _inference/elser-incident-analysis

After running that command, wait 30-60 seconds for the deletion to propagate through the cluster, then run the reset script again:

python3 reset_all.py

The reset script has the logic to detect and use the correct ELSER model name, but it keeps hitting rate limits and the endpoint-persistence issue when trying to recreate it.
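Since the recreation step keeps failing on rate limits, wrapping it in a retry with exponential backoff and jitter can help it eventually succeed. This is a generic sketch, not part of `reset_all.py`; `with_backoff` and its parameters are hypothetical:

```python
import random
import time


def with_backoff(fn, max_attempts=5, base_delay=2.0):
    """Call fn(), retrying on exceptions (e.g. an HTTP 429 from the
    inference API) with exponential backoff plus jitter. Re-raises the
    last exception if all attempts fail."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as exc:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the failure
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            print(f"attempt {attempt + 1} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
```

Usage would be something like `with_backoff(lambda: create_inference_endpoint(...))`, where the endpoint-creation call is whatever the reset script already does.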

Deleting the endpoint doesn’t seem to work:

The endpoint is still corrupted. Now you need to run the DELETE command in Kibana Dev Tools. There's clearly a caching/persistence issue in the Elasticsearch cluster where the inference endpoint exists but isn't properly connected to the model.

The issue appears to be that the endpoint gets recreated with the broken configuration faster than we can create a proper one. The Kibana manual deletion should clear this persistent cache issue.
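Because deletion takes time to propagate, one way to avoid the race is to delete and then poll until the cluster no longer reports the endpoint before attempting to recreate it. A sketch under stated assumptions: `get_endpoint` and `delete_endpoint` are hypothetical callables wrapping your HTTP client, where `get_endpoint` returns the endpoint config or `None` once it is gone:

```python
import time


def delete_and_wait(get_endpoint, delete_endpoint, timeout=60.0, interval=5.0):
    """Delete the inference endpoint, then poll until the cluster stops
    reporting it (or the timeout expires). Returns True once the endpoint
    is confirmed gone, False on timeout."""
    delete_endpoint()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_endpoint() is None:
            return True  # deletion has propagated; safe to recreate
        time.sleep(interval)
    return False
```

Only after this returns True would the reset script recreate the endpoint with the correct `.elser_model_2_linux-x86_64` model ID, so the stale configuration can't win the race.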