Elastic AI Assistant - Function Calls not working from separately hosted Ollama instance running in Docker

Hello,

For testing purposes I've set up ECK on a home server. On the same server there is also an Ollama instance serving the model hf.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF:Q4_K_XL. Because this server has no powerful GPU, I had to raise the actions response timeout to 10 minutes in the Kibana config via `xpack.actions.responseTimeout: 10m`. This works *yay*, but the response times are far from optimal, to put it mildly :sweat_smile:
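In case it helps anyone, this is roughly what that setting looks like; with ECK it goes into the `config` section of the Kibana resource (the 10m value is just what I picked for my slow CPU setup):

```yaml
# kibana.yml / ECK Kibana spec.config
# Raise the action execution timeout so slow LLM responses don't get cut off
xpack.actions.responseTimeout: 10m
```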

Then I decided to host another Ollama instance via Docker on my regular PC, which has a powerful GPU. Responses are super fast, BUT it seems this instance cannot do function/tool calls, and I can't find any hints as to why.

At first I thought it had something to do with ingress-nginx. Since I also run kube-vip on my Kubernetes cluster, I have a way to access Kibana and Elasticsearch directly and bypass the ingress with its fancy certificates managed by cert-manager. But still no success: the "external" Ollama instance still seems to have no access to the downloaded documentation or to the various function calls that are normally provided by the Kibana API.

I even added curl and the ECK-provided CA certificates to a custom build of the Ollama image and started the Ollama Docker instance with `--add-host home-elk-kb-http:192.168.100.194 --add-host home-elk-es-http:192.168.100.193`, so it should theoretically be able to reach Elasticsearch and Kibana without SSL verification errors (at least curl can, without errors). Still no luck: somewhere in between, something must be breaking the function calls.
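Roughly what I did, in case anyone wants to reproduce it (image tag, certificate file name and paths are just how I set it up locally, nothing official):

```dockerfile
# Custom Ollama image with curl and the ECK CA certificate baked in
FROM ollama/ollama:latest
RUN apt-get update && apt-get install -y curl ca-certificates && rm -rf /var/lib/apt/lists/*
# ca.crt was extracted from the ECK-generated HTTP certs secret
COPY ca.crt /usr/local/share/ca-certificates/eck-ca.crt
RUN update-ca-certificates
```

```bash
# Run the custom image and map the ECK service names to the kube-vip addresses
docker run -d --name ollama-gpu \
  --add-host home-elk-kb-http:192.168.100.194 \
  --add-host home-elk-es-http:192.168.100.193 \
  -p 11434:11434 \
  my-ollama-with-ca:latest
```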

Does anyone have an idea? Is it some CORS issue? Some cookie thing, maybe?

Both Ollama instances serve the same model, use nearly identical environment variables, etc. The only difference is that the instance on my PC uses a ROCm image (but I also tried the same image as on Kubernetes).

Thanks for any hints in advance!

ELK version is 9.1, btw.


Oh my god, I think I just found the problem :person_facepalming:

I selected the wrong "OpenAI provider" in the drop-down for my "GPU Ollama" connector!

Instead of selecting "OpenAI", I have to use "Other (OpenAI Compatible Service)". I had always missed that before, because I originally created the "CPU Ollama" connector by calling the Kibana API with curl, so I never went through that drop-down.
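For comparison, the working "CPU Ollama" connector was created with something like this (host, credentials and URL are placeholders; as far as I can tell, `apiProvider: "Other"` is what the "Other (OpenAI Compatible Service)" drop-down option maps to):

```bash
curl -X POST "https://home-elk-kb-http:5601/api/actions/connector" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -u elastic:changeme \
  -d '{
    "name": "CPU Ollama",
    "connector_type_id": ".gen-ai",
    "config": {
      "apiProvider": "Other",
      "apiUrl": "http://home-ollama:11434/v1/chat/completions",
      "defaultModel": "hf.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF:Q4_K_XL"
    },
    "secrets": {
      "apiKey": "ollama"
    }
  }'
```

In the UI, the equivalent is simply picking "Other (OpenAI Compatible Service)" in the OpenAI provider drop-down instead of "OpenAI".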
