AI Assistant with local LLM

Hi,

I’m trying to set up an AI connector for my local LLM.
I have nginx as a reverse proxy in front of it, protected with an API key.
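Roughly, the nginx config looks like this (a simplified sketch rather than my exact config; Ollama's default port 11434 and the key value are placeholders):

# Sketch: proxy /api/ requests to Ollama, rejecting any without the expected key.
server {
    listen 8080;

    location /api/ {
        # $http_authorization carries the incoming Authorization header.
        if ($http_authorization != "Bearer <api-key>") {
            return 401;
        }
        proxy_pass http://127.0.0.1:11434;
        proxy_set_header Host $host;
    }
}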

$ curl -s http://x.x.x.x:8080/api/chat -H "Authorization: $OKEY" -H "Content-Type: application/json" -d '{
  "model": "llama3.1:8b-instruct-q4_K_M",
  "messages": [
    {"role":"system","content":"You are helpful."},
    {"role":"user","content":"Say hello in one short sentence."}
  ],
  "stream": false
}'
{"model":"llama3.1:8b-instruct-q4_K_M","created_at":"2025-10-30T09:19:56.546722543Z","message":{"role":"assistant","content":"Hello!"},"done":true,"done_reason":"stop","total_duration":18342984383,"load_duration":4902469261,"prompt_eval_count":26,"prompt_eval_duration":12297094129,"eval_count":3,"eval_duration":1134860182} $

Now, in Kibana, my connector settings look like this:

And this is the error I get when I test the connector:

Any idea what I have configured wrong?

Thanks for the help!

It looks like an authorization error. Have you checked that the key is correct and that you have access to the specified model using that key? If not, you could try regenerating the key.
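One quick way to check both at once is to list the models through the proxy (same host and port as your chat request; the key value is a placeholder):

curl -s http://x.x.x.x:8080/api/tags -H "Authorization: Bearer <api-key>"

If the key is valid, that should return a model list that includes the model you configured.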

The key works fine with curl, and the model I use is the same as well.

curl -s -H "Authorization: Bearer sk_local_423322d2003e436d78f288ce3f3af4057ccea2e225454da5" http://x.x.x.x:8080/api/tags | jq .
{
  "models": [
    {
      "name": "llama3.1:8b-instruct-q4_K_M",
      "model": "llama3.1:8b-instruct-q4_K_M",
      "modified_at": "2025-10-30T09:05:34.577289698+01:00",
      "size": 4920753328,
      "digest": "46e0c10c039e019119339687c3c1757cc81b9da49709a3b3924863ba87ca666e",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "llama",
        "families": [
          "llama"
        ],
        "parameter_size": "8.0B",
        "quantization_level": "Q4_K_M"
      }
    }
  ]
}

What should I put in the URL field in the connector settings?