AI Assistant with local LLM

Hi,

I’m trying to set up an AI connector for my local LLM.
I have nginx as a reverse proxy in front of it, protected with an API key.

$ curl -s http://x.x.x.x:8080/api/chat \
  -H "Authorization: $OKEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1:8b-instruct-q4_K_M",
    "messages": [
      {"role":"system","content":"You are helpful."},
      {"role":"user","content":"Say hello in one short sentence."}
    ],
    "stream": false
  }'
{"model":"llama3.1:8b-instruct-q4_K_M","created_at":"2025-10-30T09:19:56.546722543Z","message":{"role":"assistant","content":"Hello!"},"done":true,"done_reason":"stop","total_duration":18342984383,"load_duration":4902469261,"prompt_eval_count":26,"prompt_eval_duration":12297094129,"eval_count":3,"eval_duration":1134860182} $

Now in Kibana.

My connector settings look like:

And my test error:

Any idea what I have configured wrong?

Thanks for the help!

It looks like an authorization error. Have you checked that the key is correct and that you have access to the specified model using that key? If not, you could try regenerating the key.

The key works fine with curl, and the model I use is the same as well.

curl -s -H "Authorization: Bearer sk_local_423322d2003e436d78f288ce3f3af4057ccea2e225454da5" http://x.x.x.x:8080/api/tags | jq .
{
  "models": [
    {
      "name": "llama3.1:8b-instruct-q4_K_M",
      "model": "llama3.1:8b-instruct-q4_K_M",
      "modified_at": "2025-10-30T09:05:34.577289698+01:00",
      "size": 4920753328,
      "digest": "46e0c10c039e019119339687c3c1757cc81b9da49709a3b3924863ba87ca666e",
      "details": {
        "parent_model": "",
        "format": "gguf",
        "family": "llama",
        "families": [
          "llama"
        ],
        "parameter_size": "8.0B",
        "quantization_level": "Q4_K_M"
      }
    }
  ]
}

What should I put in the URL field of the connector settings?

Please anyone? :upside_down_face:

Have you checked your NGINX logs? How did you configure it?
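
For example, something like this while re-running the connector test (assuming the default log locations):

tail -f /var/log/nginx/access.log /var/log/nginx/error.log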

Hey Leonard, thanks for the reply.

Nginx config:

/var/log/nginx# cat /etc/nginx/sites-available/ollama
server {
    # Change to 80/443 if desired (443 requires TLS config below)
    listen 8080;
    listen [::]:8080;

    # Optional: restrict who can reach this (uncomment and set your LAN/CIDR)
    # allow 192.168.1.0/24;
    # deny all;

    # Simple static Bearer-token check
    set $expected_key "Bearer sk_local_423322d2003e436d78f288ce3f3af4057ccea2e225454da5";

    # If header doesn't match, return 401
    if ($http_authorization != $expected_key) {
        return 401;
    }

    # Optional: add a WWW-Authenticate header for clarity
    add_header WWW-Authenticate 'Bearer realm="ollama"';

    location / {
        proxy_pass http://127.0.0.1:11434;
        proxy_http_version 1.1;

        # Pass through headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # For streaming
        proxy_buffering off;
    }
}
==> access.log <==
10.10.0.22 - - [06/Nov/2025:15:43:46 +0100] "POST /v1/chat HTTP/1.1" 401 188 "-" "axios/1.12.1"
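
One thing to note: with the config above, a 401 is most likely coming from the if check in Nginx itself whenever the Authorization header doesn't exactly match $expected_key, so the request never reaches Ollama. The same status can be reproduced with a deliberately wrong header, e.g.:

curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer wrong-key" \
  http://x.x.x.x:8080/v1/chat
# prints 401, same as the axios request in the access log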

Curious why the URLs are different:

config ... /v1/chat

and when I curl

curl ... /api/chat

I've been testing around, but nothing seems to work. I have no clue what URL I should use.

$ curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk_local_423322d2003e436d78f288ce3f3af4057ccea2e225454da5" \
  -d '{
    "model": "llama3.1:8b-instruct-q4_K_M",
    "messages": [
      {"role": "user", "content": "Hello, how are you today?"}
    ]
  }'

Works fine. But when I use that URL in the Kibana connector, it doesn't work.

Assuming you are following this

The URL should be:

  1. Under URL, enter the domain name specified in your Nginx configuration file, followed by /v1/chat/completions.
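
With the setup shown earlier in this thread (no server_name set and Nginx listening on port 8080), that would presumably look something like:

http://x.x.x.x:8080/v1/chat/completions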