Kibana 6.2.3 behind nginx reverse proxy in Docker not working


I'm trying to set up Kibana behind my nginx container, but no matter what I do, I am unable to reach it.

My Nginx config reads:

    location /kibana/ {
        auth_request    /admin/users/nginx/auth_request/;
        error_page      401 /401$request_uri;
        proxy_pass      http://kibana:5601;
    }

I've got a kibana.yml that reads:
server.basePath: "/kibana"

but when I try to reach Kibana at https://<>/kibana I get a "too many redirects" error from Chrome. When I look at Kibana's logs I see a 301.

Many thanks in advance.

Can you share the URLs that Kibana is trying to redirect you to? If you use curl or something it might be easier to see. I'm guessing that you need to add a rewrite to remove /kibana from the URL, because even though you've set the basePath to '/kibana', Kibana expects that prefix to be removed before the request is routed to it. In Kibana 6.3+ there is a server.rewriteBasePath setting which you can enable to tell Kibana that it should rewrite requests to {basePath}/* internally, but you'd have to upgrade Kibana and Elasticsearch.
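To make that concrete, here's a sketch of the 6.3+ option, using the /kibana prefix from this thread (double-check against the docs for your exact version):

```yaml
# kibana.yml, Kibana >= 6.3
server.basePath: "/kibana"
# Kibana strips the /kibana prefix itself, so nginx can proxy_pass
# straight through without a rewrite rule:
server.rewriteBasePath: true
```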

@spalger thanks for your guidance. I've included my configs, per your request. Meanwhile, I will also try upgrading and setting server.rewriteBasePath.

My nginx config for location looks like:

    location /kibana/ {
        error_page      401 /401$request_uri;
        rewrite         ^/kibana(.*)$   $1      break;
        proxy_pass      http://kibana:5601/;
    }

My kibana.yml looks like this:
server.basePath: "/kibana"

My goal is to have kibana behind nginx, but when I go to, I get a redirect loop to kibana/ and then back to kibana and on and on.

My understanding was the server.basePath was supposed to resolve this?

Ah, you will need to keep the trailing slash, because /kibana is going to redirect to / and then to /app/kibana, probably.

Thank you for your prompt reply. Which trailing slash are you referring to? Sorry my confusion.

Hmm, actually, that's not right. Kibana shouldn't be redirecting you to /kibana...
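(To answer your question, though: I meant the trailing slash on proxy_pass. It changes how nginx maps the request URI; this is general nginx behaviour, independent of Kibana:)

```nginx
# With a trailing slash, nginx replaces the matched /kibana/ prefix with /:
#   GET /kibana/app/kibana  ->  http://kibana:5601/app/kibana
location /kibana/ {
    proxy_pass http://kibana:5601/;
}

# Without one, the original URI is passed upstream unchanged:
#   GET /kibana/app/kibana  ->  http://kibana:5601/kibana/app/kibana
location /kibana/ {
    proxy_pass http://kibana:5601;
}
```

(Only one of the two blocks would appear in a real config, of course.)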


I wanted to include a curl, per your suggestion:

sh-4.2# curl -v localhost:5601

    * About to connect() to localhost port 5601 (#0)
    *   Trying
    * Connected to localhost ( port 5601 (#0)
    > GET / HTTP/1.1
    > User-Agent: curl/7.29.0
    > Host: localhost:5601
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < kbn-name: kibana
    < kbn-version: 6.3.1
    < cache-control: no-cache
    < content-type: text/html; charset=utf-8
    < content-length: 217
    < accept-ranges: bytes
    < Date: Wed, 21 Nov 2018 15:55:46 GMT
    < Connection: keep-alive

    var hashRoute = '/app/kibana'; var defaultRoute = '/app/kibana';

Hmm, so I guess I'm still a bit confused. I upgraded to 6.3.1 and now my kibana.yml looks like this:

server.basePath: "/kibana"
server.rewriteBasePath: {basePath}/*

Additionally, in my nginx config I removed the trailing slash, so it now looks like this:

    location /kibana/ {
        error_page      401 /401$request_uri;
        rewrite         ^/kibana(.*)$   $1      break;
        proxy_pass      http://kibana:5601;
    }

Currently the issue is that the last redirect does not append the server.basePath to the URL. This means when I go to, I am sent to which sends me to, which is an invalid URL.

My understanding was that server.basePath would prepend the value assigned to it, such that the last redirect would be

Am I understanding the purpose of server.basePath correctly?

Finally I wanted to reach out to @jarpy as he seemed to help with a similar issue 126732.

Thank you again

If you are seeing requests in your Kibana logs, then at least we know that Docker networking is configured correctly. I see that early on you had your Kibana container bound to the loopback interface with, but you have that cleared up now.

I'll leave you with the Kibana experts to hopefully resolve the rewrite dilemma.


Thanks for your advice. It does not appear that I'm seeing requests in the Kibana logs. The logs never change from:

{"type":"log","@timestamp":"2018-11-23T14:14:39Z","tags":["status","plugin:elasticsearch@6.3.1","info"],"pid":1,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch:9200."}

I would greatly appreciate your advice.

Thank you

Oh OK. You had said "When I look at Kibana's logs I see a 301.". Has something changed since then?

It's probably best to re-post complete copies of relevant files like kibana.yml and whatever you are using to orchestrate the containers. Thanks.

Thanks, @jarpy! I'm using docker for orchestration similar to 126732

My kibana.yml:
server.basePath: "/kibana"
server.rewriteBasePath: false
elasticsearch.url: "http://elasticsearch:9200"
false

My location from nginx:

    location /kibana/ {
        auth_request    /admin/users/nginx/auth_request/;
        error_page      401 /401$request_uri;
        rewrite         ^/kibana(.*)$   $1   break;
        proxy_pass      http://kibana:5601;
    }

@spalger @jarpy I don't mean to be a bother (please do let me know if I'm being impolite); I'm just curious whether you have any ideas or insights, or need more info, regarding my issue.

Thank you

Hey, sorry to keep you waiting. The best thing I could do would be to try and replicate what you are seeing. To be honest, I still need a bit more detail.

If you could share exactly how you are starting the containers, and the complete configuration for Kibana (you've posted that, thanks) and Nginx, then maybe I can reproduce it. It's much easier to figure out a problem when you can reproduce it on your own system.


Thanks very much,

docker ps:

CONTAINER ID        IMAGE                                                     COMMAND                  CREATED             STATUS              PORTS                                NAMES
ae10ac3186fc        zentral_kibana                                            "/usr/local/bin/kiba…"   22 hours ago        Up 22 hours         5601/tcp                             zentral_kibana_1_f75df5072617
f0dde9d318a7        nginx:stable                                              "nginx -g 'daemon of…"   22 hours ago        Up 22 hours         80/tcp,>443/tcp         zentral_nginx_1_6023568d470d
3e264a2fe1af        addepar/zentral:latest                                    "/zentral/docker-ent…"   22 hours ago        Up 22 hours                                              zentral_workers_1_68c6c95d417d
b4f0be82986c        addepar/zentral:latest                                    "/zentral/docker-ent…"   22 hours ago        Up 22 hours                                              zentral_web_1_e89efb247b82
c9cba2343031        postgres:10                                               "docker-entrypoint.s…"   22 hours ago        Up 22 hours         5432/tcp                             zentral_db_1_3ed065512ba1
a1850a99db69   "/usr/local/bin/dock…"   22 hours ago        Up 22 hours         9200/tcp, 9300/tcp                   zentral_elastic_1_5c13449ee3f3
ddab859401f5        rabbitmq:3                                                "docker-entrypoint.s…"   22 hours ago        Up 22 hours         4369/tcp, 5671-5672/tcp, 25672/tcp   zentral_rabbitmq_1_b47ec35b2f7e
ec0354f276ed        zentral_promsrv                                           "/bin/prometheus --c…"   22 hours ago        Up 22 hours         9090/tcp                             zentral_promsrv_1_6b2e6317fa4a


My docker-compose.yml:

version: '2'

    image: postgres:10
      - ./conf/start/docker/postgres.env
      ES_JAVA_OPTS: -Xms512m -Xmx512m
      discovery.type: single-node
          - elasticsearch
      - elasticsearch_data:/usr/share/elasticsearch/data
    image: rabbitmq:3
    build: ./conf/start/docker/prometheus/
      - prometheus_sd:/prometheus_sd
    image: nginx:stable
      - "443:443"
      - promsrv
      - web
      - ./conf/start/docker/nginx/conf.d/:/etc/nginx/conf.d/
      - ./conf/start/docker/tls:/etc/nginx/tls
      - web_static_root:/zentral_static
      file: docker-compose.common.yml
      service: app
    command: runserver
      - db
      - elastic
      - rabbitmq
      - web_media_root:/var/zentral
      - web_static_root:/zentral_static
      file: docker-compose.common.yml
      service: app
    command: runworkers --external-hostname workers --prometheus-sd-file /prometheus_sd/workers.yml
      - db
      - elastic
      - rabbitmq
      - web_media_root:/var/zentral
      - prometheus_sd:/prometheus_sd
    build: ./conf/start/docker/kibana



Kibana Dockerfile:

ADD     kibana.yml /etc/kibana/

kibana.yml:
server.basePath: "/kibana"
server.rewriteBasePath: false
elasticsearch.url: "http://elasticsearch:9200"
false

nginx config:

server {
    listen 443 ssl http2;
    server_name zentral;

    ssl_prefer_server_ciphers on;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_certificate /etc/nginx/tls/zentral.crt;
    ssl_certificate_key /etc/nginx/tls/zentral.key;
    ssl_dhparam /etc/nginx/tls/zentral_dhparam.pem;

    location = /favicon.ico {
        return 204;
    }

    location /kibana/ {
        auth_request    /admin/users/nginx/auth_request/;
        error_page      401 /401$request_uri;
        rewrite         ^/kibana(.*)$    $1    break;
        proxy_pass      http://kibana:5601;
    }

    location /prometheus/ {
        auth_request    /admin/users/nginx/auth_request/;
        error_page      401 /401$request_uri;
        proxy_pass      http://promsrv:9090;
    }

    location /admin/users/nginx/auth_request/ {
        proxy_pass              http://web:8000;
        proxy_pass_request_body off;
        proxy_set_header        Content-Length "";
        proxy_set_header        X-Original-URI $request_uri;
    }

    location /401 {
        rewrite ^/401(.*)$ /accounts/login/?next=$1 redirect;
    }

    location /static/ {
        expires max;
        alias /zentral_static/;
    }

    location / {
        proxy_pass         http://web:8000;
        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header   X-Url-Scheme     $scheme;
        client_max_body_size 10m;
    }

    error_page   500  /500.html;
    error_page   503  /503.html;
    error_page   502 504 /50x.html;
    location ~ ^/50[03x].html$ {
        root /home/zentral/server/templates/;
    }
}
I start it all up with a good old `docker-compose up -d`. I've curtailed some nginx cipher configs to conform to the character limit.

Thank you again for your help!

@jarpy @spalger, I have an update. I am getting a 200 from kibana (with the configs shared – sorry for my mistake).

It appears that the path is not getting altered by kibana.yml. I thought server.basePath would prepend its value to the path, such that if I request https://foo/kibana, I'd get back https://foo/kibana/app/kibana/etc; instead, when I request https://foo/kibana, I get back https://foo/app/kibana.

Thank you for your patience with my mistakes, again any guidance you can provide, would be greatly appreciated.

This curl is from the nginx host:

# curl http://kibana:5601
<script>var hashRoute = '/app/kibana';
var defaultRoute = '/app/kibana';

var hash = window.location.hash;
if (hash.length) {
  window.location = hashRoute + hash;
} else {
  window.location = defaultRoute;
}</script>
Thanks! I always ask people to share everything they can. You never know where problems will show up. In fact, I would say that people tend to share snippets that don't have problems, because they share the parts that they are looking at closely and have thus eliminated all the bugs. :slight_smile:

I haven't finished my reproduction yet, but I can already see something very suspicious:

ADD     kibana.yml /etc/kibana/

Our container image does not look for kibana.yml in /etc/kibana, so your settings won't have any effect. The correct location is /usr/share/kibana/config/kibana.yml. This is the standard for Elastic products installed from tarballs (not deb or rpm packages), which is how the images are made.
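So the fix should just be pointing your Dockerfile at the right path, something like this (the base image line is an assumption; I don't know exactly what your build uses):

```dockerfile
# Assumed base image; adjust to whatever your build actually extends.
FROM docker.elastic.co/kibana/kibana-oss:6.3.1
# Tarball-based image: config is read from /usr/share/kibana/config,
# not /etc/kibana.
COPY kibana.yml /usr/share/kibana/config/kibana.yml
```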


I forgot to add: please don't worry about mistakes. We all make plenty of those!

Yes. That did the trick in my repro environment. You'll also want to remove false from kibana.yml since you are running the OSS-only image which does not recognise that setting. The fact that Kibana starts at all shows that it's not picking up the settings. Kinda funny. :slight_smile:
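For future readers, the kibana.yml that worked in my repro environment, once copied to /usr/share/kibana/config/ and paired with the nginx /kibana rewrite posted earlier in the thread, was essentially:

```yaml
server.basePath: "/kibana"
server.rewriteBasePath: false
elasticsearch.url: "http://elasticsearch:9200"
```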

@jarpy wow, that was a silly mistake on my part, but a very helpful bit of information! Thank you very much!

You're most welcome.

Silly and infuriatingly pedantic, that's computers all right!