Kibana docker image doesn't connect to elasticsearch image

I was able to run the Elasticsearch Docker image, log in, and change the default passwords.

This is the run command from the docs:
docker run -p 9200:9200 -e "" -e ""

The Kibana Docker image doesn't connect with defaults (no options), and any env vars I try to set on the command line don't seem to be used. The logs show: Unable to connect to Elasticsearch at http://elasticsearch:9200 instead of my localhost:9200

Please help with the kibana docker run command to start kibana and connect to localhost:9200

Can you try providing a custom kibana.yml file, as detailed here?

The only line you need in it is:
elasticsearch.url: "http://localhost:9200"
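One way to supply that file with docker run is a bind mount. A hedged sketch, assuming the file lives at ./kibana.yml on the Docker host and substituting your actual version tag:

```shell
# Sketch: mount a custom kibana.yml into the official Kibana image.
# The host path (./kibana.yml) and the <version> tag are assumptions;
# /usr/share/kibana/config/kibana.yml is the config path in the official image.
docker run -p 5601:5601 \
  -v "$(pwd)/kibana.yml:/usr/share/kibana/config/kibana.yml" \
  docker.elastic.co/kibana/kibana:<version>
```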

Thanks for the helpful reply Marius!

I can try that, but since I plan to deploy on AWS, it requires several steps that I would prefer to avoid.

For example, I would have to create the file, host it somewhere (github), then when I create the host that I'm going to run docker on, I'd have to copy it to the host.

I suppose I could eliminate the storage and create the file with a command when I create the EC2, but still this requires several steps I'd like to avoid.
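For what it's worth, the file doesn't need to be hosted anywhere: an EC2 user-data script (or any bootstrap command) can write it inline with a heredoc before starting the container. A minimal sketch, assuming /tmp/kibana as a scratch directory on the host:

```shell
# Write the one-line kibana.yml at boot time; no external storage needed.
# The /tmp/kibana path is an assumption for this example.
mkdir -p /tmp/kibana
cat > /tmp/kibana/kibana.yml <<'EOF'
elasticsearch.url: "http://localhost:9200"
EOF
```

The container can then bind-mount /tmp/kibana/kibana.yml as its config file.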

I think it's reasonable that Elastic provide the command to run Kibana and connect to the Elasticsearch cluster that's running with the parameters they provided.

Hi. This one is all about Docker Networking.

There are some steps involved in getting Docker to establish network links and name resolution between containers. It's one of the reasons I tend to recommend getting started with Docker Compose, since it takes care of some of these details.
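As a sketch of the Compose approach (the image paths are the real official ones, but the tag, file location, and layout here are assumptions, not taken from this thread):

```shell
# Sketch: a minimal docker-compose.yml, written here as a heredoc.
# Compose puts both services on one network and makes the service name
# "elasticsearch" resolvable from the kibana container automatically.
mkdir -p /tmp/elastic-demo
cat > /tmp/elastic-demo/docker-compose.yml <<'EOF'
version: "2"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:<version>
  kibana:
    image: docker.elastic.co/kibana/kibana:<version>
    ports:
      - "5601:5601"
EOF
# Then bring the stack up with: docker-compose up -d
```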

The Kibana image tries, by default, to connect to a host/container called elasticsearch. However, the Docker operator must arrange for that name to make sense, and for there to be a network connection between the two containers. The latest, and probably simplest technique that Docker provides for this is "user-defined networks".

Here is a minimal, working example of using user-defined networks with docker run:

docker network create elastic
docker run --network=elastic --name=elasticsearch docker.elastic.co/elasticsearch/elasticsearch:<version>
docker run --network=elastic -p 5601:5601 docker.elastic.co/kibana/kibana:<version>

Here, we create a user-defined network, ensure that both containers are attached to it, and crucially, make sure that the Elasticsearch container is named elasticsearch, just as Kibana expects. Docker will then ensure that the hostname elasticsearch resolves to the correct IP address for any container on that network.
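If you want to see the name resolution working, one quick check (assuming the elastic network and the elasticsearch container from above are already running) is to resolve the name from a throwaway container on the same network:

```shell
# Resolve the name "elasticsearch" from another container attached to
# the same user-defined network; it should print the container's IP.
docker run --rm --network=elastic busybox nslookup elasticsearch
```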

If we don't do any of that, then Docker places the containers on the "default bridge network". From their documentation:

Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers[...]

Also note that trying to refer to the Elasticsearch container as localhost won't work either. Every container thinks that it is, itself, localhost. When you are inside the Kibana container, localhost actually is the Kibana container.

Hopefully that helps to get you started. I highly recommend the Docker Networking documentation for more details and fancy tricks.

Hi Jarpy!

Thanks! This is what made the difference for me:

Now that I realized the localhost issue, I changed my run command as follows and it worked.
> docker run -p 5601:5601 -e ELASTICSEARCH_URL=http://my_hostname:9200 docker.elastic.co/kibana/kibana:<version>

Yes, I'll eventually start on docker networking (or swarm, or something). For now I just want to treat the containerized services as local services.

Great! Glad I could help.

To make containers really "local-ish", you can also try --net=host, which attaches the container directly to the host's network stack. That makes localhost mean the same thing inside and outside the containers, and removes the need to map ports with -p.
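A sketch of that approach (the image tag is an assumption):

```shell
# Host networking: the container shares the host's network namespace,
# so localhost:9200 inside the container is the host's own port 9200.
# No -p mapping is needed (or honored) with --net=host.
docker run --net=host -e ELASTICSEARCH_URL=http://localhost:9200 \
  docker.elastic.co/kibana/kibana:<version>
```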


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.