What do I need to do to access Kibana from another machine once I deploy to Linux?
Problem:
I just provisioned a new Ubuntu Linux box on Azure and installed Elasticsearch + Kibana.
I followed the steps here: https://www.elastic.co/start
But I used the Debian packages, so I started Elasticsearch and Kibana as services. There was no output when I ran 'sudo service kibana start', so I assume the service started correctly.
Now the final step says to open a browser and go to http://localhost:5601, but of course I cannot do that in an SSH session, so I tried http://<my Linux box IP>:5601 from my PC, but no luck; the request times out.
I don't think the firewall is active, but I also tried activating it and opening port 5601, without luck.
sudo /usr/share/kibana/bin/kibana
log [02:14:10.150] [fatal] Error: listen EADDRNOTAVAIL 40.112.217.143:5601
at Object.exports._errnoException (util.js:1026:11)
at exports._exceptionWithHostPort (util.js:1049:20)
at Server._listen2 (net.js:1244:19)
at listen (net.js:1293:10)
at net.js:1403:9
at _combinedTickCallback (internal/process/next_tick.js:77:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
FATAL { Error: listen EADDRNOTAVAIL 40.112.217.143:5601
at Object.exports._errnoException (util.js:1026:11)
at exports._exceptionWithHostPort (util.js:1049:20)
at Server._listen2 (net.js:1244:19)
at listen (net.js:1293:10)
If I change the address to 127.0.0.1, Kibana starts and I can access it locally (wget gives me "access denied" but it connects), but I cannot reach it remotely.
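For anyone hitting the same error: EADDRNOTAVAIL here typically means Kibana is trying to bind to the public IP, which Azure NATs to the VM and never actually assigns to the VM's network interface. A common workaround is to bind Kibana to all interfaces instead; a minimal sketch of the relevant setting (path assumes the Debian package layout):

```yaml
# /etc/kibana/kibana.yml
# Bind to all interfaces rather than the public IP, which is NAT'd by
# Azure and not present on the VM's NIC (hence EADDRNOTAVAIL).
server.host: "0.0.0.0"
```

After changing it, restart the service with 'sudo service kibana restart' and connect from outside via the VM's public IP. Note that without TLS or authentication in front, this exposes Kibana to anyone who can reach port 5601.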
I know this doesn't directly answer your question, but did you check out our ARM template for installing Elasticsearch with Kibana on Azure? The template can install an Elasticsearch cluster in a number of different topologies, complete with Kibana. Deploying from the command line with the Azure CLI (cross-platform) is as simple as:
# login to azure
azure login
# switch CLI to ARM mode
azure config mode arm
# create a resource group
azure group create "<name>" "<location>"
# specify a parameters.json file with the parameters and deploy the cluster using template and parameters
azure group deployment create --template-uri https://raw.githubusercontent.com/elastic/azure-marketplace/master/src/mainTemplate.json --parameters-file parameters/password.parameters.json -g "<name>"
It might save you a lot of time in configuring all of the resources required on Azure yourself (unless of course the aim is to become more acquainted with Azure resources).
I think the problem is that the public IP is not bound to the host's interface; the host only sees 10.0.0.XYZ.
I think I need to configure a public IP on the Azure VM, but I am not sure how to do that; I will investigate more.
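Besides associating a public IP with the VM, inbound traffic on port 5601 also has to be allowed by the network security group attached to the VM's NIC or subnet. With the same cross-platform Azure CLI used above, an NSG rule looks roughly like this (a sketch only: the resource names are placeholders, and exact flag spellings vary between CLI versions):

```shell
# Allow inbound TCP 5601 (Kibana) on the VM's network security group.
# "<name>" and "<nsg-name>" are placeholders for your resource group
# and NSG; verify flags against your CLI version's help output.
azure network nsg rule create \
  --resource-group "<name>" \
  --nsg-name "<nsg-name>" \
  --name allow-kibana \
  --protocol Tcp \
  --destination-port-range 5601 \
  --access Allow \
  --priority 1010 \
  --direction Inbound
```

With the NSG rule in place and Kibana listening on a reachable address, http://<public IP>:5601 should connect from outside.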
No, I did not install X-Pack; I will next time. For now I am reverting to a Windows instance and using Kibana locally by remoting into it. I just needed a test instance, thanks.