Elasticsearch only accessible from localhost

I have Elasticsearch, Logstash and Kibana all running on version 5 on CentOS 7.
I'm still very new to ELK so my understanding might not be correct.

I'm trying to create an Elasticsearch cluster. It's my understanding that I need to assign each node an IP address, so these can then all be put into the config and the nodes can share the load.

My problem is that Elasticsearch will only listen on localhost. Even if I change network.host to an IP address, it cannot be accessed by browsing to it, for example 192.168.0.107:9200.
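
For context, this is the sort of thing I'm aiming for in /etc/elasticsearch/elasticsearch.yml on each node (illustrative only, not my actual file; the second address is just a placeholder for a future node):

# listen on the node's own address instead of just localhost
network.host: 192.168.0.107

# the other nodes in the cluster, so they can find each other
discovery.zen.ping.unicast.hosts: ["192.168.0.107", "192.168.0.108"]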

Any ideas on how I can fix the problem?

Looking at the logs could be a good thing to do.
If you don't understand them, then share them with us.

2016/11/11 10:52:30 [error] 1518#0: *279 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.117, server: 192.168.1.107, request: "POST /elasticsearch/tweet*/_field_stats?level=indices HTTP/1.1", upstream: "http://[::1]:5601/elasticsearch/tweet*/_field_stats?level=indices", host: "192.168.1.107", referrer: "http://192.168.1.107/app/kibana"

This is the only log entry of any relevance that I could find in nginx.
The status of Kibana shows everything as green.
Going to 192.168.1.107:9200 just gives me connection refused, which I guess is what the log above is showing.

You need to change this setting in the Elasticsearch config /etc/elasticsearch/elasticsearch.yml:
network.publish_host
Set it to the IP that you want to access ES on.
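
For example, a minimal sketch, using 192.168.1.107 as a stand-in for the address you want to use:

# the address this node advertises to clients and other nodes
network.publish_host: 192.168.1.107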

A bit more info, here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html

network.host sets network.publish_host too.

I'm thinking there's something wrong with my nginx configuration, although I don't know why this would stop it all from working.

Nginx?
I thought you were connecting directly to the cluster from Kibana.
If you have nginx in the mix, the answer to your question may differ depending on your desired config.

I have nginx running so Kibana can be accessed from another machine; the only config I have for that is forwarding port 80 traffic to 5601.
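
The nginx config is something along these lines (paraphrasing from memory, not my exact file):

server {
    listen 80;
    server_name 192.168.1.107;

    location / {
        # forward all port 80 traffic to Kibana on 5601
        proxy_pass http://localhost:5601;
    }
}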

Then it shouldn't matter, because the connection is from Kibana to Elasticsearch.
Can you do a curl call to the Elasticsearch server on port 9200 from the server running Kibana?
At least this will tell you whether your Kibana machine can access Elasticsearch, and you can focus on Kibana settings and nginx.
Another thing you may want to do first is connect to Kibana directly until it works, so you know it's not nginx getting in the way.
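
For example (adjust the IP to your setup; this assumes the default ports):

# from the machine running Kibana, talk to Elasticsearch directly
curl http://192.168.1.107:9200

# and check that Kibana itself responds, bypassing nginx
curl -I http://192.168.1.107:5601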

I know I haven't been very clear, sorry!

At the moment they are all running on one box. Once I get Elasticsearch listening on something other than localhost, I can then expand and start adding other Elasticsearch nodes, knowing that they are going to be able to connect.

Ah, ok, no worries :slight_smile:
Then that setting I mentioned should do the trick. If it doesn't, come back here and we'll take a look.

I've set both network.host and network.publish_host to 192.168.1.107, which is the IP address of the box,
and restarted all the services just to be sure.

I can still curl localhost:9200 and it works.
If I curl 192.168.1.107 I still get connection refused.

Try network.bind_host then, but network.host should have set both settings to the IP you want to use.

Any errors in the logs?
/var/log/elasticsearch/name_of_the_cluster.log

Just to be sure, you don't have any host firewall running, right?
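
On CentOS 7 you can check both things with something like:

# is firewalld active, and which ports are open?
systemctl is-active firewalld
firewall-cmd --list-ports

# which address is actually listening on port 9200?
ss -tlnp | grep 9200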

Still same problem :cry:
No errors in the log, and Kibana still says everything is ok.
There was a firewall, but I had opened all the ports needed (5601, 80, 9200, ...).
I've now disabled it to try to get it working.
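
For reference, this is roughly how I had opened them (from memory, not the exact commands I ran):

firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=5601/tcp
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --reload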

Does kibana access the data then?

No, nothing is appearing in Kibana.

Are Kibana and Elasticsearch on the same box?

And what's the entry in your log for this one:
[INFO ][o.e.h.HttpServer ] [master] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}

Everything is on the same box. Where do I find that, sorry?

here :slight_smile:
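
Assuming the default package layout mentioned earlier, something like this should pull that line out of the log (the file name depends on your cluster name):

grep HttpServer /var/log/elasticsearch/*.log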