I am trying to break apart my current ELK install, which lives on one machine, into individual machines for Logstash, Elasticsearch, and Kibana. It seems as if connections to ES are being blocked, as neither my Logstash nor my Kibana machine can connect to it. I'm using the IP and the default port (9200). I've set "network.host" to the IP of the ES node in elasticsearch.yml. If I understand correctly, this is how you make ES accessible to external machines on the address set in "network.host", yet this doesn't seem to work. I'm not running any firewalls, and if I put Kibana or Logstash on the same machine as ES, everything works fine. Any help would be greatly appreciated, thanks!
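For reference, the relevant part of elasticsearch.yml would look something like this (a minimal sketch; the IP below is a documentation placeholder, not the actual address from this thread). Note that since Elasticsearch 2.0, a node binds only to localhost unless network.host is set:

```yaml
# elasticsearch.yml -- minimal sketch for exposing ES on a routable address
network.host: 192.0.2.10   # the ES node's own routable IP (placeholder)
http.port: 9200            # default HTTP port; shown here for clarity
```

After changing this, the service has to be restarted, and the startup log should show the node publishing on that address rather than on 127.0.0.1.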
I looked at these logs before posting and found it odd that they don't seem to reference the network.host setting I changed; instead it just looks like it's publishing on localhost. That said, I don't really know how to interpret these logs. Thank you for your help!
I'm getting errors in the logs now after changing that. I'll need to take another look at this with fresh eyes in the morning. I might just rebuild the machine from scratch. Thanks again for your help! I'll report back here if I find the solution or need any additional assistance.
Alright, so I'm still getting connection refused when trying to curl, but the logs at least look like it's publishing on the hostname. I tried curling both 9200 and 9300.
An ifconfig on that box returns the same IP you get when pinging the hostname from another machine, and an nslookup on that IP returns the expected hostname. Pinging itself in an SSH session returns the loopback IP, but that's to be expected. I'm not seeing any DNS wonkiness going on here.
I suspect you are running into the JVM DNS caching issue: at some point in the past, that hostname got mapped to localhost, and now the JVM is stuck always resolving it to that.
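If that turns out to be the cause, the cache lifetime is controlled by the `networkaddress.cache.ttl` security property (a sketch; the exact file location varies by JDK version):

```properties
# $JAVA_HOME/conf/security/java.security (jre/lib/security on older JDKs)
# Seconds to cache successful name lookups; -1 means cache forever.
networkaddress.cache.ttl=60
```

It can also be set per-process with the `-Dsun.net.inetaddr.ttl=60` system property, but either way the JVM has to be restarted to drop an already-cached entry.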
So I rebuilt the entire machine, since there was more manual touching at this point than I like. I've confirmed the config is identical to the one I posted above, and since this was the original config, localhost should not be cached. I can now curl hostname:9200 (not localhost:9200) from the box itself, but not from an external machine. I'm not running iptables, and I've gone ahead and disabled ufw (I'm on Ubuntu), but I'm still getting connection refused. Here's the log from when I restart the service, if it helps.
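A self-contained sketch of the symptom being described here (the port and the `python3 -m http.server` stand-in are examples, not anything from this thread): a server bound only to the loopback interface answers on 127.0.0.1 but refuses connections on the machine's own routable IP, which is exactly the behaviour an Elasticsearch node "publishing on localhost" shows on 9200.

```shell
# Start a throwaway server bound only to loopback (hypothetical port 9201).
python3 -m http.server 9201 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# Reachable via loopback:
lo_code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:9201/)
echo "loopback: $lo_code"

# Refused via the box's own external address:
ext_ip=$(hostname -I 2>/dev/null | awk '{print $1}')
if curl -s -o /dev/null --max-time 2 "http://$ext_ip:9201/"; then
  ext_status="reachable"
else
  ext_status="refused"
fi
echo "external ($ext_ip): $ext_status"

kill $srv 2>/dev/null
```

On the real box, `ss -tlnp | grep 9200` (or `netstat -tlnp`) shows which address the ES process actually bound; `127.0.0.1:9200` there would explain the external connection refused despite the firewall being off.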
I tried the IP before I switched to using the hostname and was getting the same results. Is there something wrong with my config? Why is it still using localhost? And if that's the case, why am I able to curl it by hostname from the box itself, but not via localhost anymore?
I don't use vSphere, so I won't be much help here. My guess is that its resolver stack is doing some funny remapping internally, such that from inside the VM the hostname resolves correctly, but the direct localhost address won't.
Best to ask this question in a VMware forum.
How about using the 0.0.0.0 IP address? Does that work? If it doesn't, I'm out of ideas.
Hmm, I doubt this has anything to do with vSphere. Our application stack has all sorts of load balancing and endpoints all over the place, and I have never had issues like this. I'll try 0.0.0.0, but how is that going to make it externally available?
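For context on the 0.0.0.0 suggestion: it is the wildcard address, so it tells ES to listen on every interface the machine has rather than one specific one, which is what makes the node reachable externally. A hedged sketch (the hostname is an example placeholder):

```yaml
# elasticsearch.yml -- bind everywhere, but advertise one routable address
network.bind_host: 0.0.0.0                  # listen on all interfaces
network.publish_host: es-node.example.com   # address advertised to other nodes/clients
```

Setting `network.host: 0.0.0.0` alone also works for binding, but splitting bind_host and publish_host avoids advertising the wildcard address to the rest of the cluster.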
And I apologize if that came across a bit rude. That was not my intention at all, and I really appreciate the help. I'm just a bit frustrated that I'm still stuck on this.