The default works fine; however, it only binds to localhost. I want to be able to bind Elasticsearch to a non-loopback IP. The docs say this is possible, but setting it to 0.0.0.0 or the actual adapter IP results in the following:
[2016-10-27T14:16:13,023][INFO ][o.e.b.BootstrapCheck ] [ssosearch01] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2016-10-27T14:16:13,024][ERROR][o.e.b.Bootstrap ] [ssosearch01] node validation exception
bootstrap checks failed
The only other value I've set it to that works is ::1, but that doesn't help either. I need a remote Apache server to be able to proxy to it.
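For reference, this is roughly the elasticsearch.yml entry that triggers the failure above (0.0.0.0 is a placeholder meaning "all interfaces"):

```yaml
# elasticsearch.yml (sketch): binding to a non-loopback address
# makes Elasticsearch enforce the production bootstrap checks
network.host: "0.0.0.0"
http.port: 9200
```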
What I just did that worked was to change network.host to http.host. The docs say the setting is network.host, and that's how it's configured on the other 2 nodes I installed.
This is the full block from the log. It didn't generate any Java errors. This is with network.host: "0.0.0.0" in the elasticsearch.yml file. Note that I blanked out my local IP address, but it did pick it up correctly.
Okay, that did work, although I could have sworn I had this issue before I made any JVM changes.
Still, this isn't a "best practices" issue if the service fails to start; it's a requirement that they match. The jvm.options file only says "you should" set them to the same value.
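For anyone else hitting this: the initial and max heap lines in jvm.options have to be identical, something like (2g is just an example size):

```yaml
# jvm.options: Xms (initial heap) and Xmx (max heap) must match,
# otherwise the bootstrap check fails when binding to a
# non-loopback address and the service won't start
-Xms2g
-Xmx2g
```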
Also kind of interesting that I can change network.host to http.host and it ignores this requirement.
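For comparison, this is roughly what the workaround looked like; as far as I can tell it only binds the HTTP layer, which is why it appeared to sidestep the check:

```yaml
# elasticsearch.yml (sketch): http.host only affects the REST/HTTP
# layer, not the transport layer used for inter-node communication
http.host: "0.0.0.0"
```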
I was able to fix the error using the properties below, but I am still facing an issue clustering the 2 nodes.
on node 1:
http.bind_host: my-elasticnode-01
http.publish_host: my-elasticnode-01
on node 2:
http.bind_host: my-elasticnode-02
http.publish_host: my-elasticnode-02
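Not sure if this is your problem, but those http.* settings only cover the REST layer; for the nodes to form a cluster they also need to find each other over the transport layer. In the 5.x era that means something along these lines (a sketch, reusing your hostnames; adjust to your environment):

```yaml
# elasticsearch.yml on each node (sketch, ES 5.x-style settings)
network.host: my-elasticnode-01   # this node's own address
# list both nodes so Zen discovery can find the other one
discovery.zen.ping.unicast.hosts: ["my-elasticnode-01", "my-elasticnode-02"]
# with 2 master-eligible nodes, (n / 2) + 1 = 2
discovery.zen.minimum_master_nodes: 2
```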
@tarun_kayasth I have the same issue. Still no clue at all; I've tried several different settings. Did you have any luck making it work?
If you can share the correct settings, that would be a great help.
Yep, actually in my original case it was due to the bootstrap check failure. Even though the config has separate entries for the initial and max heap size, it requires them to be exactly the same; otherwise the check fails and the service doesn't start.
I think I got it to skip past the error by using a different config directive, but this heap-size mismatch was the actual problem.