node.name: Centos-8
node.roles: [data, master]
bootstrap.memory_lock: true
When I bind to the server's own IP instead of 0.0.0.0, it works.
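For reference, a minimal sketch of the relevant elasticsearch.yml network setting; the interface address 192.0.2.10 is a placeholder, not from my actual config:

```yaml
# Binding to a single, concrete interface address works:
network.host: 192.0.2.10

# Binding to all interfaces is where the failure appears:
# network.host: 0.0.0.0
```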
The error I am getting on both data nodes:
Successfully connected to cluster elasticsearch (localhost) as user cert XXXXXXXXXXXXXXXXXXX
Internal Server Error
Can anyone point me to a reason for this failure, or for why it succeeds, in the ES 6 and ES 8 documentation?
Here XXXXXXXXXXXXXX stands for the certificate details.
This error prints on screen on both nodes. I did not get any other error, so I don't have anything else to post.
You need to get the log from the log file and share it; without it, it is not possible to troubleshoot.
But from the short description you provide, this seems to be related to certificates and authentication, which in your case are not being handled by Elasticsearch but by a third-party plugin.
Any issue with Elasticsearch binding to an IP address/port would cause the service not to start; I doubt you would get an Internal Server Error in that case.
I think this is expected not to work in general. See these docs, particularly (emphasis mine):
To avoid confusion, it is simplest to use a value which resolves to a single address. It is usually a mistake to use 0.0.0.0 as a publish address on hosts with more than one network interface.
If it worked for you in version 6 then that was not by design, but you may have been lucky and got away with it.
That's in addition to what folks have said above: you're using a third-party plugin that mucks around with networking stuff in ways that we do not support or even understand here, but which definitely could explain the fragments of errors you've shared so far. If you can reproduce the problem with the built-in security functionality, after fixing the 0.0.0.0 mistake, then it's more likely we can help.
Binding the transport port to multiple interfaces is probably a mistake.
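If you really do need to listen on every interface of a multi-NIC host, one way to follow the advice above is to bind widely but publish a single resolvable address. A sketch with placeholder addresses (adjust to your own interfaces):

```yaml
# Accept connections on all interfaces...
network.bind_host: 0.0.0.0
# ...but advertise only one concrete, routable address to other nodes:
network.publish_host: 192.0.2.10
```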
This implies that something is wrong with name resolution. You're still making it incredibly hard to help by just not sharing any actual logs or error messages. Just saying "it failed" without any supporting evidence is frustrating for those of us trying our best to help you.
You're right — it seems the name resolution wasn't properly set up. After revisiting the configuration and ensuring proper FQDN resolution for the multi-NIC environment, things are now working as expected.
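As a quick sanity check that each node name resolves to exactly one address on the intended interface, something like the following can help; the hostname passed in is illustrative:

```python
import socket

def resolve(host):
    """Return the set of IPv4 addresses a hostname resolves to."""
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET)
    return {info[4][0] for info in infos}

# Each node name used in elasticsearch.yml should resolve to a
# single address on the expected interface:
print(resolve("localhost"))
```

If a name resolves to more than one address, or to an address on the wrong NIC, nodes can publish an address their peers cannot reach.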
I agree, troubleshooting without complete logs was frustrating — apologies for that.
I'll run a few more validation tests and will follow up here with final confirmation. Thanks again for your support!