I am running ES 5.1.1 on Ubuntu 16.04, kernel: 4.4.0-53-generic
Elasticsearch version: 5.1.1, Build: 5395e21/2016-12-06T12:36:15.409Z, JVM: 1.8.0_111
When I set (in /etc/elasticsearch/elasticsearch.yml) network.host: "0.0.0.0"
or network.host: 0.0.0.0,
Elasticsearch starts and then fails after a few seconds.
When the network.host option is not set, it starts fine but binds only to localhost.
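For reference, this is the fragment in question; in YAML the quoted and unquoted forms are equivalent, so the quoting is not the issue:

```yaml
# /etc/elasticsearch/elasticsearch.yml
network.host: 0.0.0.0   # bind to all interfaces; "0.0.0.0" behaves the same
```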
Logs from journalctl -u elasticsearch.service:
Jan 04 13:17:39 v2-es5-eu systemd[1]: Started Elasticsearch.
Jan 04 13:17:44 v2-es5-eu systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Jan 04 13:17:44 v2-es5-eu systemd[1]: elasticsearch.service: Unit entered failed state.
Jan 04 13:17:44 v2-es5-eu systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
Is anyone having similar issues?
EDIT:
Neither _global_ nor _site_ works either; only _local_ is fine.
The underlying hardware is an EC2 node.
[2017-01-04T13:48:44,898][ERROR][o.e.b.Bootstrap ] [v2-es5-eu] node validation exception
bootstrap checks failed
memory locking requested for elasticsearch process but memory is not locked
This wasn't a problem when running with _local_, since it was only reported as a warning there.
I've set the following in /etc/security/limits.conf:
elasticsearch soft memlock 65536
elasticsearch hard memlock 65536
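One thing worth noting: on a systemd-based distro like Ubuntu 16.04, limits.conf entries do not apply to services started by systemd, so the memlock limit above may never reach the Elasticsearch process. A sketch of a systemd override that raises the limit instead (filename is whatever `systemctl edit elasticsearch` creates):

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```

followed by `systemctl daemon-reload` and a restart of the service.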
@junior_h Thanks, I've set these to 1G, as that is my expected heap size. The problem hasn't gone away.
I came across this post: Memory confusion in Ubuntu 16.04
and realized that I don't need to lock memory at all, since my node has no swap anyway.
I'm not sure whether that is the reason the memory_lock setting doesn't take effect, as the error clearly points to an RLIMIT_MEMLOCK issue.
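If locking really isn't needed (no swap), the simplest way out may be to stop requesting it, so the bootstrap check no longer fires. A sketch, assuming the setting was enabled in elasticsearch.yml:

```yaml
# /etc/elasticsearch/elasticsearch.yml
bootstrap.memory_lock: false   # don't request mlockall; acceptable when the node has no swap
```

After a restart, the effective state can be checked with the nodes info API, e.g. `curl -s 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'`, which should then report `"mlockall" : false`.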