I have a 3-node ELK cluster, with two Logstash servers and one Kibana server. When I start my ELK cluster, two of the nodes fail with an error. It is all related to a JVM memory error. I have attached the error screenshot.
Can you please let me know how to increase the JVM memory? I have added a jvm.options file, but it is not reading the -Xms12g and -Xmx12g settings at all when the cluster starts up. Not sure where I am going wrong.
In this screenshot it shows the node has only 1 GB, while I have allocated -Xms12g and -Xmx12g in the jvm.options file. Not sure why Elasticsearch is taking the JVM heap as 1 GB while the node server has 32 GB available.
Yes, I have put the settings in the jvm.options.d/jvm.options file. One quick question:
in the jvm.options file, is just putting these settings fine? Is this correct?
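For reference, here is a minimal heap override, assuming Elasticsearch 7.x or later, where custom JVM flags go in a file under config/jvm.options.d/ (the filename below is just an example, but it must end in .options):

```
# config/jvm.options.d/heap.options
# Set initial and maximum heap to the same value, per Elastic's recommendation.
-Xms12g
-Xmx12g
```

Note that -Xms and -Xmx should be set to the same value, and the heap should generally stay at or below half of the server's RAM so the OS page cache has room; 12g on a 32 GB server fits that guideline.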
Please don't post pictures of text, logs, or code. They are difficult to read, impossible to search and replicate (if it's code), and some people may not even be able to see them.
I have made the required changes, but even then, when I start the cluster the node fails with the JVM heap memory issue. Not sure where I am going wrong. Can you please help me with this?
I got it resolved. For some reason Elasticsearch was not reading my new jvm.options file. Once it started reading the file, where I had increased the Java heap size, the issue was resolved.
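In case it helps anyone hitting the same problem: one way to confirm the heap the node actually picked up, assuming the cluster is reachable on localhost:9200, is the nodes info API:

```shell
# Show each node's configured maximum heap; it should report 12gb,
# not the 1gb default, once the options file is being read.
curl -s "localhost:9200/_nodes/jvm?filter_path=nodes.*.jvm.mem.heap_max"
```

The startup log line listing the JVM arguments also shows which jvm.options files were read, which is a quick way to spot a file that is being ignored.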