I assume this is a long-debated topic but I was kind of puzzled by this recommendation:
Modifying advanced settings is generally not recommended and could negatively impact performance and stability. Using the Elasticsearch-provided defaults is recommended in most circumstances.
Does this mean, for example, that when using pretty large nodes (with 64 GB of RAM, say), it is still recommended to leave the Xms / Xmx values at their defaults? Don't such low Xm* values on a machine with that much resident memory limit the potential of the stack?
Adding to Mark's answer: in recent versions,
Elasticsearch automatically sets the JVM heap size based on a node’s roles and total memory. Using the default sizing is recommended for most production environments.
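If you do still need an explicit override (for benchmarking, or where the automatic sizing picks something unsuitable), the documented mechanism is to drop a custom options file into `config/jvm.options.d/` rather than editing `jvm.options` itself. A minimal sketch, with hypothetical values (the `heap.options` file name is arbitrary; pick a size that fits your workload, and note the usual advice to keep Xms equal to Xmx and stay below ~32 GB so compressed object pointers remain enabled):

```
# config/jvm.options.d/heap.options
# Pin min and max heap to the same value to avoid resize pauses.
-Xms16g
-Xmx16g
```

Restart the node afterwards; JVM options are only read at startup.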