We have an Elasticsearch cluster of four nodes, each with more than 32 GB of memory allocated, but we found that we end up with too many small segments of roughly 20 MB each. We have tried some settings like:
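For reference, here is a minimal sketch of the kind of diagnostics and merge-related settings that are usually involved in this situation; the index name `my-index` and the values shown are placeholders for illustration, not necessarily the settings we tried:

```
# Inspect the segments per shard (name, doc count, size) to confirm the small-segment pattern
GET _cat/segments/my-index?v&s=size

# Merge policy settings that influence how aggressively small segments get combined
# (the values here are examples only, not recommendations)
PUT my-index/_settings
{
  "index.merge.policy.floor_segment": "50mb",
  "index.merge.policy.max_merged_segment": "5gb"
}

# For indices that are no longer being written to, an explicit force merge
# reduces the segment count directly
POST my-index/_forcemerge?max_num_segments=1
```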
What type of data are you indexing? Is it immutable or are you also updating documents? Are you using your own document id or letting Elasticsearch assign it automatically?