We have an Elasticsearch cluster of four nodes, each allocated more than 32 GB of memory, but we are getting too many small segments of around 20 MB each. We tried some settings like:
But it didn't work.
What can I do to get larger segments, around 512 MB?
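For context, the usual way to collapse many small segments on an index that is no longer being written to is the force merge API. A minimal sketch, assuming a local cluster on port 9200 and a hypothetical index named `my-index`:

```shell
# Force merge "my-index" down to at most one segment per shard.
# Only do this on a read-only index: force-merged segments can grow
# past the normal merge-policy limits and will not be merged again.
curl -X POST "localhost:9200/my-index/_forcemerge?max_num_segments=1"
```

If the index is still being actively written to, force merging is not appropriate; in that case the segment sizes are governed by the merge policy and refresh settings instead.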
Just the same as described in:
What type of data are you indexing? Is it immutable or are you also updating documents? Are you using your own document id or letting Elasticsearch assign it automatically?