I have a few questions on Elasticsearch segment size, segment count, and disk flush/write behavior.
Scheduled Merge Policy Settings
We have about 16 GB of data in a single shard and a total of 1,458,014 documents across 3 shards. Each shard has 100-150 segments, which I assume is high for a 16 GB shard and could be one of the reasons our search response time is high (~300-500 ms, with p99 at 700-800 ms).
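For reference, here is roughly how I am counting segments per shard (a minimal sketch using the elasticsearch-py client; the host URL and the index name my-index are placeholders for our setup):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

# The _segments API returns one entry per shard copy, each listing its segments.
resp = es.indices.segments(index="my-index")
for index_name, index_data in resp["indices"].items():
    for shard_id, shard_copies in index_data["shards"].items():
        for copy in shard_copies:
            print(f"{index_name} shard {shard_id}: {len(copy['segments'])} segments")
```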
I am aware that I can trigger a force merge or run a full reindex to reduce the number of segments, but are there any settings I can change in the scheduled merge policy so that the segment count is kept to, say, ~20-30 at most? If such settings exist, what is the performance impact of changing them?
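These are the kind of tiered merge policy settings I have in mind (just a sketch of what I am considering, not something I have applied; I understand these are expert settings, and the values and defaults noted in the comments are my assumptions):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

# Candidate merge policy tweaks I am considering (dynamic index settings).
# The values here are guesses; I'd like to know if changing them is safe or sensible.
es.indices.put_settings(
    index="my-index",  # placeholder index name
    body={
        "index.merge.policy.segments_per_tier": 5,        # allow fewer segments per tier (default 10, I believe)
        "index.merge.policy.max_merge_at_once": 5,        # should not exceed segments_per_tier
        "index.merge.policy.max_merged_segment": "10gb",  # allow larger merged segments (default 5gb, I believe)
    },
)
```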
Segment Size
What segment size does ES create by default? Is there a way to tweak this?
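For context, this is how I am looking at the current segment sizes (a sketch using the _cat/segments API via elasticsearch-py; my-index is a placeholder):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

# _cat/segments returns one row per segment, including its on-disk size.
rows = es.cat.segments(
    index="my-index",
    format="json",
    h="index,shard,segment,size,docs.count",
)
for row in rows:
    print(row["index"], row["shard"], row["segment"], row["size"], row["docs.count"])
```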
Hard Commit Interval
Is there a setting in ES to configure the disk write interval, like the commit/hard commit settings we have in Solr?
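To make the comparison concrete, these are the ES settings that look closest to Solr's soft commit / hard commit to me (a sketch of my current understanding, which may be wrong and is part of what I am asking):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

# My current mental model (please correct me if this is off):
#  - index.refresh_interval: how often new documents become searchable (roughly Solr's soft commit)
#  - index.translog.*: when the translog is fsynced and flushed (roughly Solr's hard commit)
es.indices.put_settings(
    index="my-index",  # placeholder index name
    body={
        "index.refresh_interval": "30s",
        "index.translog.durability": "async",
        "index.translog.sync_interval": "5s",
        "index.translog.flush_threshold_size": "512mb",
    },
)
```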
Force Merge
If my shard size is 16 GB and I choose to run a force merge with max_num_segments = 16, does that mean I end up with 16 segments of ~1 GB each?
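For reference, this is the call I would run (a sketch via elasticsearch-py; my-index is a placeholder):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

# Force merge the index down to at most 16 segments per shard.
es.indices.forcemerge(index="my-index", max_num_segments=16)
```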