I have more than 1 billion documents in one index with 1 shard and 2 replicas. My use case requires updating Elasticsearch documents frequently, so I now have more than 650 million deleted documents in the index, which degrades search performance.
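For context, this is how I check the deleted-document count and store sizes (host and index name are placeholders, adjust for your cluster):

```shell
# Placeholders: point ES at your cluster and INDEX at the affected index.
ES=http://localhost:9200
INDEX=my-index

# docs.deleted counts soft-deleted documents still held inside segments;
# a high deleted/total ratio inflates segment sizes and slows searches.
curl -s "$ES/_cat/indices/$INDEX?v&h=index,docs.count,docs.deleted,pri.store.size,store.size"
```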
We have 3 Master nodes, 3 Coordinating nodes, 3 Data nodes, and 2 Elastic load balancers.
Number of cores in Master nodes: 4 per node
Number of cores in Data nodes: 8 per node
Number of cores in Coordinating nodes: 4 per node
Number of cores in ES load balancers: 2 per node
Datastore LUN provisioned storage: 414 GB
Heap memory for Data nodes: 30 GB
Heap memory for Master nodes: 4 GB
Heap memory for Coordinating nodes: 4 GB
Swap memory for Data nodes: 4 GB
Swap memory for Master nodes: 4 GB
Swap memory for Coordinating nodes: 4 GB
pri.store.size - 187.9gb
store.size - 563.9gb
Segments created in primary shard:
Segments created in secondary shard (replica):
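To show the per-shard segment picture, this is the query I use (same placeholder host and index name as my cluster):

```shell
# Placeholders: adjust host and index name to your cluster.
ES=http://localhost:9200
INDEX=my-index

# One row per segment; prirep is "p" for the primary shard and "r" for a replica.
# docs.deleted per segment shows where the deleted documents accumulate.
curl -s "$ES/_cat/segments/$INDEX?v&h=shard,prirep,segment,docs.count,docs.deleted,size"
```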
- Is it recommended to run a force merge on this index? If yes, how long will it take? Since I don't know of a reliable way to monitor force merge progress, I need the maximum time it could take.
- Do I need to stop writes to the index during the force merge?
- What is the maximum number of segments I can specify (`max_num_segments`) during the force merge? A higher number of segments per shard will also reduce performance.
- Will search be impacted during the force merge? Do I need to stop searches as well?
- Do I need to increase the shard or replica count to improve search performance?
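For reference, this is what I am planning to run based on my reading of the docs: `only_expunge_deletes` to reclaim space from deleted documents rather than merging down to one huge segment, and the task management API to watch for running merges (please correct me if this is not the right approach; host and index name are placeholders):

```shell
# Placeholders: adjust host and index name to your cluster.
ES=http://localhost:9200
INDEX=my-index

# Expunge deleted documents instead of merging everything into one segment.
# The HTTP call blocks until done, but the merge keeps running on the cluster
# even if the client disconnects.
curl -s -X POST "$ES/$INDEX/_forcemerge?only_expunge_deletes=true"

# In another terminal: list running force merge tasks
# (their action name is indices:admin/forcemerge).
curl -s "$ES/_tasks?actions=*forcemerge*&detailed=true"
```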
Thanks in advance!