I went through some threads on this question, and most of them suggest using Delete By Query, since the Rollover API deletes (or moves to another phase) the old indices and creates new ones based on the given conditions. But Delete By Query is not efficient, and we also have data updates, so if we create a rollover index we will not be able to update documents after they are rolled over to the warm phase. Please advise: we have almost 1 billion documents for 3 months. Could you also suggest a recommended number of shards for that amount of data? We created a single index without rollover, and even simple queries are taking forever.
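For context, here is roughly the shape of the two approaches we are weighing, written out as request bodies. This is only a sketch: the rollover thresholds, the 90-day retention, and the `@timestamp` field are made-up examples, not our real settings.

```python
# Option A: ILM policy with rollover -- roll the write index when it
# reaches a size/age threshold, then delete rolled-over indices after
# the retention window (90d here is an assumed value).
ilm_policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {
                        "max_primary_shard_size": "50gb",  # assumed threshold
                        "max_age": "7d",                   # assumed threshold
                    }
                }
            },
            "delete": {
                "min_age": "90d",            # assumed retention window
                "actions": {"delete": {}},
            },
        }
    }
}

# Option B: periodic Delete By Query on one big index -- the approach
# we were told is inefficient, since it marks documents deleted one by
# one instead of dropping whole indices.
delete_by_query_body = {
    "query": {"range": {"@timestamp": {"lt": "now-90d"}}}
}
```

Option A would be sent via `PUT _ilm/policy/<name>` and Option B via `POST <index>/_delete_by_query`; the question is whether A is compatible with our update pattern.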
Many of the forums suggest Elasticsearch is not good at handling updates, but I am not convinced and wanted to confirm here. Please note we only update data from a single place/app, which reads updates from a Kafka topic and applies them, but our SLA is pretty strict. I also want to know whether there is a definite SLA within which an update will be reflected.
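For reference, the updater is roughly shaped like the sketch below: one function that turns a Kafka message into a partial-update body for `POST <index>/_update/<id>`. The message format (`{"id": ..., "fields": {...}}`) and the function name are hypothetical, just to show the pattern.

```python
import json

def make_update_request(message_value: bytes):
    """Map one Kafka message to an Elasticsearch partial update.

    Returns (doc_id, body) where body is the request body for
    POST <index>/_update/<doc_id>.  The "doc" key merges the given
    fields into the existing document.  The message schema here is
    made up for illustration.
    """
    event = json.loads(message_value)
    # Note: retry_on_conflict (to retry on concurrent version
    # conflicts) goes on the URL as a query parameter, not in the body.
    return event["id"], {"doc": event["fields"]}

# Example message as it might arrive from the Kafka consumer:
doc_id, body = make_update_request(
    b'{"id": "42", "fields": {"status": "shipped"}}'
)
```

The SLA question above is essentially: once this `_update` call returns, how long until the change is visible to searches?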