I'm trying to optimize disk usage on my Elasticsearch cluster (version 8.8), and I want to measure how much disk space is used (or saved) after changing various settings, e.g. dynamic vs. explicit (static) mapping, default compression vs. best_compression, and before/after running the shrink and force merge APIs.
I was thinking I could start with an index where data was ingested using default settings. Then, for each configuration, I would create an index template for a destination index and reindex the original index into it. That way, I could compare the disk space used by each destination index against the original index.
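To make the question concrete, here is a sketch of what I had in mind for one configuration (best_compression), using hypothetical index names `logs-original` and `logs-best-compression`:

```
# 1. Template for the destination index, enabling best_compression
PUT _index_template/best-compression-template
{
  "index_patterns": ["logs-best-compression"],
  "template": {
    "settings": {
      "index.codec": "best_compression"
    }
  }
}

# 2. Reindex the baseline data into the destination index
POST _reindex
{
  "source": { "index": "logs-original" },
  "dest":   { "index": "logs-best-compression" }
}

# 3. Force merge both indices down to one segment so segment
#    count doesn't skew the comparison
POST logs-original,logs-best-compression/_forcemerge?max_num_segments=1

# 4. Compare on-disk size
GET _cat/indices/logs-original,logs-best-compression?v&h=index,pri.store.size,docs.count
```

I'd repeat the same reindex-and-compare cycle for each of the other configurations.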
Is this the right way to go?