Transform vs update vs recreating index for historical/last values indexes

On our current project we are looking at an approach with two indices: one where all of our data is stored continuously, with a new document created for each data change (let's call it the "main index"), and another where only the last valid state of each element is stored (let's call it the "last value index").

While studying ways to implement this, we have come up with three different approaches, and we wonder which one is better:

We could use a transform over the "main index" to create and maintain the "last value index". This should give us both an index with all the data and another with just the most recently entered document for each element present in the main index.
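For what it's worth, Elasticsearch has a "latest" transform type (available since 7.12) that does exactly this. A minimal sketch of such a transform definition follows; the index names and the `element_id`/`timestamp` fields are assumptions and would have to match your actual mappings:

```python
# Sketch of a "latest" transform definition. One output document is kept
# per unique key, chosen as the newest document according to the sort field.
# Index and field names here are placeholders.
transform = {
    "source": {"index": "main-index"},
    "dest": {"index": "last-value-index"},
    "latest": {
        "unique_key": ["element_id"],  # one result doc per element
        "sort": "timestamp",           # the newest doc per element wins
    },
    "sync": {
        # makes the transform continuous, picking up newly ingested docs
        "time": {"field": "timestamp", "delay": "60s"}
    },
}
```

This body would be created with `PUT _transform/<transform_id>` and started with `POST _transform/<transform_id>/_start`, after which Elasticsearch keeps the destination index up to date on its own.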

This seems more or less equivalent to extending our ETL process, which already feeds the "main index", so that it also feeds the "last value index", inserting new documents and updating those for which more recent data arrives. We tend to think that transforms will be more efficient, but we wonder whether, in our scenario (a periodic ETL process that loads thousands of new documents and would have to update hundreds or even thousands of documents in the "last value index" on each run), a fully ETL-based approach might actually perform better.
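A minimal sketch of what that ETL-side alternative could look like, building a newline-delimited payload for the Elasticsearch `_bulk` endpoint. Indexing each document with the element id as `_id` makes the operation an overwrite, so the "last value index" naturally keeps only the most recent document per element. The index name and `element_id` field are assumptions:

```python
import json

def bulk_upsert_lines(docs, index="last-value-index", id_field="element_id"):
    """Build a newline-delimited JSON body for the Elasticsearch _bulk API.

    Using the element id as the document _id means a newer document for the
    same element simply overwrites the older one, which is exactly the
    "last value" semantics we want.
    """
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_id": doc[id_field]}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

payload = bulk_upsert_lines([
    {"element_id": "a", "state": "open"},
    {"element_id": "b", "state": "closed"},
])
```

The resulting string would be sent as the body of `POST _bulk`; each ETL run can stream its thousands of documents through this in a handful of bulk requests.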

Last but not least, our "last value index" will be used mainly for dashboards and queries that look for data in very specific states. Since updating documents in place is, as far as I know, not optimal, we wonder whether it would be better to filter the specific data we want to query during the ETL process and simply erase all previous content of the "last value index" before loading the "pre-queried" data.
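If we went this route, one standard pattern (rather than deleting documents inside a live index) would be to load each run's pre-filtered data into a fresh index and then atomically repoint an alias that the dashboards query. A sketch of the body for the Elasticsearch `_aliases` endpoint, with placeholder names:

```python
# Sketch of an atomic alias swap: dashboards query the alias, never a
# concrete index, so they never see a half-loaded rebuild.
# All names below are placeholders.
def alias_swap_body(alias, old_index, new_index):
    """Build a POST _aliases body that moves `alias` from old to new index."""
    return {
        "actions": [
            {"remove": {"index": old_index, "alias": alias}},
            {"add": {"index": new_index, "alias": alias}},
        ]
    }

body = alias_swap_body("last-values", "last-values-000001", "last-values-000002")
```

Both actions in one `POST _aliases` request are applied atomically, and the old index can be dropped afterwards, which avoids the cost of mass deletes and updates entirely.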

We're just getting started in this ELK/non-relational world and are learning new things on a daily basis, so maybe our question is absurd or not correctly focused; any explanation of which approach is not a good idea, and why, would be much appreciated.