I'm trying to compress log data I have in a monitoring cluster that is configured to be erased after 7 days, and I'm not sure which path I should follow, rollup jobs or downsampling, in order to compress that data into a single summary document for each day.
According to the docs, rollup jobs are already deprecated, and downsampling only works for time series data streams (TSDS), which my index is not. The TSDS docs also state: "For other timestamped data, such as logs or traces, use a regular data stream."
I've already created a rollup job to try it out, and I think I managed to get what I want from it: the daily number of users logging in to a separate cluster we manage. But it still bugs me that rollups are deprecated.
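For context, the rollup job I set up looks roughly like this (the index names, field names, and cron schedule here are placeholders, not my actual config):

```json
PUT _rollup/job/daily_logins
{
  "index_pattern": "monitoring-logs-*",
  "rollup_index": "monitoring-logs-rollup",
  "cron": "0 0 1 * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "@timestamp",
      "calendar_interval": "1d"
    },
    "terms": {
      "fields": ["user.name"]
    }
  },
  "metrics": [
    { "field": "event.count", "metrics": ["sum"] }
  ]
}
```

This buckets the source index by day and by user, which is enough for the daily login count I'm after.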
I don't know the possible impact of modifying the current monitoring index mapping to switch the index to time series mode and mark the relevant fields as dimensions. I don't even know if I have the privileges to do so (but I can certainly ask for them). Should I stick with rollup jobs even though they are deprecated, or should I make the effort to transition to downsampling?
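If I understand the downsampling path correctly, the change would mean something like the following index template: a hypothetical sketch with assumed field names, not our actual mapping:

```json
PUT _index_template/monitoring-tsds
{
  "index_patterns": ["monitoring-metrics-*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.mode": "time_series"
    },
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "user.name": {
          "type": "keyword",
          "time_series_dimension": true
        },
        "event.count": {
          "type": "long",
          "time_series_metric": "gauge"
        }
      }
    }
  }
}
```

My worry is whether forcing the monitoring index into this mode is safe, given that the docs say TSDS is meant for metrics rather than logs.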