Avoid eventual consistency in time series event reads

Hi, I'm logging time series events, and some clients read from this index sequentially: they take the top X items filtered by a fixed tenant query, and on the next poll they continue by filtering on an ordered id > the previous one.
Logstash acts as a bridge from RabbitMQ, batching the inserts.
The data is replicated for HA across 2 replicas.
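
To make the read pattern concrete, each poll looks roughly like this (a minimal Python sketch; the `events` index and the `tenant`/`event_id` fields are made-up names):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def poll(tenant_id: str, last_seen_id: int, page_size: int = 100):
    """Fetch the next batch of events for one tenant, ordered by a monotonic id."""
    resp = es.search(
        index="events",  # hypothetical index name
        query={
            "bool": {
                "filter": [
                    {"term": {"tenant": tenant_id}},  # fixed tenant filter
                    {"range": {"event_id": {"gt": last_seen_id}}},  # resume after last poll
                ]
            }
        },
        sort=[{"event_id": "asc"}],
        size=page_size,
    )
    return [hit["_source"] for hit in resp["hits"]["hits"]]
```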

I want to avoid eventual consistency for the most recently inserted items, so that clients don't skip them and then jump past them on the next poll.

What is a good practice here?
Can I always filter out data inserted in the last few minutes (1-2) to avoid missing data on the replicas? I can monitor my nodes to understand the insert rate and keep up with the ~1s near-real-time refresh buffer.
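
What I mean is adding an upper bound on the insert timestamp, so clients only read data old enough that every replica should already have it. A sketch, assuming a `@timestamp` field set at insert time and reusing the made-up names above:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Only read events older than the safety window, so the most recent
# (possibly not yet refreshed/replicated) documents are excluded.
resp = es.search(
    index="events",
    query={
        "bool": {
            "filter": [
                {"term": {"tenant": "tenant-42"}},             # example tenant
                {"range": {"event_id": {"gt": 1000}}},         # example cursor
                {"range": {"@timestamp": {"lte": "now-2m"}}},  # 2m window, to be tuned
            ]
        }
    },
    sort=[{"event_id": "asc"}],
    size=100,
)
```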

Alternatively, can I route each client to always read from fixed replicas, so it always sees data in the same sync state and on the next poll can continue reading without losing data?
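
Concretely, I'm thinking of the `preference` search parameter: passing the same custom string should route successive searches from one client to the same shard copies. A sketch of that idea:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def poll_pinned(client_id: str, tenant_id: str, last_seen_id: int):
    # The same preference string routes this client's searches to the same
    # shard copies on every poll, so its view advances consistently.
    return es.search(
        index="events",
        preference=f"client-{client_id}",
        query={
            "bool": {
                "filter": [
                    {"term": {"tenant": tenant_id}},
                    {"range": {"event_id": {"gt": last_seen_id}}},
                ]
            }
        },
        sort=[{"event_id": "asc"}],
        size=100,
    )
```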

Thanks, Luca.
