Hi,
We have two Logstash pods that read the last 24 hours of data from a single Elasticsearch index and then send that data to a Kafka server.
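For context, each pod runs a pipeline roughly like the sketch below; the hosts, index name, query window, and topic are placeholders rather than our actual values:

```
input {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]   # placeholder Elasticsearch host
    index => "my-index"                      # placeholder source index
    # restrict the read to documents from the last 24 hours
    query => '{ "query": { "range": { "@timestamp": { "gte": "now-24h" } } } }'
  }
}

output {
  kafka {
    bootstrap_servers => "kafka:9092"        # placeholder Kafka broker
    topic_id          => "my-topic"          # placeholder topic
    codec             => json
  }
}
```

Both pods run this same pipeline.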
We can see that two different offsets are created in Kafka for the same log message.
Is there any way to avoid this duplication in Kafka?
Could someone please help us resolve this issue?