Hi there,
I changed my setup from filebeat -> logstash to filebeat -> redis -> multiple logstash.
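For reference, the Logstash side now reads from Redis roughly like this (host, key, and thread count are placeholders, not my exact values):

```
input {
  redis {
    host      => "redis.example.org"  # placeholder Redis host
    data_type => "list"               # consuming from a Redis list
    key       => "filebeat"           # placeholder list key filebeat writes to
    threads   => 4                    # multiple input threads on the same key
  }
}
```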
Since we are still migrating from 5.1.2 to 6.2.3, we are running both stacks in parallel.
Now we noticed that the two stacks have different document counts.
We are not setting an explicit document id, so Elasticsearch auto-generates an _id for each document. That means the same data could be indexed multiple times without any errors.
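As I understand it, one way to make the indexing idempotent would be a fingerprint filter plus an explicit document_id, something like this sketch (the source field, key, and hosts are placeholders, not my current config):

```
filter {
  fingerprint {
    source => ["message"]                   # placeholder: field(s) to hash
    target => "[@metadata][fingerprint]"
    method => "SHA256"
    key    => "some-static-key"             # placeholder HMAC key
  }
}

output {
  elasticsearch {
    hosts       => ["es.example.org:9200"]  # placeholder
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```

With an explicit document_id, indexing the same event twice would update the existing document instead of creating a second one.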
Is it safe to have a Logstash pipeline running with multiple workers against a single Redis key?
And is it safe to run this pipeline in multiple Logstash instances?
Or can different Logstash instances pull the same data from Redis and thereby produce duplicates in Elasticsearch?
Currently I am trying to find out whether I already have duplicates in ES. But I would appreciate your insight on whether my setup described above was too naive.
Thanks, Andreas