Hi,
We built a Filebeat + Logstash + Elasticsearch + Kibana pipeline for a Kafka cluster. When an issue occurs in the cluster, the same logs related to that issue keep being generated until the issue is resolved, and they take up a lot of space in our Elasticsearch.
Is there any way in Logstash to filter out these continuously generated duplicate logs?
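For example, would something like the `fingerprint` filter work, hashing the log message and using the hash as the Elasticsearch document ID so repeated messages overwrite each other instead of piling up? A rough sketch of what we have in mind (the `message` field name, hosts, and index pattern are just assumptions for illustration):

```
filter {
  fingerprint {
    # Hash the log message text (assuming it lives in the "message" field)
    source => ["message"]
    target => "[@metadata][fingerprint]"
    method => "SHA256"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "kafka-logs-%{+YYYY.MM.dd}"
    # Reuse the hash as the document ID: identical messages index to the
    # same document instead of creating a new one each time
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```

Or is there a better pattern for this, e.g. rate-limiting repeated events before they reach Elasticsearch?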
Thanks!