Hi,
I use Logstash 5.6.8 with a configuration along the lines of the sketch below to forward logs from my Elasticsearch cluster to a syslog server. I have scheduled the input plugin to run every minute, and I can see the same old logs being read every minute and sent to the syslog server. How can I avoid this duplication of logs?
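Roughly, the pipeline looks like this (the hosts, index, and syslog destination below are placeholders, not my real values):

```
input {
  elasticsearch {
    hosts    => ["localhost:9200"]            # placeholder ES host
    index    => "logs-*"                      # placeholder index pattern
    query    => '{ "query": { "match_all": {} } }'
    schedule => "* * * * *"                   # run once a minute
  }
}

output {
  syslog {
    host     => "syslog.example.com"          # placeholder syslog server
    port     => 514
    protocol => "udp"
  }
}
```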
The elasticsearch input doesn't have any functionality for skipping already-processed documents, so there's no simple way of avoiding duplicates with the design you've chosen.
How do the documents end up in ES? Would it be possible to hook into the pipeline earlier on?
Logs are written to Elasticsearch by another team from multiple sources. I do not have control over that, but I am allowed to read from Elasticsearch. So I am using Logstash to read from Elasticsearch and forward the logs to a syslog server.
Is there nothing like the sincedb for the file input, which keeps track of the last record read?
That's a flawed architecture. Don't use ES as a message-passing mechanism.