I am able to push the logs from my machine to Kibana.
Now the issue is that whenever I make a change to the configuration, I run every service again to index the logs in Kibana. Each time, the old logs are appended again into the new run.
How can I make Logstash ignore logs on the source server once they have already reached Kibana?
Gatewaylogs are web service logs that live on the server, which is why I mentioned Gatewaylogs.
By old logs I mean: the first time I ran the Logstash pipeline, log1 was pushed to Kibana. After adding one more log path to the configuration and running it again, it picked up both the first run's log1 and the second run's log2.
So now I am seeing log1 two times and log2 one time.
Yes, but why are the logs even read a second time? A standard configuration of Logstash won't read the same file twice. What does your configuration look like?
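For reference, this is roughly how a standard file input tracks its position: Logstash keeps a sincedb file recording how far it has read into each monitored file, so a restart resumes instead of re-reading from the top. A minimal sketch (the paths and filenames here are hypothetical, not your actual config):

```
input {
  file {
    # Hypothetical path; adjust to wherever your Gatewaylogs live
    path => "/var/log/gateway/*.log"
    # Logstash records its read position per file here, so it
    # does not re-read the same content on a restart
    sincedb_path => "/var/lib/logstash/sincedb_gateway"
    # "beginning" only applies to files Logstash has never seen before
    start_position => "beginning"
  }
}
```

If the sincedb file is being deleted between runs, or each run uses a different sincedb path, that would explain files being read again from the start.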
I'm sure the kafka input records the current position so it doesn't process everything from the beginning each time Logstash is run, but I haven't used it myself so I can't really help out.
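To illustrate the idea: the kafka input commits offsets per consumer group, so as long as the group_id stays the same between runs, a restarted Logstash resumes from the last committed offset rather than re-consuming the topic. A hedged sketch (broker, topic, and group names are made up):

```
input {
  kafka {
    # Hypothetical broker and topic names
    bootstrap_servers => "localhost:9092"
    topics => ["gateway-logs"]
    # Offsets are committed against this group; keep it stable
    # across restarts so consumption resumes where it left off
    group_id => "logstash-indexer"
    # Only takes effect when no committed offset exists yet
    auto_offset_reset => "earliest"
  }
}
```

If the group_id changes between runs, or auto_offset_reset is left at "earliest" with a fresh group, everything in the topic would be consumed again.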
As I told you initially, my architecture is not that good, but I have to make it work using that architecture.
I am shipping the logs using Logstash, then Kafka passes the logs on to another Logstash instance, and finally they end up in Kibana via Elasticsearch.
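For clarity, the pipeline described above (Logstash shipper → Kafka → Logstash indexer → Elasticsearch → Kibana) could look roughly like this; all hostnames, topic names, and index names are placeholders, not taken from the actual setup:

```
# Shipper Logstash: reads files and publishes to Kafka
output {
  kafka {
    bootstrap_servers => "localhost:9092"
    topic_id => "gateway-logs"
  }
}

# Indexer Logstash: consumes from Kafka and writes to Elasticsearch
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Daily indices, visible in Kibana via an index pattern
    index => "gateway-logs-%{+YYYY.MM.dd}"
  }
}
```

In this layout, duplicate delivery can come from either stage: the shipper re-reading files (sincedb) or the indexer re-consuming the topic (consumer group offsets), so both are worth checking.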