How to ignore old logs

Hi All,

I have a scenario like the one below.

Gatewaylogs --> Logstash --> Kafka --> Logstash --> Elasticsearch --> Kibana

I am able to push the logs from my machine to Kibana.
The issue is that whenever I make some changes to the configuration, I run every service again to index the logs in Kibana, and at the same time the old logs get appended again in the new run.

How can I ignore the logs from the source server once they have reached Kibana?

Regards
Raja

Why are old logs being sent in the first place? What is this "Gatewaylogs" component?

Gatewaylogs are web service logs that sit on the server, which is why I called the component Gatewaylogs.

By old logs I mean: the first time I ran the Logstash pipeline, log1 was pushed to Kibana. After adding one more log path to the configuration and running it one more time, it picked up log1 from the first run and log2 from the second run.

So now I am seeing log1 two times and log2 one time.

Yes, but why are the logs even read a second time? A standard configuration of Logstash won't read the same file twice. What does your configuration look like?
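For reference, on the shipping side a file input keeps track of how far it has read each file in a sincedb file and resumes from there on restart. A minimal sketch of what that looks like (the paths here are placeholders, not your actual configuration):

input
{
  file
  {
    # placeholder path for illustration only
    path => ["/var/log/gateway/*.log"]
    # read newly discovered files from the start on the first run
    start_position => "beginning"
    # Logstash records the read offset here and resumes from it on
    # restart; pointing this at /dev/null would make every run
    # re-read the files from the beginning
    sincedb_path => "/var/lib/logstash/sincedb-gateway"
  }
}

If something like sincedb_path => "/dev/null" is set on the shipper, that alone would explain log1 being sent again on the second run.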

Here I am getting the logs from Kafka.

input
{
  kafka
  {
    bootstrap_servers => "x.x.x.x:9092"
    topics => ["logstash"]
    group_id => "test-consumer-group"
    consumer_threads => 1
    codec => "json"
    # note: an input's type setting does not override a type that is
    # already present in the JSON-decoded event
    type => "%{type}"
  }
}
output
{
  elasticsearch
  {
    hosts => ["x.x.x.x:9200"]
  }
  stdout { codec => rubydebug }
}

I'm sure the kafka input records the current position so it doesn't process everything from the beginning each time Logstash is run, but I haven't used it myself so I can't really help out.
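From the documentation, that position is tracked as Kafka consumer group offsets, so something along these lines on the consumer side should avoid replaying the topic (a sketch only, not verified by me):

input
{
  kafka
  {
    bootstrap_servers => "x.x.x.x:9092"
    topics => ["logstash"]
    # offsets are stored per group_id; keep it stable across runs so
    # consumption resumes where it left off
    group_id => "test-consumer-group"
    # when no committed offset exists yet, start at the newest
    # messages instead of replaying the whole topic
    auto_offset_reset => "latest"
    # periodically commit consumed offsets back to Kafka (the default)
    enable_auto_commit => "true"
    codec => "json"
  }
}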

Okay, thank you Magnus.

As I told you initially, my architecture is not that good, but I have to make it work using that architecture.

I am shipping the logs using Logstash, then Kafka just passes them on to another Logstash, and finally they end up in Kibana through Elasticsearch.
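Since events can be replayed at more than one point in this chain (a re-read file on the shipper, a reset consumer offset on the indexer), one defensive option on the indexing Logstash is to derive the Elasticsearch document ID from the event content, so a replayed event overwrites itself instead of creating a duplicate. A minimal sketch, assuming the message field uniquely identifies a log line:

filter
{
  fingerprint
  {
    # hash the raw log line; adjust source to whatever combination of
    # fields uniquely identifies an event
    source => "message"
    target => "[@metadata][fingerprint]"
    # MURMUR3 is fast; a SHA method combined with the key option gives
    # a stronger (HMAC) hash if collisions are a concern
    method => "MURMUR3"
  }
}
output
{
  elasticsearch
  {
    hosts => ["x.x.x.x:9200"]
    # a replayed event with the same fingerprint updates the existing
    # document instead of being indexed a second time
    document_id => "%{[@metadata][fingerprint]}"
  }
}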
