Hello,
We have set up a log pipeline as below:
filebeat -> kafka -> logstash -> elasticsearch
- Filebeat is running on many servers, all sending to a centralized Kafka server with one topic and 20 partitions.
- We have 2 Logstash servers, each with 10 consumer threads reading events from Kafka. Both Logstash servers belong to the same consumer group (group name is 'logstash').
- Logstash writes to the Elasticsearch servers.
When I search the logs in Kibana, we see duplicate logs. Some events appear 2 times, 3 times, even 5 times, and it is random. Let us know how to fix the issue.
Logstash input plugin config:

input {
  kafka {
    bootstrap_servers => "kafka-server:9092"
    group_id => "logstash"
    topics => ["filebeat", "jmeter"]
    codec => "json"
    consumer_threads => 10
  }
}
Logstash output config:

output {
  if [fields][log_type] == "jtl" or [fields][log_type] == "jmeter.log" {
    elasticsearch {
      hosts => ["server1", "server2", "server3", "server4"]
      manage_template => false
      index => "jmeter-%{+YYYY.ww}"
      document_type => "log"
      #document_id => "%{[@metadata][fingerprint]}"
    }
  } else {
    elasticsearch {
      hosts => ["server1", "server2", "server3", "server4"]
      manage_template => false
      index => "filebeat-%{+YYYY.MM.dd}"
      document_type => "log"
      #document_id => "%{[@metadata][fingerprint]}"
    }
  }
}
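Note: the commented-out document_id setting above references [@metadata][fingerprint], which would have to be populated by a fingerprint filter. A minimal sketch of such a filter (assuming the message field alone is distinctive enough to deduplicate on; the key value is an arbitrary placeholder):

```
filter {
  fingerprint {
    # Hash the raw log line; identical events then get identical IDs,
    # so re-delivered duplicates overwrite instead of creating new docs.
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA1"
    key => "some-static-key"   # placeholder; SHA1 here is keyed (HMAC)
  }
}
```

With this filter in place and document_id re-enabled, Elasticsearch would index duplicate deliveries under the same _id rather than as separate documents.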
Logs filtered from Kibana:

Time                              _id                   offset     message                                      beat.hostname  source
November 24th 2017, 22:27:01.660  AV_vJW6qCJvg6CObUTjK  1,183,688  [37mDEBU[0m[29692031] Calling GET /version  appserver1     /HOST/var/log/upstart/docker.log
November 24th 2017, 22:27:01.660  1950307853            1,183,759  [37mDEBU[0m[29692033] Calling GET /info     appserver1     /HOST/var/log/upstart/docker.log
November 24th 2017, 22:27:01.660  AV_vKOUJCJvg6CObVNT7  1,183,759  [37mDEBU[0m[29692033] Calling GET /info     appserver1     /HOST/var/log/upstart/docker.log
November 24th 2017, 22:27:01.660  AV_vK54rCJvg6CObV5pW  1,183,759  [37mDEBU[0m[29692033] Calling GET /info     appserver1     /HOST/var/log/upstart/docker.log
November 24th 2017, 22:27:01.660  AV_vJWG2cd_DiBBbhmf_  1,183,759  [37mDEBU[0m[29692033] Calling GET /info     appserver1     /HOST/var/log/upstart/docker.log