We run Pivotal Cloud Foundry as our platform and ship its logs to an ELK stack. The setup is: an F5 load balancer sends logs to 3 Logstash servers, and those 3 Logstash servers send logs to 3 Elasticsearch servers and one Kibana server.
The problem is that the same value is repeated 3 times within a single document. For example, the syslog5424_host field contains the host value 3 times, and the same tripling shows up in syslog5424_pri, syslog5424_sd, syslog5424_proc, etc.
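The affected fields come through as arrays of the same value repeated 3 times, roughly like this (field values are anonymized placeholders):

{
  "syslog5424_host": ["my-app-host", "my-app-host", "my-app-host"],
  "syslog5424_pri": ["14", "14", "14"],
  "syslog5424_proc": ["[APP/0]", "[APP/0]", "[APP/0]"],
  "syslog5424_msg": "..."
}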
I tried shutting down 2 of the 3 Logstash servers and the issue disappeared, but as soon as I start all 3 again, the same repeating pattern comes back.
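To narrow this down, I'm thinking of tagging each event with the Logstash node that processed it; a minimal sketch, where "ls-01" is just a per-server placeholder:

filter {
  mutate {
    # "ls-01" is a placeholder; each Logstash server would set its own value
    add_field => { "logstash_node" => "ls-01" }
  }
}

The idea is that if the filter section is somehow being applied more than once per event, this field should also come out repeated in the document.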
Logstash config:
input {
  tcp {
    port => 10514
    type => syslog
  }
  udp {
    port => 10514
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOG5424PRI}%{NONNEGINT:syslog5424_ver} +(?:%{TIMESTAMP_ISO8601:syslog5424_ts}|-) +(?:%{HOSTNAME:syslog5424_host}|-) +(?:%{NOTSPACE:syslog5424_app}|-) +(?:%{NOTSPACE:syslog5424_proc}|-) +(?:%{WORD:syslog5424_msgid}|-) +(?:%{SYSLOG5424SD:syslog5424_sd}|-|) +%{GREEDYDATA:syslog5424_msg}" }
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
    if !("_grokparsefailure" in [tags]) {
      mutate {
        replace => [ "@source_host", "%{syslog_hostname}" ]
        replace => [ "@message", "%{syslog_message}" ]
      }
    }
    mutate {
      remove_field => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
    }
  }
}
output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => [ "10.64.20.85:9200","10.64.20.86:9200","10.64.20.87:9200" ]
      index => "logstash-%{+yyyy.MM.dd}"
    }
  }
}
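In case each Logstash server is receiving its own copy of the same event from the F5, I'm also considering deduplicating at index time with the fingerprint filter, using the hash as the Elasticsearch document ID so identical events overwrite one another instead of indexing separately; a rough sketch (the key is just a placeholder):

filter {
  fingerprint {
    # hash the raw message so identical events produce identical IDs
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA1"
    key => "dedup-key"
  }
}
output {
  elasticsearch {
    hosts => [ "10.64.20.85:9200","10.64.20.86:9200","10.64.20.87:9200" ]
    index => "logstash-%{+yyyy.MM.dd}"
    # identical fingerprints collapse into a single document
    document_id => "%{[@metadata][fingerprint]}"
  }
}

I haven't tested whether this is the right fix here, though, since the repetition is inside a single document rather than across documents.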
Any tips on where I should look?