I don't know exactly what information is needed, but I've started sending syslog input into Logstash (and therefore into ELK).
My input simply specifies a port (a rough sketch of it is included after the filter below), and the filter contains the following:
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
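For completeness, the input really is nothing more than a port. I haven't copied it verbatim, but it's roughly along these lines (the port number and the tcp/udp plugin choice here are just a minimal illustration, not my exact config):

input {
  # Listen for syslog traffic on a single port; the port is only an example.
  tcp {
    port => 5514
    type => "syslog"
  }
  udp {
    port => 5514
    type => "syslog"
  }
}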
I then added the following inside the syslog_pri block:
syslog_pri {
  add_field => { "[@metadata][type]" => "syslog" }
  add_field => { "[@metadata][beat]" => "syslog" }
}
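(In case having two separate add_field settings in one plugin is an issue, I assume the same thing could also be written as a single hash, something like:)

syslog_pri {
  add_field => {
    "[@metadata][type]" => "syslog"
    "[@metadata][beat]" => "syslog"
  }
}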
Here's what the output looks like:
output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    http_compression => true
    sniffing => true
    sniffing_delay => 20
  }
}
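To check whether the metadata fields are being set at all, I figure I can temporarily dump events to stdout alongside the elasticsearch output; as far as I know, rubydebug hides @metadata unless you ask for it, so something like this:

output {
  # Temporary debug output: rubydebug normally hides @metadata,
  # so request it explicitly.
  stdout {
    codec => rubydebug { metadata => true }
  }
}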
Neither of those syslog_pri options seems to have helped. The events don't get a _grokparsefailure tag, and they do have syslog-ish fields like facility and severity, so I assume they are being picked up and run through the filter; it's just that the output index is literally the string "%{[@metadata][beat]}-2019.05.28", rather than the "syslog-2019.05.28" I'd expect if [@metadata][beat] were actually being set.
I left the date part in to make it clear I'm not scrubbing the name in any way: that is literally the index name.