I'm sorry if the answer turns out to be trivial, but I've now spent countless hours reading everywhere trying to solve this.
I'm sending syslog messages from rsyslog to Logstash over TCP with this input:
input {
  tcp {
    host => "127.0.0.1"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}
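For context, the rsyslog side is the usual JSON forwarding setup, roughly like this (a sketch reconstructed from the common tutorials, so the template name and exact field list are assumptions on my part, but the fields line up with what I see in Kibana below):

# /etc/rsyslog.d/60-logstash.conf (sketch; adapted from common examples)
template(name="json-template" type="list") {
  constant(value="{")
  constant(value="\"@timestamp\":\"")    property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"@version\":\"1")
  constant(value="\",\"message\":\"")    property(name="msg" format="json")
  constant(value="\",\"host\":\"")       property(name="hostname")
  constant(value="\",\"severity\":\"")   property(name="syslogseverity-text")
  constant(value="\",\"facility\":\"")   property(name="syslogfacility-text")
  constant(value="\",\"timestamp\":\"")  property(name="timereported" dateFormat="rfc3339")
  constant(value="\",\"syslog-tag\":\"") property(name="syslogtag")
  constant(value="\"}")
}
# Forward everything to the Logstash TCP input above
*.* action(type="omfwd" target="127.0.0.1" port="10514" protocol="tcp" template="json-template")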
I'm also copying the logs to flat files for debugging/history; here's a sample line:
May 7 14:50:08 core postfix/smtpd[4180]: disconnect from unknown[149.56.0.30]
Logstash is configured to process the syslog messages as described in the official configuration examples:
filter {
  if [type] == "rsyslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
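For what it's worth, the grok pattern can be tested in isolation with a throwaway pipeline like this (a minimal sketch: it feeds whatever is typed on stdin through the same pattern and prints the parsed event):

input { stdin { } }
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
}
output { stdout { codec => rubydebug } }

Pasting the flat-file line from above into it shows whether the pattern itself matches that shape of line.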
However, when inspecting the events in Kibana, the syslog message processing seems to fail:
May 7th 2017, 14:50:08.896
host: core
severity: info
@timestamp: May 7th 2017, 14:50:08.896
port: 34,668
@version: 1
message: disconnect from unknown[149.56.0.30]
type: rsyslog
facility: mail
syslog-tag: postfix/smtpd[4180]:
timestamp: May 7th 2017, 14:50:08.000
tags: _grokparsefailure
_id: AVvi9cOM40sPYnZsVv-C
_type: rsyslog
_index: logstash-2017.05.07
_score: -
The _grokparsefailure tag obviously says I'm going wrong somewhere...
The Logstash logs don't say much either.
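For completeness, I can also run Logstash in the foreground with the verbosity turned up, like this (paths assume the standard Ubuntu .deb install):

# debug logging; --path.settings points at the .deb's settings directory
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash --log.level debug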
Could it be some default configuration I don't know about, or something specific I've missed?
I'm running Logstash 5.4.0, Elasticsearch 5.4.0, and Kibana 5.4.0 on Ubuntu 16.04 x86-64.