I have the following Logstash config:

```
input {
  udp {
    type => "syslog"
    port => 5140
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```
```
output {
  if [@fields][logsource] == "foobar" {
    elasticsearch {
      hosts => "localhost:9200"
      user => "elastic"
      password => "XXXX"
      index => "foobar-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch {
      hosts => "localhost:9200"
      user => "elastic"
      password => "XXXX"
      index => "non-foobar-%{+YYYY.MM.dd}"
    }
  }
}
```
This is supposed to achieve the following:
server foobar sends syslog messages to syslog-server-1 and syslog-server-2;
servers syslog-server-1 and syslog-server-2 have syslog-ng configured to forward any syslog received over the network to my ELK host;
the ELK host runs Elasticsearch, Logstash, and Kibana.
The config above is supposed to store foobar's syslog in the foobar index and everything else in the non-foobar index.
What I can see so far is that the foobar index is never created, no matter what I try.
Where does this come from?

A good practice when debugging Logstash configurations is to enable a stdout output with the rubydebug codec so that you can see exactly what data your events contain. Then you can see which fields are actually available and design your conditional logic accordingly.
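For example, a temporary debug output (a sketch; keep or remove your elasticsearch outputs as needed while testing) could look like:

```
output {
  stdout { codec => rubydebug }
}
```

Once you can see the event structure, you will most likely find that there is no `[@fields][logsource]` field on your events — that notation looks like pre-1.2 Logstash syntax. With the grok pattern in your filter, the sending host is captured into `syslog_hostname`, so presumably the condition you want is along the lines of:

```
if [syslog_hostname] == "foobar" {
  ...
}
```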