input {
  udp {
    port => 25826
    buffer_size => 1452
    codec => collectd { }
    type => "collectd"
  }
}

output {
  if [type] == "collectd" {
    elasticsearch {
      index => "collectd-%{+YYYY.MM.dd}"
      hosts => ["127.0.0.1"]
    }
  }
}
When I pull up the collectd index in Kibana, I see that all the documents have a _grokparsefailure tag. The Logstash log file shows nothing.
I don't understand why I am seeing this, since this config doesn't use the grok plugin. There is another config file I'm using that does use grok, but it writes to a different index and its documents don't have the _grokparsefailure tag.
Is your other configuration file used in the same Logstash instance? If so, you need to add conditionals to select which filters should apply to which events. Otherwise, all outputs and all filters will apply to all events.
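For example, a minimal sketch of such a conditional (the "syslog" type and the COMBINEDAPACHELOG pattern are illustrative placeholders, not taken from your config):

filter {
  # Run grok only against events from the other pipeline; collectd
  # events (type == "collectd") skip this filter entirely, so a
  # non-matching pattern can no longer tag them _grokparsefailure.
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}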
As @magnusbaeck said, if you start Logstash with sudo /etc/init.d/logstash start, pay attention to the -f argument if you have multiple config files:
Usage:
    /bin/logstash agent [OPTIONS]

Options:
    -f, --config CONFIG_PATH    Load the logstash config from a specific file
                                or directory. If a directory is given, all
                                files in that directory will be concatenated
                                in lexicographical order and then parsed as a
                                single config file. You can also specify
                                wildcards (globs) and any matched files will
                                be loaded in the order described above.
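So if your init script points -f at a directory such as /etc/logstash/conf.d (a common packaged default, but check your own setup), every file in it is merged into one pipeline. A hypothetical illustration:

# Both config files in the directory are concatenated into a single
# pipeline, so the grok filter from the other file also runs on your
# collectd events unless it is wrapped in a conditional like the one
# shown above.
/bin/logstash agent -f /etc/logstash/conf.d/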