_grokparsefailure with collectd

I have a simple config file:

input {
  udp {
    port => 25826
    buffer_size => 1452
    codec => collectd { }
    type => "collectd"
  }
}
output {
  if [type] == "collectd" {
    elasticsearch {
      index => "collectd-%{+YYYY.MM.dd}"
      hosts => ["127.0.0.1"]
    }
  }
}

When I pull up the collectd index in Kibana, I see that all the documents have a _grokparsefailure tag. The Logstash log file shows nothing.

I don't understand why I am seeing this, since this config doesn't use the grok plugin. There is another config file I'm using that does use grok, but it writes to a different index, and its documents don't have the _grokparsefailure tag.

Is your other configuration file used in the same Logstash instance? If so, you need to add conditionals to select which filters should apply to which events. Otherwise, all filters and all outputs will apply to all events.
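
For example, here's a minimal sketch of what that could look like in the other config file. The "syslog" type and the %{SYSLOGLINE} pattern are just placeholders for whatever that file actually handles:

filter {
  # Run grok only on events from the other input;
  # collectd events have type "collectd" and skip this block.
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
  }
}

With that in place, collectd events never reach the grok filter, so they can't pick up the _grokparsefailure tag.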

As @magnusbaeck said, if you start Logstash with something like sudo /etc/init.d/logstash start, you should pay attention to the -f argument if you have multiple config files.

Usage:
    /bin/logstash agent [OPTIONS]

Options:
    -f, --config CONFIG_PATH      Load the logstash config from a specific file
                                  or directory.  If a directory is given, all
                                  files in that directory will be concatenated
                                  in lexicographical order and then parsed as a
                                  single config file. You can also specify
                                  wildcards (globs) and any matched files will
                                  be loaded in the order described above.
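
In other words, if -f points at a directory (as the init scripts typically do), every file in it is merged into one pipeline, which is why filters from one file can hit events from another. As a quick sanity check you can run a single config file by itself; the paths below are assumptions for a typical package install, so adjust them for your setup:

/opt/logstash/bin/logstash agent -f /etc/logstash/conf.d/collectd.conf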

@magnusbaeck was right. I added an if conditional around the grok filter in my other config, and that's taken care of the problem.

I'm on a yum installation running CentOS 6, so I use "service logstash start". It's set up to use whatever's in /etc/logstash/conf.d/ for configs.

Thanks for the suggestions.