_grokparsefailure when not using grok

I'm using the file input to ingest consul-template logs:

  input {
    file {
      type => "consul-template"
      path => ["/var/log/consul-template.log"]
      add_field => { "name" => "consul-template" }
    }
  }

Every log message gets tagged with _grokparsefailure -- but I'm not using grok anywhere. What could be causing that?

The rest of the pipeline on the host with the logs:

  output {
    rabbitmq {
      host => "rabbitmq.service.ops.consul"
      port => {{rabbitmq_port}}
      user => "{{elk_rabbitmq_logstash_user}}"
      password => "{{elk_rabbitmq_logstash_password}}"
      vhost => "logstash"
      exchange => "logs_json"
      exchange_type => "direct"
      durable => true
    }
  }

The pipeline on the ingest node:

  input {
    rabbitmq {
      host => "localhost"
      port => {{rabbitmq_port}}
      user => "{{elk_rabbitmq_logstash_user}}"
      password => "{{elk_rabbitmq_logstash_password}}"
      vhost => "logstash"
      exchange => "logs_json"
      queue => "logs_json"
      threads => 3
      codec => json
    }
  }

  output {
    elasticsearch {
      hosts => "{{ elasticsearch_consul_service }}"
      workers => 6
      manage_template => true
      template => "{{logstash_es_templates_dir}}/es-logstash-mappings-template.json"
      template_name => "logstash-vimana"
      template_overwrite => true
    }
  }

Could it be the codec => json in the rabbitmq input?

Logstash reads and merges every configuration file in the directories it's pointed at. Do you have any files in /etc/logstash/conf.d that you've forgotten about? Perhaps a file with a grok filter?
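For example, a quick way to check is to grep the whole config directory. The leftover file below is purely hypothetical (it just stands in for a forgotten config), and /etc/logstash/conf.d is the default package location -- adjust the path to wherever your configs actually live:

```shell
# Hypothetical forgotten config containing a grok filter, created only so
# this example is self-contained (skip this setup on a real host):
mkdir -p /tmp/conf.d
cat > /tmp/conf.d/15-old-filter.conf <<'EOF'
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
}
EOF

# Search every file Logstash would load for a grok filter; on a real host,
# point this at /etc/logstash/conf.d (or your actual config directory):
grep -rn "grok" /tmp/conf.d/
```

Because Logstash concatenates all files in the directory into one pipeline, a grok filter in any of them applies to every event from every input.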

Nothing that I can find. All our Logstash filters are provisioned by Ansible, and the play deletes any configs it no longer manages. I'll keep poking around, though.

Hmm. Logstash dumps its configuration at startup if you start it with --debug. Grepping that output for "grok" might be useful.
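Something along these lines; the binary path assumes a package install and the captured fragment below is NOT real Logstash output, just a hypothetical stand-in showing the kind of hit the grep would surface:

```shell
# On the real host, the invocation would look something like this
# (adjust the binary path and config directory for your layout):
#   /opt/logstash/bin/logstash --debug -f /etc/logstash/conf.d 2>&1 | grep -in "grok"

# Hypothetical captured fragment of a --debug config dump -- a stand-in,
# not actual Logstash output -- so the grep below has something to find:
cat > /tmp/logstash-debug.log <<'EOF'
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
EOF
grep -in "grok" /tmp/logstash-debug.log
```

If the grep matches anything, the surrounding lines of the dump show which file the filter came from.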