Filter + if results in “can't convert Array into String” error

We have beaver shipping logs to logstash. Two of the log sources are nginx error logs and haproxy logs. beaver adds a tag for the log type, and we'd like to parse each log according to its type. We wrote the following configuration file, but when logstash processes an event we get the error "can't convert Array into String" (logstash -t -f logstash.conf says the configuration is OK).

Any ideas how to fix this?

Here's logstash.conf

input {
    udp {
        port => 25826
        buffer_size => 2048
        codec => json
    }
}

filter {
    if "nginx-error" in [tags] {
        grok {
            match => {
                # 2015/12/24 14:27:38 [error] 8#0: *43449 upstream timed ...
                "message" => "%{DATESTAMP:timestamp} \[%{DATA}\] %{GREEDYDATA:message}"
            }
            overwrite => [ "message" ]
            add_field => {
                "levelname" => "ERROR"
                "levelno" => 20
            }
        }
    }

    if "haproxy-log" in [tags] {
        grok {
            match => {
                # [WARNING] 005/130716 (9) : Server app/app1 is ...
                "message" => "\[%{DATA:levelname}\] %{GREEDYDATA:message}"
                overwrite => [ "message" ]
                add_field => {
                    "levelname" => "%{levelname}"
                    "orig_levelname" => "%{levelname}"
                }
            }
        }
        mutate {
            gsub => [
                # Change ALERT to ERROR for easy query
                "levelname", "ALERT", "ERROR"
            ]
        }
    }
}

output {
    stdout {
        codec => rubydebug
    }
}
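For reference, grok's %{DATA} behaves like a non-greedy `.*?` and %{GREEDYDATA} like `.*`, so the haproxy pattern in the config above can be approximated with a plain Python regex. This is only a sketch of what the pattern extracts, not how grok is implemented:

```python
import re

# Rough Python equivalent of the haproxy grok pattern
# "\[%{DATA:levelname}\] %{GREEDYDATA:message}":
# %{DATA} -> non-greedy .*?  and  %{GREEDYDATA} -> greedy .*
HAPROXY_PATTERN = re.compile(r"\[(?P<levelname>.*?)\] (?P<message>.*)")

def parse_haproxy(line):
    """Return a dict with levelname and message for a matching line, else None."""
    m = HAPROXY_PATTERN.match(line)
    return m.groupdict() if m else None
```

For example, parse_haproxy("[WARNING] 005/130716 (9) : Server app/app1 is going DOWN") extracts levelname "WARNING" and the rest of the line as message.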

Is "can't convert Array into String" the full error message? What does the event that causes it look like?

I'm running logstash via docker. The output when running with -v is large; you can see it at https://gist.github.com/tebeka/806c36fa5f62e2f8b366.

The error happens after one message is sent via UDP. The message is:

{"tags": ["haproxy-log"], "@version": 1, "@timestamp": "2016-01-10T09:27:24.650Z", "argos_env": "dev", "host": "7e700d0b8c50", "file": "/var/log/haproxy/haproxy.log", "message": "[WARNING] 005/130716 (9) : config : missing timeouts for backend 'app'.\n   | While not properly invalid, you will certainly encounter various problems\n   | with such a configuration. To fix this, please ensure that all following", "type": "file"}

After the error message the docker container exits.
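For anyone trying to reproduce this, an event like the one above can be shipped to the udp input (port 25826, codec => json, per the config in the question) with a few lines of Python. This is a minimal sketch; the host and port are assumptions matching the config:

```python
import json
import socket

def send_event(event, host="127.0.0.1", port=25826):
    """Serialize the event dict to JSON and ship it as one UDP datagram,
    the way beaver delivers events to logstash's udp/json input."""
    payload = json.dumps(event).encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()

# The triggering event from the question (message shortened here)
event = {
    "tags": ["haproxy-log"],
    "@version": 1,
    "host": "7e700d0b8c50",
    "file": "/var/log/haproxy/haproxy.log",
    "message": "[WARNING] 005/130716 (9) : config : missing timeouts for backend 'app'.",
    "type": "file",
}
```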

Hmm. Nothing obviously wrong as far as I can tell. I'd try commenting out parts of the configuration to narrow down what causes this.

Hello tebeka, the error message is indeed not clear enough, but your case was easy to reproduce.
In fact you have a typo in your grok config for haproxy-log.
The overwrite and add_field options are inside the match config, but they should be outside it, as in your nginx config.
I spotted it thanks to the correct indentation of your file.
So when grok tries to interpret the hash in add_field => {...} as a matching rule, it fails.

By the way, in your haproxy filter you set the levelname field twice, once in the match and once in add_field; that seems unnecessary.
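Putting those two points together, the haproxy block would look something like this (a sketch of the corrected config, with the redundant levelname line in add_field dropped):

    if "haproxy-log" in [tags] {
        grok {
            match => {
                # [WARNING] 005/130716 (9) : Server app/app1 is ...
                "message" => "\[%{DATA:levelname}\] %{GREEDYDATA:message}"
            }
            overwrite => [ "message" ]
            add_field => {
                "orig_levelname" => "%{levelname}"
            }
        }
        mutate {
            gsub => [
                # Change ALERT to ERROR for easy query
                "levelname", "ALERT", "ERROR"
            ]
        }
    }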

Thanks @wiibaa!

Happy for you!
I opened https://github.com/logstash-plugins/logstash-filter-grok/issues/70 to see if the error message can be improved.