Indexes being created as "%{[@metadata][beat]}-2017.05.26" instead of "filebeat-2017.05.26"

I am having a similar issue to this thread, but the solution there does not appear to apply to my case, and I have not been able to figure out how to fix it. I'm actually starting to think the issue may be with Filebeat rather than Logstash.

The only inputs I have are from 17 different servers running Filebeat on generic log files. The Filebeat config on each is basically the same; the only thing that changes between servers is the path. Here is an example:

filebeat.prospectors:
- input_type: log
  paths:
    - c:\Program Files\Clarity\*\Logs-Debug\*.log
  tags: ["clarity_web"]
  ignore_older: 24h
  close_inactive: 1h
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
output.logstash:
  hosts: ["logstash01:5044"]

And here is my logstash config:

input {
  beats {
    port => 5044
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
  }
}
filter {
  if [source] =~ "Clarity" {
    mutate {
      add_field => { "original_message" => "%{message}" }
    }
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:logtimestamp}%{SPACE}%{SYSLOG5424SD:threadinfo}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{PROG:program}%{SPACE}%{NOTSPACE}%{SPACE}%{NOTSPACE}%{SPACE}%{GREEDYDATA:message}"}
      overwrite => ["message"]
    }
    date {
      match => ["logtimestamp","ISO8601"]
      target => "@timestamp"
    }
    mutate {
      remove_field => ["logtimestamp"]
    }
  }
  else {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:logtimestamp}%{SPACE}%{SYSLOG5424SD:threadinfo}%{SPACE}%{LOGLEVEL:loglevel}%{SPACE}%{PROG:program}%{SPACE}%{NOTSPACE}%{SPACE}%{NOTSPACE}%{SPACE}%{GREEDYDATA:message}"}
      overwrite => ["message"]
    }
    date {
      match => ["logtimestamp","ISO8601"]
      target => "@timestamp"
    }
    mutate {
      add_tag => ["filter_else"]
      remove_field => ["logtimestamp"]
    }
  }
}

output {
  elasticsearch {
    hosts => "elasticsearch01:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" 
    document_type => "%{[@metadata][type]}" 
  }
#  stdout { codec => rubydebug}
}

It seems that the data is being sent over from Filebeat without the usual metadata (source, beat.hostname, beat.name, beat.version, etc.), so my 'if' filter in Logstash gets skipped because there is no source field, and everything falls into the else branch. I have enabled the stdout output in Logstash, but I don't see anything helpful there, which is what leads me to believe Filebeat may be the culprit.
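One note in case it helps anyone else debugging this: the rubydebug codec does not print the @metadata fields by default, so the stdout output above will never actually show whether [@metadata][beat] is present on the event. To see it, the metadata option has to be enabled explicitly (this is a standard rubydebug option, not anything specific to my setup):

```conf
output {
  stdout {
    # metadata => true makes rubydebug print the normally hidden @metadata fields
    codec => rubydebug { metadata => true }
  }
}
```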

Any known issues with this, or suggestions on how to troubleshoot it further? Also, pardon my configs if they are not implemented in the best possible way; this is my first deployment, and any suggestions would be appreciated.

Thanks in advance!

Try removing the multiline codec from the beats input in your Logstash config. The multiline codec is not meant to be used with the beats input: it operates on the combined stream from all connected Beats and can mangle per-event fields, including the @metadata Filebeat adds. Since your Filebeat config already handles multiline, you don't need it in Logstash at all.
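With multiline handled on the Filebeat side, the stripped-down input would be just:

```conf
input {
  beats {
    port => 5044
  }
}
```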

That is how I originally had it set up, and I was still seeing the issue. I'll try it again though and make sure.
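In the meantime, as a stopgap I've considered defaulting the field in the filter section so the literal %{[@metadata][beat]} index at least never gets created. Sketch only; it masks whatever is dropping the metadata rather than fixing it:

```conf
filter {
  # Fallback only: supplies a hard-coded value when the Beat metadata is missing
  if ![@metadata][beat] {
    mutate {
      add_field => { "[@metadata][beat]" => "filebeat" }
    }
  }
}
```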

This topic was automatically closed after 21 days. New replies are no longer allowed.