I am having trouble using sprintf to reference the event fields in the elasticsearch output plugin and I'm not sure why. Below is the event received from Filebeat and sent to Elasticsearch after filtering is complete:
{
    "beat" => {
        "hostname" => "ca86fed16953",
        "name" => "ca86fed16953",
        "version" => "6.5.1"
    },
    "@timestamp" => 2018-12-02T05:13:21.879Z,
    "host" => {
        "name" => "ca86fed16953"
    },
    "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
    ],
    "fields" => {
        "env" => "DEV"
    },
    "source" => "/usr/share/filebeat/dockerlogs/logstash_DEV.log",
    "@version" => "1",
    "prospector" => {
        "type" => "log"
    },
    "bgp_id" => "42313900",
    "message" => "{<some message here>}",
    "offset" => 1440990627,
    "input" => {
        "type" => "log"
    },
    "docker" => {
        "container" => {
            "id" => "logstash_DEV.log"
        }
    }
}
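For reference, the env value under fields is added on the Filebeat side. My filebeat.yml has roughly this (reconstructed from memory, so the exact paths may differ):

filebeat.inputs:
- type: log
  paths:
    - /usr/share/filebeat/dockerlogs/*.log
  fields:
    env: DEV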
I am trying to index these logs based on Filebeat's environment (the fields.env value above). Here is my config file:
input {
    http { }
    beats {
        port => 5044
    }
}

filter {
    grok {
        patterns_dir => ["/usr/share/logstash/pipeline/patterns"]
        break_on_match => false
        match => { "message" => ["%{RUBY_LOGGER}"] }
    }
}

output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{[fields][env]}-%{+yyyy.MM.dd}"
    }
    stdout { codec => rubydebug }
}
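For what it's worth, here is a check I have been thinking about adding (it is not in the config above) to tag any event that reaches Logstash without [fields][env] set, since as far as I understand sprintf leaves the reference as literal text in the index name when the field is missing. The env_missing tag name is just a placeholder I made up for this sketch:

filter {
    if ![fields][env] {
        mutate { add_tag => ["env_missing"] }
    }
}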
I would think the referenced event fields would already be populated by the time the event reaches the elasticsearch output plugin. However, on the Kibana end, the formatted index is not registered. Instead, it comes out like this:
What have I done wrong?