I have a Logstash pipeline designed to split container logs coming from CoreOS into separate indices, but it's not quite working.
My pipeline looks like this:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "source" => "%{GREEDYDATA}/%{CONTAINERAPP:prefix}*" }
    patterns_dir => ["/patterns"]
  }
  ......
  translate {
    add_tag => [ "monitoring" ]
    field => "prefix"
    regex => true
    exact => true
    dictionary_path => "/dict/monitoring.yml"
    destination => "container"
  }
  ......
  [Several more blocks like this, adding tags to different groups of "prefix" fields]
  ......
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    manage_template => false
    index => "logtest-%{[@metadata][container]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
I'm trying to use the syntax suggested here to get Logstash to write each event to an index based on the name of its container. However, what I'm ending up with are indices named with the literal string rather than the contents of the "container" field, which is what I'm after:
$ curl -v elasticsearch:9200/_cat/indices?v | grep logtest
.....
green open logtest-%{[@metadata][container]}-2017.05.02 unVVBghgRra974trYtDPJA 5 1 9168 0 18.9mb 9.5mb
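One thing I'm unsure about: translate writes its result to the top-level field "container" (via destination), while the index setting references "[@metadata][container]", and I don't know whether that mismatch is the problem. A sketch of a workaround I've been considering, untested, would copy the field into @metadata with mutate:

```
filter {
  mutate {
    # untested: copy the top-level "container" field (written by
    # translate's destination setting) into @metadata so that the
    # output's %{[@metadata][container]} reference can resolve
    add_field => { "[@metadata][container]" => "%{container}" }
  }
}
```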
This doesn't happen with certain other fields. For instance, up until now I have had the "type" field in the pipeline config:
.......
index => "logtest-%{[@metadata][type]}-%{+YYYY.MM.dd}"
.......
and have ended up with indices like:
.......
green open logtest-log-2017.04.06 EzNg2CMKSx-mMMpbDVF-9g 5 1 116131362 0 202.1gb 101gb
.......
Thanks in advance.