I've set up a prospector in Filebeat that looks as follows:
paths:
  - /mycompany/somedata/*
fields:
  service: dataevent
  env: int
  appName: Myapp
I'd like to route any item written to /mycompany/somedata/* to Elasticsearch, S3, and stdout, so I've set up the following output rule in my Logstash config:
output {
  if [fields.service] == "dataevent" {
    elasticsearch {
      hosts => "search-mycompany-elk-cert-blahblahblah.us-east-1.es.amazonaws.com:80"
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
    s3 {
      region => "us-east-1"
      bucket => "els-mycompany-mydata"
      size_file => 2048
      time_file => 5
    }
    stdout {}
  }
}
The els-mycompany-mydata S3 bucket exists, and I have installed the logstash-output-amazon_es plugin, but nothing is written to my S3 bucket or to stdout.
If I remove the condition, data appears in my S3 bucket and output goes to /var/log/logstash/stdout/logstash.log. The problem is that this routes too much data; I really only want the Filebeat records whose service field matches "dataevent" to take this path.
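In case it helps, I can inspect the event structure with a bare, unconditional stdout output like this (rubydebug is Logstash's standard pretty-printing codec; it prints the full event, including how the Filebeat fields end up nested):

output {
  stdout { codec => rubydebug }
}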
So my question is: how can I get Logstash to match events whose fields.service value is "dataevent"?
I've tried setting fields_under_root to both false and true, and neither seems to affect the outcome.
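For reference, the two prospector variants I tried look roughly like this; only the fields_under_root line changed between attempts:

paths:
  - /mycompany/somedata/*
fields_under_root: false   # also tried true
fields:
  service: dataevent
  env: int
  appName: Myapp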
Thanks in advance.