I have a service deployed to ECS which is basically an nginx instance.
I want to ingest the logs using Filebeat. I can do this using the `aws-cloudwatch` input type, but it doesn't grok the `message` field the way the nginx module does.
Is it somehow possible to push aws cloudwatch logs through the nginx pipeline?
I've tried munging the pipeline from the nginx module into the aws-cloudwatch one and then forcing the pipelines to be updated, like so:

```shell
filebeat setup --pipelines --modules aws/cloudwatch
```
The command ran successfully, but ultimately it didn't work.
Actually, most of the data I want out of these logs I can get from a dissect processor, but I'd be loath to miss out on the extra data that the nginx pipeline adds.
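For context, the dissect fallback I mean would look something like the sketch below. This is illustrative only: the tokenizer assumes nginx's default combined log format, and the field names are made up rather than the ECS fields the nginx module produces.

```yaml
processors:
  - dissect:
      # Parse an nginx combined-format access line out of the raw message.
      # Field names here are placeholders, not the nginx module's ECS mapping.
      tokenizer: '%{source_ip} - %{user} [%{timestamp}] "%{method} %{path} HTTP/%{http_version}" %{status} %{bytes} "%{referrer}" "%{user_agent}"'
      field: "message"
      target_prefix: "nginx.access"
```

This gets the basic fields, but none of the geoip/user-agent enrichment the module's ingest pipeline does.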
It looks like I should be able to specify the nginx pipeline in the `output.elasticsearch` part of my config, but when I try this it doesn't seem to work:

```yaml
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  pipelines:
    - pipeline: "filebeat-7.12.0-awscloudwatch-nginx-access-pipeline"
      # Ignore filebeat log stream
```
"It doesn't seem to work" as in the new fields that should be generated by the pipeline aren't being created.
There are no error fields either to indicate that something went wrong.
There are no errors in the filebeat logs (although I wasn't expecting any?)
Any idea how I can debug this?
When I test the pipeline using a document indexed from awscloudwatch by filebeat, the result looks good... so why isn't the `output.elasticsearch` `pipelines` setting working?
Looking at this: Ingest pipeline not working for filebeat. It seems that because I'm using the aws-cloudwatch module, I can't specify another pipeline in `output.elasticsearch`: the module's own pipeline takes precedence. In order to apply their solution, though, I'll need to set up my own index with its own index settings so that it doesn't affect other Filebeat-ingested data.
OK, for any other poor sods who came here trying to do the same thing, here's how I got it working:
First, you need to find the filebeat-[version]-nginx-access-pipeline and Clone it. Name it something like filebeat-[version]-awscloudwatch-nginx-access-pipeline.
When editing the clone, change the failure processors: remove the Set processor and add a Pipeline processor targeting the filebeat-[version]-nginx-error-pipeline. I didn't put any conditions in, but you could, if you're confident with how that works.
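In JSON terms, the cloned pipeline's failure handling ends up roughly like this (the version number is just an example):

```json
{
  "on_failure": [
    {
      "pipeline": {
        "name": "filebeat-7.12.0-nginx-error-pipeline"
      }
    }
  ]
}
```

The point of the swap: the stock on_failure just Sets an error field, whereas a Pipeline processor re-routes lines that aren't access-log format into the error pipeline instead.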
Next, update your filebeat config to include:
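The exact snippet isn't preserved here; a minimal sketch, assuming you route these events to a dedicated index (index and template names are placeholders):

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]
  # Route cloudwatch-sourced events to their own index so the
  # index template in the next step only affects them.
  index: "filebeat-awscloudwatch-%{[agent.version]}"

# A custom index name requires overriding the template settings.
setup.template.name: "filebeat-awscloudwatch"
setup.template.pattern: "filebeat-awscloudwatch-*"
```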
Now, if you're like me, and you're working with an existing filebeat index template that you now can't edit, or you're pushing stuff in from filebeats across your architecture and you don't want to risk any nasty side-effects, you can configure a new index template like so:
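The original template isn't preserved; roughly, it attaches the cloned pipeline to just the dedicated index pattern (names and version are placeholders). I've used `index.final_pipeline` here on the assumption that the module already sets a request-level pipeline, which would bypass a `default_pipeline`; if it doesn't in your setup, `index.default_pipeline` may be what you want instead:

```
PUT _index_template/filebeat-awscloudwatch
{
  "index_patterns": ["filebeat-awscloudwatch-*"],
  "template": {
    "settings": {
      "index.final_pipeline": "filebeat-7.12.0-awscloudwatch-nginx-access-pipeline"
    }
  }
}
```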
Now, when cloudwatch logs get indexed into that index by filebeat's awscloudwatch module, they'll go through the filebeat-[version]-awscloudwatch-nginx-access-pipeline first; when the grok fails, it'll send the event on to the filebeat-[version]-awscloudwatch-nginx-error-pipeline, and you'll get all the lovely enrichments that you always wanted.
YOU ARE WELCOME