I am having an issue with logs ingesting into logs-aws.cloudfront_logs-* from a specific account, using Elastic Serverless Forwarder.
In one specific account and this account only, I am receiving the following error in Elasticsearch for events: Provided Grok expressions do not match field value.
When I take a raw log entry from the CloudWatch log group and run it through the POST _ingest/pipeline/logs-aws.cloudfront_logs-2.11.3/_simulate API, it parses as expected. When I copy the log entry from the error.message field in Kibana and run it through the same API, it also parses as expected. This only happens for events being sent from Elastic Serverless Forwarder for this log group in this account. I have verified that the log group is being ingested by Elastic Serverless Forwarder and that the logs are being sent to the correct index. I have also verified that this is NOT occurring for any other correctly-configured log groups in this account, OR for CloudFront logs in other accounts.
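For reference, the simulate call I'm running looks roughly like this (the message value below is just a placeholder for the actual raw CloudFront log line copied from CloudWatch):

POST _ingest/pipeline/logs-aws.cloudfront_logs-2.11.3/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "<raw CloudFront log line copied from CloudWatch>"
      }
    }
  ]
}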
This is my configuration for the log group in question:
We see this sometimes when a source, for some reason, does not match...
The only way anyone can help is if you provide a sample raw message so we can test it.
Most likely, you are going to need to add a new pattern to the pipeline...
Confused a bit
Is this on a message that has the error message "Grok expressions do not match"?
I am confused... are you saying that a message that shows the Grok error parses fine when you run it through the pipeline again? (That can happen for some very specific reasons... long story there, but it is unlikely.)
Can you share this??
Are you setting preserve original event? You should do that, and then use that field where the Grok fails... it can help with debugging...
Oh, this is from the serverless forwarder... hmm, not sure if there is a preserve original option there or not... we can edit the pipeline and take out the remove if needed...
I'm getting both error messages, in fact. But only when I send CloudFront logs from THIS ACCOUNT ONLY to Elastic Cloud. No other account shows this issue.
Additionally:
If I copy the raw event from CloudWatch and do a simulate to run it through that ingest pipeline, it processes the event as expected with no errors.
If I pull the original event out of the error.message field and do the same thing, I also get no errors. I haven't attempted to preserve the original event yet to test that.
As far as preserve original event goes, unless this integration is different from every other one I've looked at, all that switch does is add a preserve_original_event tag to the event, which prevents the remove processor in the pipeline from being triggered. I use this same method in a number of custom pipelines I've written. Additionally, it's a grok failure, not a failure to delete a field, so it doesn't seem likely that this is the issue.
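For illustration, the remove in these integration pipelines is usually conditional on that tag, something roughly like this (a paraphrased sketch of the common pattern, not copied from the actual 2.11.3 pipeline):

{
  "remove": {
    "field": "event.original",
    "if": "ctx?.tags == null || !(ctx.tags.contains('preserve_original_event'))",
    "ignore_missing": true,
    "ignore_failure": true
  }
}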
I'll see what I can do about pulling some raw events. There's a possibility that this MAY be related to another issue we're seeing in that account.
The point on preserving was to make sure the event.original is preserved
If it is, fine, that is solved.....
I hear ya... so you're gonna have to figure out what is going on... Something is different.
Does EVERY message from that account fail?
So what I would do is... something along the following lines.
Simulate does not ALWAYS expose every issue...
I would take the raw CloudFront message that seems to fail and actually post it as a document.
Something like these, to see what you see...
The first one should create a new data stream... see what you get.
The second will not use the template, but will run the pipeline directly.
POST logs-aws.cloudfront_logs-mytestnamespace/_doc
{
  "message" : "cloudfront raw message"
}

POST my-generic-index/_doc?pipeline=logs-aws.cloudfront_logs-2.11.3
{
  "message" : "cloudfront raw message"
}
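Then look at what actually got indexed (using the hypothetical namespace from the first example) and check the error.message field and the tags on the returned document:

GET logs-aws.cloudfront_logs-mytestnamespace/_search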