I have CloudWatch logs being sent to Elasticsearch from functionbeat. There is a field called "message" that contains all the information I want to perform visualizations on. I've spent an entire day trying to figure out how to use your KQL method to practice parsing this "message" field and I've failed miserably on every level.
As you'll see below, I think part of the problem is that everything inside this "message" field isn't being treated as JSON but instead as a single string. This is really complicating matters. I'm going to need guidance, and you're going to have to explain it to me like I'm 5. Perhaps the answer is to create an ingest pipeline, but I've no idea what that would look like. The answer could easily be a complex KQL query too.
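From skimming the docs, my rough understanding is that an ingest pipeline is just a JSON definition listing processors that run on each document before indexing. Something like this is what I imagine (the pipeline name, field names, and target field below are all made up by me for illustration, not from my actual setup):

```
PUT _ingest/pipeline/parse-message
{
  "description": "Sketch: parse the stringified JSON in the message field",
  "processors": [
    {
      "json": {
        "field": "message",
        "target_field": "message_parsed"
      }
    }
  ]
}
```

If that's roughly right, I'd then point my index (or functionbeat output) at this pipeline so the string gets expanded into real fields, but I'm not sure this is the correct approach.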
I'm removing junk data from this document that isn't important, indicated by the characters ...
Thanks Luca ~ I take it the Grok processor might be of interest to me then? I ask because there is a json processor as well. My confusion is that the JSON object that is in my CloudWatch logs is somewhere getting converted to a JSON string, it seems. I assume this is happening in functionbeat or Elasticsearch. Therefore, I'm confused about whether I should use the grok or json processor in my ingest pipeline. If I use a json processor then this may not work, since functionbeat treats it as a JSON string. Do you see where my confusion is?
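In case it helps frame the question: my understanding is that I can try a processor against a sample document with the simulate API before committing to anything. Here's roughly what I mean (the sample "message" contents below are invented for illustration, not my real log data):

```
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "json": {
          "field": "message",
          "target_field": "message_parsed"
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "{\"level\":\"INFO\",\"status\":200}"
      }
    }
  ]
}
```

If the json processor can turn a string like that into an object here, I assume it would handle the functionbeat case too, but that's the part I'm unsure about.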
That worked perfectly! However, I noticed that if the stringified JSON has several nested fields, then those nested fields do not get converted to a JSON object. Any way around that? Is this where a mapping schema would come in?
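In case anyone else hits this: the idea I'm considering is that the nested fields are themselves double-encoded (a JSON string inside the JSON string), so a second json processor pointed at the nested path might expand them. A rough sketch of what I mean, with placeholder field names ("message_parsed" and "detail" are assumptions, not my real fields):

```
PUT _ingest/pipeline/parse-message
{
  "description": "Sketch: parse message, then parse a double-encoded nested field",
  "processors": [
    {
      "json": {
        "field": "message",
        "target_field": "message_parsed"
      }
    },
    {
      "json": {
        "field": "message_parsed.detail",
        "target_field": "message_parsed.detail"
      }
    }
  ]
}
```

I haven't settled on this, so I'd still like to know whether chaining json processors like that is the intended approach or whether a mapping is the right fix.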