How to build a grok filter for only part of a log line / parse the same data multiple times for different fields

I apologize if this is not allowed; I am new to Kibana and am having issues parsing logs. I want to parse the log line below, but I am unsure exactly how to go about it.

2020-03-31T13:35:31.83186256Z stdout F 31-Mar-2020 13:35:31.831 INFO [s-d-Worker-0] | | [ae9db805-2a84-4431-9fba-6f3f83ea2178] | The header for the request is ,ApplHGFDQ209HG7F,111138,111138,f47dfe16-0043-43a1-8c2b-649dcda436d8,1312984,H8n0T....,eyJpI....,M0010:S08:3114049-f47dfe16-0043-43a1-8c2b-649dcda436d8

I have been looking at documentation on parsing logs, but I have a couple of questions. The only thing I care about is the comma-separated values at the end of the log line. I want to pull out the following fields:

DeviceID: ApplHGFDQ209HG7F
DeviceRequest: 111138
RequestID: ApplHGFDQ209HG7F,111138
GUID: f47dfe16-0043-43a1-8c2b-649dcda436d8
PolicyID: 1312984
AuthToken1: H8n0T....
AuthToken2: eyJpI....
PairID: M0010:S08:3114049-f47dfe16-0043-43a1-8c2b-649dcda436d8
BillingID: 3114049

My questions are:

  1. Can I parse out only the fields I care about? (Basically, ignore the first half of the line.)
  2. Can I parse the same data multiple times to get different fields?
  3. I am building this in the Kibana grok debugger. When I am done, how do I save it? I want all logs that contain this info to use this parser, so I can build a dashboard of customer info.

Googling to figure out how to build a parser for this has given me some interesting results. I would also appreciate any links you can send to me. I am building this in the Kibana grok debugger.

@ajmcateer, welcome to the forum!
The grok debugger in Kibana is primarily there for building and debugging grok patterns that are used at ingest time. If your log data hasn't already been parsed before ingest, I recommend reindexing it into another index and using a series of ingest pipeline processors to parse out the data you are interested in.
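As a rough sketch of what such a pipeline could look like (the pipeline name and field names here are illustrative, and it assumes the raw line lands in a `message` field, so adapt as needed): the first grok matches only the comma-separated tail of the line, a `set` processor builds the combined `RequestID`, and a second grok runs over the `PairID` value to pull out `BillingID`:

```json
PUT _ingest/pipeline/parse-request-header
{
  "description": "Illustrative pipeline for the request-header log line",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "The header for the request is ,%{DATA:DeviceID},%{DATA:DeviceRequest},%{DATA},%{UUID:GUID},%{NUMBER:PolicyID},%{DATA:AuthToken1},%{DATA:AuthToken2},%{GREEDYDATA:PairID}"
        ]
      }
    },
    {
      "set": {
        "field": "RequestID",
        "value": "{{DeviceID}},{{DeviceRequest}}"
      }
    },
    {
      "grok": {
        "field": "PairID",
        "patterns": [ "%{DATA}:%{DATA}:%{NUMBER:BillingID}-%{UUID}" ]
      }
    }
  ]
}
```

Because grok is not anchored to the start of the line by default, the first pattern simply skips everything before "The header for the request is" (your question 1), and the second grok over `PairID` shows the same data being parsed again for a different field (your question 2).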

The great thing about reindexing from one index to another is that it leaves the original index untouched. That means you can reindex into as many indexes as your cluster allows.
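For example, assuming your raw logs live in an index called `my-raw-logs` and you have created an ingest pipeline called `parse-request-header` (both names here are made up), the reindex request could look something like:

```json
POST _reindex
{
  "source": { "index": "my-raw-logs" },
  "dest": {
    "index": "my-parsed-logs",
    "pipeline": "parse-request-header"
  }
}
```

The pipeline set on `dest` runs against every document as it is written to the new index, while `my-raw-logs` is left untouched.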

There's a whole list of built-in processors (take a look at the ones listed on the right of the pipeline processors page linked above) you can use to split data on a separator, remove fields, add new fields, and do transforms. For grok processors, here's a list of the grok patterns you can use.
You'll have to play around a bit, and you can either do that in the grok debugger or in the dev tools in Kibana.
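One handy way to iterate from Dev Tools is the ingest `_simulate` API, which runs a pipeline definition against sample documents without indexing anything. A minimal sketch, assuming the raw line is in a `message` field:

```json
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "The header for the request is ,%{DATA:DeviceID},%{GREEDYDATA:TheRest}"
          ]
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "2020-03-31T13:35:31.83186256Z stdout F 31-Mar-2020 13:35:31.831 INFO [s-d-Worker-0] | | [ae9db805-2a84-4431-9fba-6f3f83ea2178] | The header for the request is ,ApplHGFDQ209HG7F,111138,111138,f47dfe16-0043-43a1-8c2b-649dcda436d8,1312984,H8n0T....,eyJpI....,M0010:S08:3114049-f47dfe16-0043-43a1-8c2b-649dcda436d8"
      }
    }
  ]
}
```

The response shows the document as it would look after the pipeline runs, so you can keep tweaking the pattern until the fields come out the way you want.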
Good luck!

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.