Thanks, Noémi, that was fast!
There are two reasons I went with Logstash, but I can easily be persuaded to go back to my old configuration. I would actually prefer to skip Logstash, since that would mean less processing on the ELK server.
- I couldn't figure out how to set up the grok in default.json. That seems to be where I should add the parsing for the extra field(s) I want: there is a customerId in the query string that I'd like extracted into its own field (rough sketch after this list).
- I would like to enable other teams at our organization to update the ETL logic. A Logstash pipeline is a single place to change; otherwise each server's config would need to be updated, which admittedly is not a big deal.
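
For reference, this is roughly what I was trying to add to the "processors" array in default.json, i.e. a grok processor in the module's ingest pipeline. It's just a sketch: url.original is my guess at whichever field in that pipeline holds the request URL, and the pattern assumes customerId appears as an ordinary query-string parameter.

```json
{
  "grok": {
    "description": "Sketch: assumes the URL lives in url.original and customerId is a plain query parameter",
    "field": "url.original",
    "patterns": [
      "(?:\\?|&)customerId=(?<customerId>[^&#]+)"
    ],
    "ignore_missing": true,
    "ignore_failure": true
  }
}
```

If something along these lines works in the ingest pipeline, I could drop Logstash entirely, which would take care of the extra processing on the ELK server.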