You could probably do it with Filebeat processors. For example, the `decode_csv_fields` processor, introduced in Filebeat 7.2, can split a string using a custom separator, and the `dissect` processor can also help separate the rest of the elements.
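A minimal sketch of what that could look like in `filebeat.yml` (the field names, tokenizer, and `;` separator are assumptions, adjust them to your log format):

```yaml
processors:
  - dissect:
      # split the raw line into a timestamp, a level and a CSV-like remainder
      tokenizer: "%{ts} %{level} %{csv_part}"
      field: "message"
      target_prefix: ""
  - decode_csv_fields:
      fields:
        csv_part: csv_columns   # decoded values are stored as an array in csv_columns
      separator: ";"            # custom separator; the default is ","
      ignore_missing: true
      fail_on_error: false
```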
If you want to use grok, you can also use Elasticsearch ingest nodes instead of Logstash. They let you define pipelines similar to the ones you would define in Logstash.
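As a sketch, an ingest pipeline with a grok processor could look like this (the pipeline name and the pattern are just placeholders for your actual format):

```
PUT _ingest/pipeline/my-grok-pipeline
{
  "description": "Example pipeline that parses the message field with grok",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:details}"]
      }
    }
  ]
}
```

You can then point Filebeat at it with the `pipeline` option of the Elasticsearch output, so documents are parsed on ingest without going through Logstash.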