It looks like you're trying to perform the JSON decoding in two places.
If the log file contains plain text lines like the example you gave, the Filebeat JSON processor will not be able to decode them, since each line is not a valid JSON object on its own.
The Logstash configuration looks reasonable, except that it contains a `:` where I don't see one in the example message.
I would recommend choosing one of two options:

1. Send the data from Filebeat through Logstash to Elasticsearch, with Filebeat performing no processing and Logstash performing the grok and JSON decoding (see the sketch after this list).
2. Send the data from Filebeat directly to Elasticsearch and perform the grok and JSON decoding in an ingest pipeline (an example follows further below).
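For the first option, a minimal Logstash pipeline might look something like the sketch below. The grok pattern, port, and host are assumptions on my part, since I don't know your exact line format or setup; adjust them to match your actual message:

```
input {
  beats {
    port => 5044
  }
}

filter {
  # Extract the JSON payload from the surrounding plain text.
  # The pattern is an assumption; adapt it to your real line layout.
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:json_payload}" }
  }
  # Decode the extracted JSON string into structured fields.
  json {
    source => "json_payload"
    remove_field => ["json_payload"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```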
You seem to be very close to getting the first option working. If you are comfortable configuring Logstash and want to run it as a separate process, it's a reasonable choice. But if you don't actually need Logstash for anything else (now or in the foreseeable future), learning to write an ingest pipeline could save you a bit of administrative overhead.
The ingest node documentation is part of the Elasticsearch documentation. By default, every Elasticsearch node is also an ingest node.
An ingest pipeline consists of a sequence of processors that are very similar to Logstash filter plugins. In your case, the Grok and JSON processors should be of particular interest.
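For the second option, a rough sketch of such a pipeline could look like this. The pipeline name and the grok pattern are assumptions (the same generic timestamp/level/JSON layout as above):

```
PUT _ingest/pipeline/my-log-pipeline
{
  "description": "Extract and decode the embedded JSON payload",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:json_payload}"]
      }
    },
    {
      "json": {
        "field": "json_payload",
        "add_to_root": true
      }
    },
    {
      "remove": {
        "field": "json_payload"
      }
    }
  ]
}
```

Filebeat would then reference the pipeline from its Elasticsearch output in filebeat.yml, for example:

```
output.elasticsearch:
  hosts: ["localhost:9200"]
  pipeline: my-log-pipeline
```

The `add_to_root` option places the decoded fields at the top level of the document; leave it out if you'd rather keep them nested under a target field.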
Let us know if you encounter any further roadblocks.