I apologise if this is a silly question. I have been reading the documentation and trying to understand how to use ingest pipelines to get my data into fields. I have had some success; however, I have become stuck with some types of custom logs and was wondering what I am missing.
As an example, I am using a dissect pattern in my ingest pipeline on log lines such as:
2022-12-16 04:25:24-0800 [-] LOAD - POST api call - https:///utilApp/webapi/secured//update?processcd=wady=23 20
2022-12-16 04:25:24-0800 [HTTP11ClientProtocol (TLSMemoryBIOProtocol),client] LOAD - post call response - b status success 20
That works well, as there is no real data I need to separate into explicit fields; I can just pass the whole line to the message field.
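For illustration, a minimal pipeline along these lines handles those two entries (the pattern and the field names `event_time` and `log_source` are a sketch, not my exact pipeline):

```json
PUT _ingest/pipeline/custom-log
{
  "description": "Sketch: peel off the timestamp and source, keep the rest as message",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{event_time} %{+event_time} [%{log_source}] %{message}",
        "append_separator": " "
      }
    }
  ]
}
```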
The issue I am faced with is that the same log file contains some interesting data that I would like to put into ECS fields. I am unsure whether, in Elastic, I can treat certain log content differently in order to process this.
However, my other two lines are dropped with:
reason":"object mapping for [additionaldata] tried to parse field [additionaldata] as object, but found a concrete value"}, dropping event!
I suspect it's due to the data structure not being the same: the JSON lines got [additionaldata] mapped as an object, while these lines supply a plain string value for it. Is there a way to put a conditional in my pipeline so it only copies and processes the field when "Updating load" exists in message, and for the other two lines just skips the "set" and "json" processors?
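What I tried is putting an `if` condition on both processors, roughly like this (the condition string, the `copy_from` source, and the `load_details` target are illustrative; `additionaldata` is the field from the error above):

```json
{
  "processors": [
    {
      "set": {
        "if": "ctx.message != null && ctx.message.contains('Updating load')",
        "field": "additionaldata",
        "copy_from": "message"
      }
    },
    {
      "json": {
        "if": "ctx.additionaldata != null",
        "field": "additionaldata",
        "target_field": "load_details"
      }
    }
  ]
}
```

Parsing into a separate target_field also seems to side-step the object-versus-string mapping clash from the error above.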
This seems to be enough to either process the JSON data or skip it, so I got what I needed. I am not sure it is a particularly good way to do it, though, so I would be grateful if anyone could show me a better method.