Hi,
I want to load data from a Mule log file into Elasticsearch via Logstash.
The following is my log data pattern:
2018-04-26 14:35:51,016 [[cif].HTTP_Listener_Configuration.worker.01] INFO org.mule.api.processor.LoggerMessageProcessor - Process Initialised Corelation Id :ee7a310a-a45a-45cd-9159-caba7e6b97e1 , Flowname :XYZ,Process:PurchaseInfo,Source:S2,Target:T2,Status:SUCCESS End of Process
After parsing the data, I want to extract only the values of Flowname, Process, Source, Target, and Status as separate fields. Which filter in the Logstash config file can achieve this, and how is it done?
It's hard to help based on a single line of data. Searching the internet turns up some outdated links, but some of the information in them is still relevant.
For instance, there is a JSON logging module for Mule. If you can switch to that, the Logstash config is much simpler and Logstash performance is much better.
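As a rough illustration, the Logstash side could then be as simple as the sketch below. This assumes each log line arrives as a single JSON object in the event's message field; the exact fields you get depend entirely on how the JSON logging module is configured.

filter {
  # Parse the JSON text held in the "message" field into top-level event fields
  json {
    source => "message"
  }
}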
If you can't use the JSON logging module, then we will need to see a few samples of the Mule logs (with sensitive info scrubbed).
Depending on how regular the log format is, you can use the Dissect filter (for more regular formats) or the Grok filter (which also works for less regular ones). See the sketches below.
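Going only from the one sample line you posted, a Grok filter along these lines might work. Treat it as an untested sketch: the field names (timestamp, thread, flowname, process, source, target, status, etc.) are my own choices, and the literal text, including the log's own "Corelation" spelling, has to match your real messages exactly.

filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}\] %{LOGLEVEL:loglevel} %{DATA:logger} - Process Initialised Corelation Id :%{UUID:correlation_id} , Flowname :%{DATA:flowname},Process:%{DATA:process},Source:%{DATA:source},Target:%{DATA:target},Status:%{WORD:status} End of Process"
    }
  }
}

If every line really is this regular, a Dissect mapping of the same shape would be cheaper, since Dissect splits on fixed delimiters instead of running regexes. Again an untested sketch against that one sample line:

filter {
  dissect {
    mapping => {
      # %{+timestamp} appends the time to the date, joined by the space delimiter
      "message" => "%{timestamp} %{+timestamp} [%{thread}] %{loglevel} %{logger} - Process Initialised Corelation Id :%{correlation_id} , Flowname :%{flowname},Process:%{process},Source:%{source},Target:%{target},Status:%{status} End of Process"
    }
  }
}

(Note that with this mapping the thread field keeps a leading "[" from the doubled bracket in your sample; trim it with a mutate gsub if that bothers you.)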