I am wondering how I can parse the sample event below into relevant fields in Logstash. Currently I am not seeing any fields from this log in the Kibana UI.
{"COMPILATION_TIME":40,"DATABASE_ID":19,"DATABASE_NAME":"EEERD","END_TIME":"2019-09-25 07:19:22.397 -0400","EXECUTION_STATUS":"SUCCESS","EXECUTION_TIME":13,"INBOUND_DATA_TRANSFER_BYTES":0,"OUTBOUND_DATA_TRANSFER_BYTES":0,"QUERY_ID":"018f1f27-030c-c5e0-0000-18a11974c28e","QUERY_TAG":"","QUERY_TEXT":"GRANT REFERENCES ON EEERD.WS_EDT_DATA_DEV.EDTCNTLPORTCYCBLZTMP3 to role EEER_DB_CATALOG;","QUERY_TYPE":"GRANT","QUEUED_OVERLOAD_TIME":0,"QUEUED_PROVISIONING_TIME":0,"QUEUED_REPAIR_TIME":0,"ROLE_NAME":"EEER_DB_DEPLOY","SCHEMA_NAME":"INFORMATION_SCHEMA","SESSION_ID":27079812049690,"START_TIME":"2019-09-25 07:19:22.314 -0400","TOTAL_ELAPSED_TIME":83,"TRANSACTION_BLOCKED_TIME":0,"USER_NAME":"ABCZXGT","WAREHOUSE_ID":49,"WAREHOUSE_NAME":"RE_TY_USER_RRR_WH"}
My end goal is to parse the sample log shown above into separate fields, using a grok pattern or some other approach.
The example event you provided is already JSON-encoded, so there is no need for grok. You can tell Logstash to decode it on ingest by setting the `json` codec on your input configuration. That will make the individual fields (QUERY_ID, USER_NAME, EXECUTION_STATUS, and so on) available on the event document, and they should then show up as fields in Kibana after you refresh the index pattern.
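As a minimal sketch of what that could look like (the file path is hypothetical; substitute whatever input you are actually using, e.g. beats or tcp):

```
input {
  file {
    # hypothetical path to the log file containing the JSON events
    path => "/var/log/snowflake/query_history.log"
    codec => json
  }
}

filter {
  # optional: use the event's own START_TIME as @timestamp
  date {
    match => ["START_TIME", "yyyy-MM-dd HH:mm:ss.SSS Z"]
  }
}
```

If the JSON is embedded inside a larger message field instead of being the whole line, you can use the `json` filter with `source => "message"` rather than the codec.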