Hello
I am new to Kibana and am having difficulty reading my logs for visualization.
I have pasted a sample of my log field below.
I need to read the values inside the log to create visualizations in Kibana, for example where env is DEV, where transactionID is xyz, or where the payload contains a certain value. Could someone help me with how this is done? Thank you!
Pasting sample log field here:
2021-12-08T14:02:49.899 INFO [bwEngThread:In-Memory Process Worker-1] c.t.b.p.g.L.E.LogMessage - {"env":"DEV","appName":"GenLogs","transactionID":"449d8241-392f-4877-a5ea-dddeebed3c29","timestamp":"1638972169775","srcApplication":"EPIC","operation":"testLog","type":"INFO","message":"New message logged.","payload":"<timer:TimerOutputSchema xmlns:timer=\"http://tns.tibco.com/bw/activity/timer/xsd/output\"><Now>1638972169328</Now><Hour>2</Hour><Minute>2</Minute><Second>49</Second><Week>50</Week><Month>12</Month><Year>2021</Year><Date>2021-12-08</Date><Time>2:02:49 PM</Time><DayOfMonth>8</DayOfMonth></timer:TimerOutputSchema>"}
You will need to use ingest processors like grok or dissect, followed by the json processor, to parse it all out. If you are ingesting through Logstash you can also do it there.
If you can paste your entire log message in text format, someone might be able to help you configure it. It should be text, not a screenshot.
Then you can send the extracted json_message field through a JSON filter and break it apart even further, so that you get the individual fields at the root of your JSON object.
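Something along these lines should be close, as an untested starting sketch pasted into Kibana Dev Tools. The pipeline name bw-logs and field names such as json_message and bw are just placeholders I picked:

```
PUT _ingest/pipeline/bw-logs
{
  "description": "Sketch: split the BW log line, then expand the embedded JSON",
  "processors": [
    {
      "dissect": {
        // cut the raw line into timestamp, level, thread, logger and the JSON tail
        "field": "message",
        "pattern": "%{log_timestamp} %{log_level} [%{thread}] %{logger} - %{json_message}"
      }
    },
    {
      "json": {
        // expand the JSON tail into structured fields under "bw"
        "field": "json_message",
        "target_field": "bw"
      }
    },
    {
      "remove": {
        // drop the intermediate field once it has been parsed
        "field": "json_message",
        "ignore_missing": true
      }
    }
  ]
}
```

With that in place the values show up as bw.env, bw.transactionID, bw.payload and so on, which you can filter on in Kibana (env: DEV, for example). The payload stays an XML string inside bw.payload, and you could add a date processor afterwards to turn log_timestamp into a proper timestamp field.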
I had to remove Logstash from the stack, as the customer wants only Fluent Bit and Elasticsearch.
I read that grok does not work with Fluent Bit. Is there any other way to achieve what I want with just Fluent Bit and Elasticsearch?