I'm creating new logs (using log4net) and I was wondering: is there a standard log format that lets Kibana discover the available fields automatically?
I just spotted this in another post:
kv {
    value_split => ":"
    field_split => ","
}
Would that be the answer? It looks like a key/value split... perhaps it's custom?
Where is the message string constructed? In the application itself, or in the logging stack? Ideally you'd avoid serializing the fields into a single string in the first place, but if that's not possible a kv filter is probably the best bet.
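For reference, a kv filter like the snippet above would normally sit inside the filter section of logstash.conf. This is only a sketch assuming your message field holds comma-separated key:value pairs (the source field name is whatever your input produces):

```
filter {
  kv {
    source      => "message"
    field_split => ","
    value_split => ":"
  }
}
```

With that, a message like `Function:Dispose,thread:22` would be split into separate `Function` and `thread` fields that Kibana can then pick up.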
{"date":"2017-12-15T11:34:38.1718475+00:00","level":"INFO","appname":"Default Web Site","logger":"DAL.DBManager","thread":"22","ndc":"(null)","message":"","Function":"ExecuteDataReader"}
{"date":"2017-12-15T11:34:38.4541514+00:00","level":"INFO","appname":"Default Web Site","logger":"DAL.DBManager","thread":"22","ndc":"(null)","message":"","Function":"Dispose"}
Magnus, now that I have JSON as my logs, what do I need to do next? Is it as easy as some sort of filter in logstash.conf?
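Essentially yes. Since each log line is already a complete JSON object, a json filter can parse it into fields; a date filter can then map your `date` field onto `@timestamp`. A minimal sketch (field names taken from your sample lines; adjust to your pipeline):

```
filter {
  json {
    source => "message"
  }
  date {
    match  => ["date", "ISO8601"]
    target => "@timestamp"
  }
}
```

Alternatively, if the input plugin reads these lines directly, setting `codec => json` on the input achieves the same parsing without a separate filter.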