We have log files from SQL Server and Oracle. I want to know which fields we typically need to extract (annotate) through Logstash before sending them for indexing. I'm not sure whether Filebeat has that domain intelligence.
The idea is to target very specific business use cases through these annotations. Any help would be highly appreciated.
You either need Filebeat with an Elasticsearch ingest pipeline or Filebeat with Logstash.
What fields can and should be extracted depends on what's available and what you want to do with the logs. Most log formats offer a timestamp, a level, possibly a source (like a logger name), and a free-form message, so that's a starting point.
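As a sketch of that starting point, here is a hypothetical Logstash filter for the default SQL Server ERRORLOG layout, which I'm assuming to be a timestamp, a source column (such as `spid53`, `Server`, or `Logon`), and a free-form message. The field names (`log_timestamp`, `source`, `spid`, `log_message`) are my own choices, not anything mandated by Elastic or Microsoft:

```
filter {
  grok {
    # Assumed line shape: "2023-05-10 12:34:56.78 spid53      Starting up database 'master'."
    match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp}\s+%{NOTSPACE:source}\s+%{GREEDYDATA:log_message}" }
  }
  # When the source column names a worker like "spid53", pull out the numeric process ID;
  # suppress the failure tag for lines where the source is "Server", "Logon", etc.
  grok {
    match => { "source" => "spid%{INT:spid}" }
    tag_on_failure => []
  }
  # Replace @timestamp with the time parsed from the log line itself
  date {
    match => [ "log_timestamp", "yyyy-MM-dd HH:mm:ss.SS" ]
  }
}
```

The same extraction could equally be expressed as grok and date processors in an Elasticsearch ingest pipeline if you want to skip Logstash.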
I am looking beyond timestamp, level, source, etc. For example: process IDs, state information, and so on, the things an SME looks at while investigating an issue. I understand the "how" of extraction; what I am struggling with is the "what" :). I am wondering whether Microsoft has documented these fields somewhere for SQL Server. I will keep looking.
Thanks
Sanjeev