I have records/documents of variable length, say three types of logs arriving in a single log file.
What is a grok pattern that can match variable-length documents from a single log file?
The grok pattern would be used in the Kibana interface when creating the index and ingest pipeline via the 'Upload a sample file' feature provided by Kibana.
You can have multiple patterns as a list in a grok filter. If you set break_on_match => true, it will exit after the first pattern that matches. Then you just need to construct each pattern so that it matches only one of the line types.
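As a minimal Logstash sketch (the field names and patterns here are hypothetical placeholders, not taken from the actual log file in this thread):

```
filter {
  grok {
    # Patterns are tried in order; with break_on_match => true (the default),
    # grok stops at the first pattern that matches the message.
    break_on_match => true
    match => {
      "message" => [
        "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}",
        "%{IP:client} %{WORD:method} %{URIPATHPARAM:request}"
      ]
    }
  }
}
```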
I modified the grok in the ingest pipeline to include the two patterns that the log file contains, and it worked: it accepted all entries in the log file coming from Filebeat. I am not using Logstash, though,
so where do I apply "break_on_match => true"? I couldn't find anything similar in the ingest pipeline syntax.
The following is the ingest pipeline modification I made; I hope it is OK.
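For reference, the ingest pipeline's grok processor does not need a break_on_match option: it accepts a patterns array and always applies the first pattern that matches. A minimal sketch (the patterns and field names are placeholders, not the ones from this log file):

```
PUT _ingest/pipeline/my-multiline-logs
{
  "description": "Patterns are tried in order; the first match wins",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}",
          "%{IP:client} %{WORD:method} %{URIPATHPARAM:request}"
        ]
      }
    }
  ]
}
```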
Just a tip: you do not need grok to parse a message like this. From what you shared, it seems to be a Fortigate log event, which is a message made of key-value pairs.
Looking at your grok pattern, you could replace everything with a dissect processor and a kv processor.
Your dissect processor could be something like this:
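The original example block was not preserved in this thread, but assuming the Fortigate event is a flat run of `key="value"` pairs, possibly behind a short prefix, a sketch could look like this (the dissect pattern and field names are hypothetical):

```
{
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{syslog_prefix} %{kv_pairs}"
      }
    },
    {
      "kv": {
        "field": "kv_pairs",
        "field_split": " ",
        "value_split": "=",
        "trim_value": "\""
      }
    }
  ]
}
```

Here trim_value strips the surrounding double quotes from values like from="aditya@gmail.com".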
This is a great idea, thanks a lot. I am just a beginner here.
As you said, I am trying to handle Fortinet logs.
I removed the grok processor and added the dissect and kv processors for the ingest node,
but nothing gets logged into the index (I didn't change the index).
Can you please explain how to troubleshoot? My ingest node _simulate request is not working: it gives an error for the "field": "message" part itself.
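One way to troubleshoot is the _simulate API, which runs a pipeline against sample documents without indexing anything. A minimal sketch (the sample message is hypothetical); a common cause of errors on "field": "message" is forgetting to nest the document's fields under _source in the docs array:

```
POST _ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      {
        "kv": {
          "field": "message",
          "field_split": " ",
          "value_split": "=",
          "trim_value": "\""
        }
      }
    ]
  },
  "docs": [
    {
      "_source": {
        "message": "from=\"aditya@gmail.com\" attachment=\"yes\""
      }
    }
  ]
}
```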
Sir,
Thank you for the reply.
As you said, grok always breaks on the first match, and all log entries are indexed correctly; but, as suggested in the earlier reply, separate dissect and kv processors can work better.
However, Discover did not show the additional fields (from and attachment, i.e. from="aditya@gmail.com" and attachment="yes") while viewing the index, even though it showed all 7 rows.
Sir,
Actually, all fields were viewable, but only after I changed the options in the following way.
Under Kibana > Discover, after selecting the index pattern, for the field names under 'Filter by type' I had the options 'Aggregatable = yes' and 'Searchable = yes' selected;
with those settings, the fields from (from="aditya@gmail.com") and attachment (attachment="no") were NOT listed.
Only when I selected 'Aggregatable = any' and 'Searchable = any' did those fields get listed.
Now the issue is: HOW do I make these fields show up under 'Aggregatable = yes' and 'Searchable = yes'?
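Two things worth checking here. First, Discover's searchable/aggregatable flags come from the index pattern's cached field list, so after new fields appear in the index, the field list usually needs refreshing (Management > Index Patterns > select the pattern > refresh fields). Second, you can verify how Elasticsearch actually mapped the fields; a sketch, assuming a hypothetical index name:

```
GET my-fortinet-index/_mapping/field/from,attachment
```

With default dynamic mapping, string fields are mapped as text with a .keyword subfield, and it is the .keyword subfield that is aggregatable.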
Thanks for the support.
shini