I'm very new to all this ELK stuff and I'm having a lot of trouble getting Logstash to parse my file. I followed a tutorial and managed to get Filebeat to send my logs to Logstash, and I can see them in Kibana. Now I'm trying to use grok to split a bunch of items out into their own fields, but nothing seems to be working.
The lines in my logs have spaces after the words and then a tab character just before the next entry (and sometimes a given field may be empty). I've also got a few lines at the top of the file which I need to exclude completely, but I have no clue how to do that (unless grok will handle it, although the very first line in the file may still be a match because it's a date and time).
Below is a sample line from the log, along with the relevant .conf files for Logstash. I can't figure out what I'm doing wrong. Can anyone help?
Time Category Severity Entry type Local time UTC time Machine Application type User Entity Entity type Entity guid Details
3:14:34 PM Security Information Entity created 5/28/2015 3:14:34 PM 5/28/2015 7:14:34 PM MYMACHINE MyApp 1.1.1.1 - myentity myentitytype {00000000-0000-0000-0007-0050F9100E7F} My details
in the "mypatterns" file
TIMESTAMP_12HOUR %{TIME} (AM|PM)
DATETIME_12HOUR %{DATE_US} %{TIME} (AM|PM)
MULTIWORD .[^\t]+
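And in my Logstash .conf, the grok filter that uses them looks roughly like this (trimmed to the first few fields; the patterns_dir path is just where I put the file):

filter {
  grok {
    # directory that holds the "mypatterns" file above
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{TIMESTAMP_12HOUR:time} %{WORD:category} %{WORD:severity} %{MULTIWORD:entrytype}" }
  }
}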
First off, I also ran into the same problem when I started with Logstash. So far your match looks fine, except for using blanks (" ") between the predefined patterns.
If you use "\s" instead of blanks (" "), this should work. Remember that everything in the message field must match your pattern exactly.
So, for example, if your message field looks like this:
Time: 3:14:34 PM Category:Security Severity:Information Entry type:Entity created
and your patterns file looks like the one you've mentioned, then the match setting in your filter should look something like this:
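(a sketch based on your sample line; the Category part is my addition, and the field names are only suggestions)

match => { "message" => "Time:\s%{TIMESTAMP_12HOUR:localtime}\sCategory:%{WORD:category}\sSeverity:%{WORD:severity}\sEntry\stype:%{MULTIWORD:entrytype}" }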
So your matching pattern starts with the word Time; since you want to match this word literally, you just write "Time:". The "\s" is a replacement for the whitespace/blank character. Then you save the time in the "localtime" field. This is followed by a whitespace again ("\s"), and the same idea covers "Category:" and the "category" field. Your pattern continues with the word Severity, so you just type "Severity:" and save the value in the "severity" field. Again a whitespace character ("\s") followed by the "Entry type" pattern... and so on.
However, the log entries after that aren't being picked up, because there's a lot of whitespace (spaces and tabs) between the hostname and the next field with real data. Is there a trick to pick up/skip all that whitespace?
I also think my MULTIWORD pattern may not be correct. It picks up "Start Logging", but it also grabs all the spaces after it, and I can't figure out the correct regex to get just the two words and skip all the whitespace after them. Do you know the correct syntax for that?
The only solution that comes to my mind at the moment is to use the mutate filter. I've never used it before; I've just read that you can replace characters with it, so maybe it's a good approach to get rid of all the whitespace.
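For example, something like this should collapse every run of spaces/tabs in the message into a single space (just a sketch, untested against your data):

filter {
  mutate {
    # squeeze any run of whitespace down to one space
    gsub => [ "message", "\s+", " " ]
  }
}

Note that this turns the tabs into spaces too, so you'd no longer be able to tell a field boundary from a space inside a value like "Entity created"; it may only be a starting point.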
The good news is I think I figured out a grok pattern that works; the bad news is I don't see any new fields in Kibana.
In Kibana, under the Discover tab, I see the new log entries, but when I expand one, all the extra fields I defined in my grok pattern are not there. Is there any way I can debug this to see why I'm not getting the extra fields?
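One way to debug it is to add a temporary stdout output with the rubydebug codec and watch what Logstash actually emits; if the events come out with a "_grokparsefailure" tag, grok isn't matching and no fields will be added (a minimal sketch):

output {
  # temporary: print every event, with all its fields and tags, to the console
  stdout { codec => rubydebug }
}

And once you can see which lines fail to match (for example the header lines at the top of the file), you could drop them the same way:

filter {
  if "_grokparsefailure" in [tags] {
    # lines that don't match the grok pattern get this tag; discard them
    drop { }
  }
}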