Can't even parse very simple data and get _grokparsefailure as a tag

I want to get started with ELK, but I keep getting stuck. After two weeks of complications I can finally ingest my sample log into Elasticsearch, but I have run into another problem: I can't see all my records in Kibana, and the ones that are sent are tagged with _grokparsefailure, which means something is wrong with my grok filter. Below are the details of my lab.

My sample log file is:
user1 email1 pass1
user2 email2 pass2
user3 email3 pass3

My Logstash config file is:

input {
  file {
    path => "E:/ELK/Data/test.log"
    start_position => "beginning"
    type => "log"
    codec => plain { charset => "ISO-8859-1" }
  }
}

filter {
  grok {
    match => { "@message" => "%{WORD:username} %{WORD:email} %{WORD:hash}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "rain"
  }
}

I tried this filter in the Grok Debugger and it works just fine, but it doesn't work with Logstash.
This is what I get in Kibana:

As you can see, the third record is missing and the available ones are tagged with _grokparsefailure.
Thank you for any help.

The third line is probably missing because you do not have a newline after the last line. Also note that the field containing the data is named message and not @message, which is probably why the grok filter fails.
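In other words, the match should be against the message field, roughly like this (a sketch of just the filter block):

filter {
  grok {
    # the file input puts the raw line into the "message" field, not "@message"
    match => { "message" => "%{WORD:username} %{WORD:email} %{WORD:hash}" }
  }
}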


Thank you, Christian. Regarding the @ before message: it was just "message" before, and I only added the @ thinking it would solve the problem.

You are right about the third line: after I hit Enter after the last line, it was sent. The remaining problem is the _grokparsefailure tag; the @ before message is not the cause, since I only added it afterwards.

Did you change the field name in the grok filter?


Sorry, I didn't get your point?

My grok filter is:
filter {
  grok {
    match => { "message" => "%{WORD:username} %{WORD:email} %{WORD:hash}" }
  }
}

The filter is correct; it works in the Grok Debugger as shown below:

Have you reprocessed your file to index the data with the new config?


I didn't change the config, sir.

Are you referring to the @ before message? If so, I can assure you that is not the cause, since I only added the @ recently. And yes, I reprocessed my file to index the data with the new config, and I still have the _grokparsefailure issue.

I am not sure what you are doing wrong. This works for me:

input {
  generator {
    lines => ['user1 email1 pass1',
              'user2 email2 pass2']
    count => 1
  }
}

filter {
  grok {
    match => { "message" => "%{WORD:username} %{WORD:email} %{WORD:hash}" }
  }
}

output {
  stdout { codec => rubydebug }
}


I think something is wrong with my file input, because the data parses successfully with the generator input but not with the file input.

This works successfully:

input {
  generator {
    lines => ['user1 email1 pass1',
              'user2 email2 pass2']
    count => 1
  }
}

filter {
  grok {
    match => { "message" => "%{WORD:username} %{WORD:email} %{WORD:hash}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "rain5"
  }
}

But this didn't work:

input {
  file {
    path => "E:/ELK/Data/source6.log"
    start_position => "beginning"
    type => "log"
    codec => plain { charset => "ISO-8859-1" }
  }
}

filter {
  grok {
    match => { "message" => "%{WORD:username} %{WORD:email} %{WORD:hash}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "hope"
  }
}

What happens if you remove the input codec?
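That is, something along these lines, with the codec line dropped so the default plain/UTF-8 codec is used (a sketch of just the input block):

input {
  file {
    path => "E:/ELK/Data/source6.log"
    start_position => "beginning"
    type => "log"
    # no codec specified, so the default is used
  }
}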

Even worse: I still have the _grokparsefailure issue, and in addition to it I now get a codec error.

It is very odd. Can you try with the dissect filter as well?

input {
  generator {
    lines => ['user1 email1 pass1',
              'user2 email2 pass2']
    count => 1
  }
}

filter {
  dissect {
   mapping => {
     "message" => "%{username1} %{email1} %{hash1}"
    }
  }

  grok {
    match => { "message" => "%{WORD:username} %{WORD:email} %{WORD:hash}" }
  }
}

output {
  stdout { codec => rubydebug }
}


It worked, but the tag is still there, lol.
I think I have a problem in the grok filter.
This is what I get:

As you can see, I have the username, email, and hash attributes, but the damned tag is still there.

I do not know why the grok filter is having issues, nor what could be causing it. The only thing I can think of is the charset. I would recommend using the dissect filter instead: change the field names and remove the grok filter.
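For reference, a dissect-only version of that filter block might look like this sketch (the field names are just the ones the grok pattern used, rename them as you like):

filter {
  dissect {
    mapping => {
      # same mapping as before, with the final field names instead of username1/email1/hash1
      "message" => "%{username} %{email} %{hash}"
    }
  }
}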


After removing the grok filter, it works just fine. But I wonder why I can't get grok working, because I read that:

Also dissect is preferably used in situations where number of fields are always the same otherwise grok is a better option.

Unfortunately that is my case, because I have different kinds of lines in the same log, like this:

2018-10-23 12:27:47.93 spid54 Using 'xpstar.dll' version '2014.120.2000' to execute extended stored procedure 'xp_instance_regread'. This is an informational message only; no user action is required.
2018-10-23 12:29:32.49 spid54 Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
2018-10-23 12:29:32.52 spid54 Using 'xplog70.dll' version '2014.120.2000' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.
2018-10-23 13:45:21.71 Logon Error: 18456, Severity: 14, State: 7.
2018-10-23 13:45:21.71 Logon Login failed for user 'sa'. Reason: An error occurred while evaluating the password. [CLIENT: ]
2018-10-23 13:46:54.70 Logon Error: 18470, Severity: 14, State: 1.

Yes, for mixed types of data it is often easier to use grok. Let's see if someone else has an idea about what is going on.
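For what it is worth, a single pattern that captures the timestamp, the source column (spid54, Logon, ...) and the rest of the line could be a starting point for the sample lines above; this is only a sketch, and the field names are illustrative:

filter {
  grok {
    match => {
      # timestamp, then the source column, then the remainder of the line
      "message" => "%{TIMESTAMP_ISO8601:log_timestamp}\s+%{NOTSPACE:source}\s+%{GREEDYDATA:log_message}"
    }
  }
}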


Thank you very much, sir.

I will move to CentOS 7 and install ELK there, because I think this craziness is due to Windows.
