Kibana not showing dictionary output of log events

Hi there,

This is my first post here. I went through many conversations you guys have had, but I am still struggling to make my ELK stack work the way I'd like.

I am using ELK version 6.5.4, running everything with docker-compose.

The log event I have is the following:

93.136.229.0 - - [23/Jan/2019:14:31:16 +0000] "GET / HTTP/1.1" 200 3476 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; rv:64.0) Gecko/20100101 Firefox/64.0"

The grok pattern I have is the following:

%{IPORHOST:remote_ip} - - \[%{HTTPDATE:access_time}\] \"%{WORD:http_method} / HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} %{GREEDYDATA:msg}

This works great on the grok constructor and grok debugger sites, i.e. everything gets broken into key:value output like a dictionary, but that is not the case in Kibana. In Kibana everything ends up in the message field.
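As a sanity check outside the stack, the intended key:value breakdown can be sketched with plain Python named groups. This is a rough, simplified stand-in for the grok sub-patterns (IPORHOST, HTTPDATE, NUMBER, GREEDYDATA), not what Logstash runs internally:

```python
import re

# Simplified equivalents: IPORHOST -> [\d.]+, HTTPDATE -> [^\]]+,
# NUMBER -> [\d.]+, GREEDYDATA -> .*
log = ('93.136.229.0 - - [23/Jan/2019:14:31:16 +0000] "GET / HTTP/1.1" '
       '200 3476 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.13; '
       'rv:64.0) Gecko/20100101 Firefox/64.0"')

pattern = (r'(?P<remote_ip>[\d.]+) - - \[(?P<access_time>[^\]]+)\] '
           r'"(?P<http_method>\w+) / HTTP/(?P<http_version>[\d.]+)" '
           r'(?P<response_code>\d+) (?P<body_sent_bytes>\d+) '
           r'(?P<msg>.*)')

# groupdict() yields the dictionary-style output the grok debuggers show
fields = re.match(pattern, log).groupdict()
print(fields['remote_ip'])    # 93.136.229.0
print(fields['access_time'])  # 23/Jan/2019:14:31:16 +0000
```

If grok matches in the debugger, this is the set of fields you should expect to see as separate columns in Kibana.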

I have tried the kv filter with the add_field option, but I guess I am doing something wrong, since I don't get values next to the wanted field in Kibana. Instead I get:

`access_time:%{[doc][access_time]}`

Here are the links I was following:

My grok in Logstash config file looks like the one below:

filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{IPORHOST:remote_ip} - - \[%{HTTPDATE:access_time}\] \"%{WORD:http_method} / HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} %{GREEDYDATA:msg}" }
    }
  }
}

I have given up on adding the kv filter or anything else, since I no longer know:

  • whether the kv filter goes inside the same filter block that grok does
  • whether anything else is needed so that kv recognizes all the keys and values from the grok pattern

I also don't understand very well how Logstash recognizes the log event above; my beginner's conclusion is that it arrives in plain-text format and not in JSON. If I add a kv filter as a standalone block below the one containing grok, then Kibana picks up the IP of the host as the key and everything else as the value, and this is where my misunderstanding begins.

If anyone would be so kind as to tell me what I am doing wrong and show me:

  • a simple example of the proper syntax inside the Logstash config, so that when I open Kibana I get:

access_time: 28th of January 2019
remote_ip: 193.74.34.5

instead of message: Entire log event as is

or

access_time:%{[doc][access_time]}

Not to forget: I am using Filebeat 6.5.4 as input, and Elasticsearch on :9200 plus the rubydebug codec as output.
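Since the thread doesn't show that part of the config, here is a minimal sketch of what such an input/output section might look like (the beats port and the elasticsearch host name are assumptions, not taken from the original config):

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
  stdout {
    codec => rubydebug
  }
}
```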

The logstash config you posted indicates that everything should go into the message field. Do you see any other fields in your output?

Can you query your index directly using Kibana devtools and paste a sample of the output here?

Hi,

Sorry, I don't know how to work properly with Dev Tools, because I get errors when trying to use it. When I curl against the index on the server end I get proper output.

If you have an example of what I can type into Dev Tools, please let me know.

Everything I put in there just errors out, even the index name.

Here is what happens when I try to imitate the curl command.

GET '/logstash-*/_search?pretty'
{
"query": {
"match_all":
}

The output is:

{
  "error": {
    "root_cause": [
      {
        "type": "json_parse_exception",
        "reason": "Unexpected character ('}' (code 125)): expected a value\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@48ded050; line: 4, column: 4]"
      }
    ],
    "type": "json_parse_exception",
    "reason": "Unexpected character ('}' (code 125)): expected a value\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@48ded050; line: 4, column: 4]"
  },
  "status": 500
}

How do you do this correctly?

When I curl it from the server end I get proper output, and I am a bit closer: I have changed the filter and most of the key:value pairs come out, but it just chops off the browser information. I don't understand why the message field was such an issue.

New filter:

filter {
  grok {
    match => ["message", "%{COMMONAPACHELOG:msg}"]
    remove_field => "message"
  }
  kv {
    source => "msg"
    remove_field => ["msg"]
    field_split => ", "
    value_split => "="
  }
}

You can perform a full search in dev tools using:

POST logstash-*/_search
{
  "query": {
    "match_all": {}
  }
}
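Outside Dev Tools, the equivalent curl call would look like this (assuming Elasticsearch is reachable on localhost:9200; adjust the host/port for your docker-compose setup):

```
curl -s -H 'Content-Type: application/json' \
  'http://localhost:9200/logstash-*/_search?pretty' \
  -d '{ "query": { "match_all": {} } }'
```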

When I curl it from the server end I get proper output, and I am a bit closer: I have changed the filter and most of the key:value pairs come out, but it just chops off the browser information.

Try using COMBINEDAPACHELOG instead of COMMONAPACHELOG. The Combined version appears to include the user agent.
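Dropped into the filter from the earlier post, that suggestion would look roughly like this (a sketch, not a tested config):

```
filter {
  grok {
    match => ["message", "%{COMBINEDAPACHELOG}"]
    remove_field => "message"
  }
}
```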

When I do that I get the same error as with the GET method:

{
  "error": {
    "root_cause": [
      {
        "type": "json_parse_exception",
        "reason": "Unexpected character ('}' (code 125)): expected a value\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@17af3b18; line: 4, column: 4]"
      }
    ],
    "type": "json_parse_exception",
    "reason": "Unexpected character ('}' (code 125)): expected a value\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@17af3b18; line: 4, column: 4]"
  },
  "status": 500
}

COMBINEDAPACHELOG worked, so the output looks good for this one.

Is there any chance that the new Kibana doesn't work so well, or is there some kind of more elegant solution? I ask that question because I am still surprised that the message field in the grok pattern bothered it so much. Has the syntax changed in that regard?

What also surprises me is that I needed to use the kv filter as an addition.

After removing the message field with remove_field => "message" it all works, i.e. Kibana doesn't shove the entire log event into the message field. How can that be, and why?

For a n00b like me some things around that are not clear. Could you please explain?

When I do that I get the same error as with the GET method.

Can you post a screenshot of what Dev Tools looks like when you get that error? The example I gave was a direct copy/paste from mine, which worked for me.

I ask that question because I am still surprised that the message field in the grok pattern bothered it so much. Has the syntax changed in that regard?

This likely has nothing to do with Kibana itself. The message field is provided by Logstash in this case, and Kibana will display whatever data it has available to it. If you provide screenshots of the problems you were facing, that would give me a better idea of where the disconnect is.

It looks like you also removed the if [type] == "apache" clause from your filter. I wonder if this helped to fix your problem too? It's possible that the type wasn't being identified as apache. What happens if you remove the kv section? Does it still work?

I will try all of this (you wrote) tomorrow and paste everything here. Those are really good questions.

I removed the if conditioning without noticing, through trial and error and copy/pasting different solutions from the Elastic forum.

And yes, if [type] == "apache" was the biggest culprit. I have tried multiple versions of the docker ELK setup now to debug it by following your questions. When I turned the if conditioning back on, everything was in the message field again instead of the dictionary view. Now I finally know the reason.

I have also used Dev Tools in a private browsing window and it was all OK, so yesterday must have been a browser cache issue.

My conclusion about the kv filter is that it cleans out unnecessary characters like ", /, ., etc.

I do have to admit I have noticed that document_type will become obsolete in future ELK editions. I was doing the if conditioning based on that. I had set document_type in Filebeat like this:

paths:
  - /var/log/apache2/access.log
document_type: apache

And my expectation was that if [type] == "apache" in the logstash config file would recognize it, as in: if there is an apache log, then match this, etc. I guess I got it wrong.

Now I have questions about types when I have multiple sources, i.e. log files from different servers. How do I split them so Logstash puts them in the right spot? I started with conditioning because I noticed in Kibana that log events ended up in the wrong place.

I do understand that they are all type: log in Filebeat, but I still haven't figured out how to differentiate them in both Filebeat and the logstash config file. Any info or direction would be good.

I have figured out how to do a proper if condition with tags; here is an example:

Filebeat part:

paths:
  - /var/log/apache2/access.log
tags: ["apache"]

Logstash part:

filter {
  if "apache" in [tags] {
    grok {
      match => ["message", "%{COMBINEDAPACHELOG}"]
    }
  }
}

Also, thanks very much for your help and for the right questions, which brought me to a solution that works.

Excellent, happy to help!

I do have one more thing, though. I am fixing a bigger Logstash config where I have custom grok patterns.

In Filebeat I have multiple log files, and some of them show output in Kibana OK and some do not. I suspect it takes time for Filebeat to send things, but I just want to make sure the following is legit in the logstash config file:

if "something-1" in [tags] {
  grok {
  }
}
...
if "something-N" in [tags] {
  grok {
  }
}

I have this set multiple times, i.e. I have multiple if conditions like that. So I was wondering whether that is OK with Logstash, or whether I should use else if instead?

I'm not sure how Logstash checks multiple if statements: like in programming, or does it check them all and give output for whichever match?

You'll probably have better luck asking that in the Logstash topic. I'm really not all that familiar with how Logstash interprets conditional configuration. If I had to guess, I'd say you'd need an else if you didn't want both conditions to potentially execute.
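For what it's worth, Logstash conditionals do support else if chains; here is a hedged sketch based on the snippet above (the grok bodies are placeholders):

```
filter {
  if "something-1" in [tags] {
    grok {
      # pattern for source 1
    }
  } else if "something-N" in [tags] {
    grok {
      # pattern for source N
    }
  }
}
```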

Ah, ok, I'll open a new question under Logstash category then. Thanks

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.