How to extract the index name from the uri field in Logstash


(Jose pal) #1

Hi,

The api and index fields are not being populated in the request logs. To extract the index and api values from the uri field, I am using custom grok pattern rules like the ones below. Somehow it doesn't work for me; any suggestions/help would be appreciated.
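A minimal sketch of a filter of this shape, assuming a uri like /test/_search (the pattern here is a guess, not the poster's actual rules):

filter {
  grok {
    # Hypothetical pattern: a uri such as "/test/_search" would yield
    # index => "test" and api => "_search".
    match => { "uri" => "^/(?<index>[^/]+)/(?<api>[^/?]+)" }
  }
}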


(Magnus Bäck) #2

And what does an example input line look like?

When and how are the esindex and esapi fields set?


(Jose pal) #3

Sorry for the confusion. The actual fields are just index and api; I renamed esindex and esapi to index and api in the post. Please let me know if I have missed anything here.


(Magnus Bäck) #4

You missed answering both my questions.


(Jose pal) #5

Hi Magnusbaeck,
I have now updated the issue with the exact field names, so I hope my issue is clear now. One more thing: the grok pattern works fine in the Grok Debugger, but the same pattern does not work when run with Logstash.

Please have a look and let me know what I am missing or need to add.


(Jose pal) #6

My example input looks like: /test/_search


(Magnus Bäck) #7

What does a stdout { codec => rubydebug } output produce for an example input line? I haven't seen proof that the uri field contains the expected string.
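That is, a temporary debugging output like this, in place of (or alongside) the normal output:

output {
  stdout {
    # Prints each event with all of its fields, so you can verify
    # what the uri field actually contains.
    codec => rubydebug
  }
}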


(Jose pal) #8

This is what my output file looks like:


(Magnus Bäck) #9

Your grok filter appears to be the very first filter. Does the uri field really have the expected value at that time?


(Jose pal) #10

Magnus, I am not sure; I also suspect that the uri parsing is not happening here. What should I do now? Any suggestion?


(Magnus Bäck) #11

I would expect you to know what kind of data enters Logstash. If it's JSON data that you're parsing with the json filter, you obviously need to put the json filter before any filters that want to process fields found in the JSON string.
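In configuration terms, that ordering is shaped like this (the message source field and the grok pattern are assumptions):

filter {
  # Parse the JSON first so that fields such as uri exist...
  json {
    source => "message"
  }
  # ...before any filter that reads them runs.
  grok {
    match => { "uri" => "^/(?<index>[^/]+)/(?<api>[^/?]+)" }
  }
}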


(Jose pal) #12

Yes, Magnus. My data is in JSON format and I do use the json filter as well, but it is still not working.


(Magnus Bäck) #13

Yes, but have you changed your configuration file so the json filter is listed before the grok filter?


(Jose pal) #14

Thanks for your reply, Magnus.


(Magnus Bäck) #15

grok {
  json {
    break_on_match => false
    keep_empty_captures => true
    match => [ "uri", "(^/)" ]
    match => [ "uri", "(/)" ]
    tag_on_failure => []
  }
}

No! Nowhere did I suggest you put the json filter inside the grok filter.

Look, this is very simple. Filters are executed in the order listed in the configuration. If you have one filter that creates a new field like uri (e.g. by parsing a JSON string) and another filter that attempts to parse that field, the second filter needs to come after the first. Since you still haven't provided an example line from your log, you've left me guessing at how your configuration is supposed to work. Over and out, and good luck.
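For reference, the quoted snippet, unnested and with the json filter first, would be shaped like this (the json filter's source field is a guess, since the input format was never shown):

filter {
  json {
    source => "message"   # assumption: the raw JSON string arrives in message
  }
  grok {
    break_on_match => false
    keep_empty_captures => true
    match => [ "uri", "(^/)" ]
    match => [ "uri", "(/)" ]
    tag_on_failure => []
  }
}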


(Jose pal) #16

Magnus, sorry for missing the input log file here; I thought I had already attached it. Please find it here.


#17

Before you worry about the filters, you should fix your input so that it correctly assembles events. The log you show has the timestamp on the line after the JSON. So your multiline codec configuration should have negate => false.

If you use negate => true then the first line of the log (the JSON) will appear in a separate event, which will massively confuse you as you try to debug the configuration.

Once you are assembling events correctly, I would recommend you start with a dissect filter to break out the JSON from the timestamp, log level, etc. Then a json filter to parse the JSON, as sketched below.
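A sketch of that pipeline, assuming a file input (the path, the dissect field names, and the exact line layout are guesses, since the log itself was only attached):

input {
  file {
    path => "/var/log/example/requests.log"   # hypothetical path
    codec => multiline {
      pattern => "^{"    # a line that starts the JSON object...
      negate => false
      what => "next"     # ...is joined to the line that follows it
    }
  }
}

filter {
  # The multiline codec joins the two lines with a literal newline, so the
  # dissect mapping splits on that newline. Field names here are guesses.
  dissect {
    mapping => {
      "message" => "%{json_part}
%{log_timestamp} %{loglevel} %{rest}"
    }
  }
  json {
    source => "json_part"
  }
}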


(system) #18

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.