Make Grok filter fields aggregatable


#1

Hello,

I have successfully filtered out the fields I need from my logs using grok. The data in these fields, as shown in stdout, is correct; however, the fields (errorcode and loglevel) are not aggregatable in Kibana. How do I change this?

Config file:
input {
  file {
    # on Windows, use forward slashes in the path
    path => "C:/Users/bob/Downloads/data/error_test.txt"
    start_position => "beginning"
    # "/dev/null" does not exist on Windows; use "NUL" to disable the sincedb
    sincedb_path => "NUL"
  }
}

filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} %{GREEDYDATA:messsage} %{WORD:YO}%{GREEDYDATA:messsages} %{INT:errorcode}\r"]
  }
  date {
    match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "error_test"
  }
  stdout { codec => rubydebug }
}


#2

I have tried using:

mutate { convert => ["errorcode", "integer"] }

but it does not work either.
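
For reference, this is roughly how my filter block looks with the convert in place (a sketch; the mutate sits after grok so the errorcode capture already exists):

filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:loglevel} %{GREEDYDATA:messsage} %{WORD:YO}%{GREEDYDATA:messsages} %{INT:errorcode}\r"]
  }
  date {
    match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS", "ISO8601"]
  }
  # convert the grok-captured string to an integer before sending to Elasticsearch
  mutate {
    convert => ["errorcode", "integer"]
  }
}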


(Magnus Bäck) #3

Logstash's default index template (which only applies to indexes matching logstash-*, so not to your error_test index) provides .keyword subfields that can be used for aggregations. Alternatively, define your own index template where you force fields to be of a certain type.
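
For example, a minimal sketch of such a template, assuming the error_test index name from your config and the legacy _template API as used from Kibana Dev Tools (on Elasticsearch 6.x the properties would additionally be nested under a mapping type):

PUT _template/error_test
{
  "index_patterns": ["error_test*"],
  "mappings": {
    "properties": {
      "loglevel":  { "type": "keyword" },
      "errorcode": { "type": "integer" }
    }
  }
}

With this in place, any newly created index matching error_test* maps loglevel as keyword and errorcode as integer, both of which are aggregatable in Kibana.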

I have tried using:
mutate { convert => ["errorcode", "integer"] }

This'll convert the type of the field in each document, but to change the field's mapping in an existing index you need to reindex your data. If you're just testing things out you can simply delete your index.
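
If you go that route, deleting the index is a one-liner in Kibana Dev Tools (assuming the index name from your config); the index is re-created, picking up any matching template, the next time Logstash writes to it:

DELETE /error_test

After re-ingesting, refresh the index pattern's field list in Kibana so it picks up the new mapping.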


#4

Thank you for the response. I tried refreshing the field list under the Management tab in Kibana, and that seems to have solved the issue.


(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.