Mismatch in field type

(Nishanth Raj) #1

Hi Team,
I am new to the ELK stack. I have finished setting up all the ELK Stack components (Filebeat, Logstash, Elasticsearch, and Kibana) on a single server, and all my trial runs were successful. However, when feeding Apache access logs through the simple grok pattern COMBINEDAPACHELOG, the field type of response and bytes is shown as "string" in Kibana. I have tried changing it through the mapping API, but was unable to do so. Any quick help here would be of great value. Thanks

Actual log format:

0 - - [07/Mar/2004:16:05:49 -0800] "GET /twiki/bin/edit/Main/Double_bounce_sender?topicparent=Main.ConfigurationVariables HTTP/1.1" 401 12846 "referrerstring" "browserinfo" "repeat of som junk details"

My grok pattern:

grok { match => { "message" => "%{CISCO_REASON:ignore1} %{COMBINEDAPACHELOG} %{GREEDYDATA:ignore2}" } }


COMBINEDAPACHELOG does parse bytes as a string. You could use a mutate+convert filter to make it an integer. However, that may cause a mapping conflict until the daily index rolls over.
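As a sketch, the conversion could look like this (assuming the default field names `response` and `bytes` emitted by the COMBINEDAPACHELOG pattern):

```
filter {
  mutate {
    # "response" and "bytes" are the field names produced by COMBINEDAPACHELOG
    convert => {
      "response" => "integer"
      "bytes"    => "integer"
    }
  }
}
```

Note that this changes the type of new events only; documents already indexed with a string mapping keep that mapping until a new daily index is created.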

(Nishanth Raj) #3

Thanks, I am able to convert the response and bytes fields into numbers through a mutate filter. However, the timestamp field in the log is still treated as a string, and mutate does not allow converting it into a date field.


You would use a date filter for that.
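For an Apache access-log timestamp such as `07/Mar/2004:16:05:49 -0800`, a date filter along these lines should work (assuming the field captured by COMBINEDAPACHELOG is named `timestamp`):

```
filter {
  date {
    # Apache combined-log format: dd/MMM/yyyy:HH:mm:ss Z
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    # By default the parsed value is written to the @timestamp field
  }
}
```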

(Nishanth Raj) #5

Thanks!!! But I have a very strange issue now which is not letting me create indexes. The moment I start my Elasticsearch service, it reports that it is recovering indexes, logs the messages below, and starts creating unwanted indexes for every single day in the month (two months' worth):

[2018-07-16T23:01:50,456][INFO ][o.e.c.m.MetaDataCreateIndexService] [Demonstrate] [logstash-2018.06.20] creating index, cause [auto(bulk api)], templates [logstash], shards [5]/[1], mappings [defult]
[2018-07-16T23:01:50,545][INFO ][o.e.c.m.MetaDataMappingService] [Demonstrate] [logstash-2018.06.20/48EadkjOR0iGTHTIvk9LLw] create_mapping [doc]

I went through some man pages and tried deleting those indexes, but the problem seems to persist. Can you help me with this too?


I am not sure I understood that, but if you use a date filter to set the @timestamp field then, with the defaults, an elasticsearch output will create indexes that match the @timestamp field. So if you index data from the last couple of months, you will get an index for every day in that period.
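This is the default behavior of the elasticsearch output; a minimal sketch of the relevant (implicit) setting:

```
output {
  elasticsearch {
    # This is the default index name. The %{+YYYY.MM.dd} part is expanded
    # from each event's @timestamp, not from the time the event was
    # processed, so old events create old daily indexes.
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```

If per-day indexes for historical data are not wanted, the `index` option can be changed to a fixed name instead.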

(Nishanth Raj) #7

Alright!! But I can still see my timestamp and clientip fields listed as strings in Kibana...

(system) #8

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.