In some of my logs, I get a message body that is just a large JSON object. In some of those JSON objects, I have a field coming in that has data in the format YYYY/MM/DD.
E.g.
"param_TRANSACTION_STRINGTEST"=>"2017/01/27"
The json filter seems to be parsing it just fine.
This is not the field I am using for the date of the log, so I don't really care whether or not it's indexed as a date type. However, Elasticsearch seems determined to index it as a date, and since 'YYYY/MM/DD' is apparently an invalid date format, I get a Mapper Parsing Exception:
"error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [param_TRANSACTION_DATE]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Invalid format: \"2017/01/27\" is malformed at \"/01/27\""}}}},
Currently the only way I've been able to solve this is by adding an explicit date conversion block for this field - forcing it to become a date and telling Logstash what format it's in.
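For reference, that workaround looks roughly like this in my Logstash filter (a sketch - field name taken from the error above, and the exact options may differ in your config):

```
filter {
  date {
    # Parse the slash-separated string so Elasticsearch gets a format it accepts.
    match  => ["param_TRANSACTION_DATE", "yyyy/MM/dd"]
    # Write the parsed date back onto the same field.
    target => "param_TRANSACTION_DATE"
  }
}
```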
However, this is impractical: the fields I'm receiving in the JSON object are not a static set, and I could start getting a new field with a new name containing another date-like string at any time.
I can't figure out what is causing Elasticsearch to be so dead-set on reading this in as a date, so I don't know how to change the default behavior.
Just to clarify - there are a few other threads out there I've found where the default response is 'use a date filter to convert it to a date'. Yes, I understand that is a solution, but please keep in mind that's not my issue here - this isn't my primary date field. I want this field to be a string. In fact, I'd prefer that any date-like fields coming through in the JSON object be indexed as strings; later, if I want to use one of them as a date object, that's when I'd go back and make a one-off case of converting it.
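To be concrete, the mapping I'd want Elasticsearch to generate for such a field is just a plain string - something like this (a sketch; I'm assuming the newer text/keyword style of string mapping, so adjust for your ES version):

```
"param_TRANSACTION_STRINGTEST": {
  "type": "text",
  "fields": {
    "keyword": { "type": "keyword", "ignore_above": 256 }
  }
}
```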