Watcher Metadata Error for Date Calculations



I'm working on a team that has set up an Elasticsearch cluster (6.3.2), into which we've ingested a set of files containing old Bro data. We've been testing Watcher against this data, using the following metadata block to simulate watches over the old time range.

"metadata": {
  "min_error_threshold": 100,
  "time_interval": "5m",
  "num_time_intervals": 12,
  "time_period": "1h",
  "time_zone": "-10:00",
  "time_interval_start": "now-6y-7M-27d-20h-35m",
  "time_period_start": "now-6y-7M-27d-21h-30m",
  "time_period_end": "now-6y-7M-27d-20h-30m"
}
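For reference, a watch normally consumes metadata values like these through Mustache templating in its search input. A minimal sketch of how our block is wired in (the watch id, index pattern, and timestamp field here are placeholders, not our actual watch):

```
PUT _xpack/watcher/watch/bro_error_watch
{
  "trigger": { "schedule": { "interval": "5m" } },
  "input": {
    "search": {
      "request": {
        "indices": [ "bro-*" ],
        "body": {
          "query": {
            "range": {
              "@timestamp": {
                "gte": "{{ctx.metadata.time_period_start}}",
                "lte": "{{ctx.metadata.time_period_end}}",
                "time_zone": "{{ctx.metadata.time_zone}}"
              }
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gte": 100 } }
  },
  "metadata": {
    "min_error_threshold": 100,
    "time_zone": "-10:00",
    "time_period_start": "now-6y-7M-27d-21h-30m",
    "time_period_end": "now-6y-7M-27d-20h-30m"
  }
}
```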

Recently, we added a second Kibana instance to the cluster (both run in Docker, same version). When we run the watch in the Dev Tools Console of the second Kibana instance, we receive the following response (the same error occurs for time_interval_start, time_period_start, and time_period_end):
"error": {
  "root_cause": [
    {
      "type": "mapper_parsing_exception",
      "reason": "failed to parse [metadata.time_interval_start]"
    }
  ],
  "type": "mapper_parsing_exception",
  "reason": "failed to parse [metadata.time_interval_start]",
  "caused_by": {
    "type": "illegal_argument_exception",
    "reason": "Invalid format: \"now-6y-7M-27d-20h-35m\""
  }
},
"status": 400

This query still works fine in our original Kibana instance. Looking at the Kibana logs for the second instance, I see the following error which looks related:
Deprecation warning: value provided is not in a recognized RFC2822 or ISO format. moment construction falls back to js Date(), which is not reliable across all browsers and versions. Non RFC2822/ISO date formats are discouraged and will be removed in an upcoming major release. Please refer to for more info.
[0] _isAMomentObject: true, _isUTC: false, _useUTC: false, _l: undefined, _i: Sat Mar 17 2012 12:00:00 GMT-1000, _f: undefined, _strict: undefined, _locale: [object Object]
at Function.createFromInputFallback (/usr/share/kibana/node_modules/moment/moment.js:324:94)
at configFromString (/usr/share/kibana/node_modules/moment/moment.js:2366:11)
at configFromInput (/usr/share/kibana/node_modules/moment/moment.js:2592:9)
at prepareConfig (/usr/share/kibana/node_modules/moment/moment.js:2575:9)
at createFromConfig (/usr/share/kibana/node_modules/moment/moment.js:2542:40)
at createLocalOrUTC (/usr/share/kibana/node_modules/moment/moment.js:2629:12)
at createLocal (/usr/share/kibana/node_modules/moment/moment.js:2633:12)
at hooks (/usr/share/kibana/node_modules/moment/moment.js:16:25)
at parse (/usr/share/kibana/src/core_plugins/timelion/server/lib/date_math.js:50:33)
at validateTime (/usr/share/kibana/src/core_plugins/timelion/server/handlers/lib/validate_time.js:19:40)
at Object.processRequest (/usr/share/kibana/src/core_plugins/timelion/server/handlers/chain_runner.js:193:33)
at handler (/usr/share/kibana/src/core_plugins/timelion/server/routes/run.js:22:64)

Has anybody run into the same issue? Or does anyone know of any updates that may have caused time values like those in our metadata block to no longer parse correctly?

(kulkarni) #2

@spinscale - do you think this is Watcher related? Can you please help in that case, or redirect appropriately?


(Alexander Reelsen) #3


can you share the output of

GET .watches/_mapping

My suspicion here is the following: the field metadata.time_interval_start is mapped as a date, and thus parsing fails when you try to index a string into it.



(original poster) #4

Thanks Alex. I ran the GET request against each Kibana instance and confirmed your suspicion. Our first instance has the time_interval_start field mapped as type "text", while our second instance has it mapped as type "date".
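Abridged, the relevant fragments of the two _mapping responses look roughly like this (surrounding mapping omitted):

```
GET .watches/_mapping

# First (working) instance -- dynamic mapping chose a string type:
"metadata": {
  "properties": {
    "time_interval_start": { "type": "text" }
  }
}

# Second (failing) instance -- dynamic mapping chose a date:
"metadata": {
  "properties": {
    "time_interval_start": { "type": "date" }
  }
}
```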

The same watch was entered for both instances; however, for the first instance, the DevTools UI was used to create the watch, while for the second instance, the Watch UI (through Management > Watcher) was used to create the watch. Is this known behavior for the mappings to act differently between creation methods?

(Alexander Reelsen) #5

So the metadata fields are mapped dynamically, which means the value in the first document that gets indexed determines the type of the field. On one cluster that first value seems to have been a valid date, whereas on the other it was a plain string.
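This dynamic-mapping behavior is easy to reproduce on a scratch index (index and field names here are made up for illustration; 6.x single-type `doc` syntax):

```
# The first value indexed into a fresh index fixes the field's type.
PUT scratch/doc/1
{ "ts": "2012-03-17T12:00:00" }      # date detection maps ts as "date"

GET scratch/_mapping                 # ts now has "type": "date"

# A date-math expression is not a parseable date literal, so this
# fails with the same mapper_parsing_exception seen above:
PUT scratch/doc/2
{ "ts": "now-6y-7M-27d-20h-35m" }
```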

You cannot change this type without deleting the index. So either delete it and reindex with an explicit mapping for that field, or use a different field name.
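A sketch of the delete-and-recreate route (the backup index name and watch id are illustrative; .watches is a system index managed by Watcher, so re-creating the watches through the API after the bad mapping is gone is safer than editing the index by hand):

```
# 1. Copy the current watches somewhere safe.
POST _reindex
{
  "source": { "index": ".watches" },
  "dest":   { "index": "watches-backup" }
}

# 2. Delete the offending watch so the dynamically created
#    date mapping can be discarded along with the index.
DELETE _xpack/watcher/watch/bro_error_watch

# 3. Re-create the watch. To avoid a repeat, either ensure the first
#    indexed value is unambiguously a string, or rename the field
#    (e.g. time_interval_start_expr) so date detection never applies.
```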

hope this helps!