```json
{
  "error": {
    "root_cause": [
      {
        "type": "parse_exception",
        "reason": "could not read search request. value [SCAN] is not supported for field [search_type]"
      }
    ],
    "type": "parse_exception",
    "reason": "could not parse [search] input for watch [ito_dcs]. failed to parse [request]",
    "caused_by": {
      "type": "parse_exception",
      "reason": "could not read search request. value [SCAN] is not supported for field [search_type]"
    }
  },
  "status": 400
}
```
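For reference, a watch input that triggers this error would look something like the following sketch; the index pattern and query are placeholders, and only the `search_type` value matters here:

```json
PUT _watcher/watch/ito_dcs
{
  "trigger": { "schedule": { "interval": "1m" } },
  "input": {
    "search": {
      "request": {
        "indices": [ "logs-*" ],
        "search_type": "scan",
        "body": {
          "query": { "match": { "severity": "FATAL" } }
        }
      }
    }
  }
}
```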
We are planning to use Watcher for forwarding serious errors from our logs to an external monitoring system. Basically I could set the trigger interval short and the search size large enough, but the scan approach would fit our needs best.
Our aim is to forward all fatal errors (ITOs) to an external monitoring system through a log file. We would like to trigger the watch every X minutes and collect and forward all ITOs.
Naturally the count can vary. I would like to set some reasonable value like size=500 and would expect Watcher, if there are e.g. 501 ITO errors, to forward the last one too.
Thanks for the link; I didn't know it was going to be deprecated. But unfortunately, if I am not mistaken, it seems one will have to develop some wrapper around Watcher and do the scrolling themselves.
But anyway, with size 500 and the watch triggering every minute, we will hopefully be safe enough with the standard query_then_fetch.
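For what it's worth, a watch along those lines might look like this sketch; the index pattern, query fields, and the logging action are assumptions, while the 1m interval, size 500, and query_then_fetch reflect the plan above:

```json
PUT _watcher/watch/ito_dcs
{
  "trigger": { "schedule": { "interval": "1m" } },
  "input": {
    "search": {
      "request": {
        "indices": [ "logs-*" ],
        "search_type": "query_then_fetch",
        "body": {
          "size": 500,
          "query": {
            "bool": {
              "must": [
                { "match": { "severity": "FATAL" } },
                { "range": { "@timestamp": { "gte": "now-1m" } } }
              ]
            }
          }
        }
      }
    }
  },
  "condition": { "compare": { "ctx.payload.hits.total": { "gt": 0 } } },
  "actions": {
    "forward_itos": {
      "logging": { "text": "{{ctx.payload.hits.total}} ITO errors found" }
    }
  }
}
```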
FYI: just specifying the search type scan does not mean the search executed by Watcher automatically fetches all the results. You could, however, hand the scroll ID over to another process, which then executes the real scan/scroll search, if you need to ensure you are catching all the results.
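In case it helps, the plain scroll API (which replaces scan) looks roughly like this; the index pattern, query, and scroll timeout are placeholders:

```json
POST /logs-*/_search?scroll=1m
{
  "size": 500,
  "query": { "match": { "severity": "FATAL" } }
}

POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<_scroll_id from the previous response>"
}
```

The second request is repeated with each new `_scroll_id` until a page comes back empty.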
Apart from that, you can still check whether hits.total is greater than 500 and inform the monitoring component to fetch the data by itself, if you don't feel safe enough that way. Last but not least, increasing the size might also be a valid option in your case.
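As a sketch of that check, a compare condition on hits.total could gate an action that tells the monitoring component to fetch the data itself (the 500 threshold mirrors the size above):

```json
"condition": {
  "compare": {
    "ctx.payload.hits.total": { "gt": 500 }
  }
}
```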
Aaah, I believed in a magic scroll inside Watcher :). Sad story, but I understand why it is done this way.
Unfortunately, the monitoring component will not be able to call our log collector; we have to push all the data through files.
Big thanks for all the info here. For now we will settle on some big enough size value and trigger often; if that is not enough, we will try to scroll somehow.