Hi,
I am seeing
not starting watcher, upgrade API run required: .watches[false]
in my logs on 7.10.
This implies that the issue occurs between 5.x and 6.x, but this cluster was built this year and has never run 5.x or 6.x.
How do I get out of this?
Can you please post more of the log?
There is not much more of it, but here it is. I just splatted the hostname:
[2020-11-23T23:21:20,978][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-23T23:21:21,442][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-23T23:21:29,616][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-23T23:22:14,502][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-23T23:22:21,034][INFO ][o.e.m.j.JvmGcMonitorService] [xxxxxxxxxx] [gc][65637] overhead, spent [331ms] collecting in the last [1s]
[2020-11-23T23:23:14,978][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-23T23:23:26,619][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-23T23:24:12,068][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-23T23:24:17,747][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-23T23:25:20,021][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
This has magically stopped now. It stopped suddenly, like this:
[2020-11-26T05:19:36,007][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-26T05:20:00,497][INFO ][o.e.m.j.JvmGcMonitorService] [xxxxxxxxxx] [gc][259561] overhead, spent [491ms] collecting in the last [1s]
[2020-11-26T05:20:58,446][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-26T05:20:58,665][WARN ][o.e.x.w.WatcherService ] [xxxxxxxxxx] not starting watcher, upgrade API run required: .watches[false], .triggered_watches[true]
[2020-11-26T05:22:05,547][INFO ][o.e.m.j.JvmGcMonitorService] [xxxxxxxxxx] [gc][259686] overhead, spent [270ms] collecting in the last [1s]
[2020-11-26T05:24:34,678][WARN ][o.e.m.j.JvmGcMonitorService] [xxxxxxxxxx] [gc][259835] overhead, spent [520ms] collecting in the last [1s]
[2020-11-26T05:30:50,359][INFO ][o.e.m.j.JvmGcMonitorService] [xxxxxxxxxx] [gc][260210] overhead, spent [510ms] collecting in the last [1s]
[2020-11-26T05:30:59,362][INFO ][o.e.m.j.JvmGcMonitorService] [xxxxxxxxxx] [gc][260219] overhead, spent [380ms] collecting in the last [1s]
[2020-11-26T05:35:29,254][INFO ][o.e.m.j.JvmGcMonitorService] [xxxxxxxxxx] [gc][260488] overhead, spent [478ms] collecting in the last [1.2s]
[2020-11-26T05:43:13,890][INFO ][o.e.m.j.JvmGcMonitorServ
Maybe it's only complaining when watches are triggered.
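One way to confirm whether Watcher actually started on each node, rather than inferring it from the absence of the warning, is the Watcher stats API (a standard 7.x endpoint; this is a suggested check, not output from this cluster):

GET _watcher/stats

The response reports a watcher_state per node (stopped, starting, started, or stopping); it should read started once the warning is gone.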
What's the output from _cat/indices/.watch*?v?
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .watches T2c6tGtDSPib7PwxNoXD3A 1 1 6 0 516.8kb 258.4kb
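For context, and hedged because I can't verify it against this cluster: this warning comes from a startup check where Watcher verifies that its internal indices carry the setting index.format: 6, and the [false]/[true] in the message is that check's result per index. The "upgrade API" it refers to was the 6.x migration upgrade API, which as far as I know no longer exists in 7.x, making the message doubly confusing here. You can inspect the setting directly:

GET .watches/_settings?filter_path=*.settings.index.format
GET .triggered_watches/_settings?filter_path=*.settings.index.format

If index.format is missing or not 6 on either index, that would explain the warning even on a cluster that has only ever run 7.x.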
I'm also running into this issue on a cluster that's never been anything but 7.x (upgraded from 7.9 to 7.10), but my issue didn't resolve itself. What should I do to debug this?
The output from _cat/indices/.watch*?v is:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .watches 46BRfq43TRah1qPTlZgbQg 1 1 0 0 39.2kb 19.6kb
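Since this .watches index is empty (docs.count is 0), one possible way out, offered as a sketch rather than a verified procedure, is to stop Watcher, delete the empty index, and let Watcher recreate it with the expected settings on the next start. Snapshot first, and if the index ever holds watches you care about, reindex them out before deleting:

POST _watcher/_stop
DELETE .watches
POST _watcher/_start

All three are standard 7.x endpoints; the assumption is that Watcher recreates .watches with index.format: 6 on startup, which is the part I cannot guarantee for your cluster.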