Errors after updating to 6.2.4

As suggested by the documentation, I've done a rolling upgrade from 5.2 to 5.6.9, and from there to 6.2.4. Only Elasticsearch is at this version so far; Kibana is still 5.6.9 and no Logstash is running at the moment.

Even though it seems to be running fine (new documents get indexed just fine), I get a huge amount of error messages in my Elasticsearch logs. I won't copy and paste walls of text here, but to give you some examples:

[2018-06-13T14:57:35,213][ERROR][o.e.x.m.e.l.LocalExporter] failed to set monitoring watch [ofkoEHUfTyOxi1yJz8Ylfg_xpack_license_expiration]
java.lang.IllegalArgumentException: cannot execute scripts using [xpack_executable] context

[2018-06-13T14:57:35,950][ERROR][o.e.x.m.e.l.LocalExporter] failed to set monitoring watch [ofkoEHUfTyOxi1yJz8Ylfg_kibana_version_mismatch]
java.lang.IllegalArgumentException: cannot execute scripts using [xpack_executable] context

[2018-06-13T14:57:35,953][ERROR][o.e.x.m.e.l.LocalExporter] failed to set monitoring watch [ofkoEHUfTyOxi1yJz8Ylfg_logstash_version_mismatch]
java.lang.IllegalArgumentException: cannot execute scripts using [xpack_executable] context

[2018-06-13T14:57:35,949][ERROR][o.e.x.m.e.l.LocalExporter] failed to set monitoring pipeline [xpack_monitoring_2]
org.elasticsearch.ElasticsearchException: java.lang.IllegalArgumentException: cannot execute scripts using [ingest] context

And so on.

Any help is appreciated.

If you need any more info, please, let me know.

As an update:

I have found out that I get these kinds of errors if I have the following setting in my elasticsearch.yml:

xpack.monitoring.enabled: true

If I have it set to false, then everything starts just fine.

So this leads me to believe that the issue is related to X-pack and monitoring.
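For now the only workaround I have is to switch monitoring off again, i.e. this in elasticsearch.yml (which obviously just hides the problem, since no monitoring data gets collected):

xpack.monitoring.enabled: false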

There's this other setting related to watches:

xpack.watcher.enabled: true

This also throws an exception, but at least it doesn't flood the logs like the monitoring errors. Example:

[2018-06-15T15:03:50,773][ERROR][o.e.x.w.WatcherService ] [elastic-host] couldn't load watch [ofkoEHUfTyOxi1yJz8Ylfg_kibana_version_mismatch], ignoring it...
java.lang.IllegalArgumentException: cannot execute scripts using [xpack_executable] context
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:305) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.xpack.watcher.condition.ScriptCondition.<init>(ScriptCondition.java:57) ~[?:?]
at org.elasticsearch.xpack.watcher.condition.ScriptCondition.parse(ScriptCondition.java:67) ~[?:?]
at org.elasticsearch.xpack.watcher.Watcher.lambda$createComponents$2(Watcher.java:323) ~[?:?]
at org.elasticsearch.xpack.core.watcher.condition.ConditionRegistry.parseExecutable(ConditionRegistry.java:68) ~[?:?]
at org.elasticsearch.xpack.watcher.watch.WatchParser.parse(WatchParser.java:157) ~[?:?]
at org.elasticsearch.xpack.watcher.watch.WatchParser.parse(WatchParser.java:123) ~[?:?]
at org.elasticsearch.xpack.watcher.watch.WatchParser.parse(WatchParser.java:88) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherService.loadWatches(WatcherService.java:289) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherService.start(WatcherService.java:142) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.start(WatcherLifeCycleService.java:118) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.lambda$clusterChanged$3(WatcherLifeCycleService.java:174) ~[?:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:573) [elasticsearch-6.2.4.jar:6.2.4]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2018-06-15T15:03:50,778][ERROR][o.e.x.w.WatcherService ] [elastic-host] couldn't load watch [ofkoEHUfTyOxi1yJz8Ylfg_logstash_version_mismatch], ignoring it...
java.lang.IllegalArgumentException: cannot execute scripts using [xpack_executable] context
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:305) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.xpack.watcher.condition.ScriptCondition.<init>(ScriptCondition.java:57) ~[?:?]
at org.elasticsearch.xpack.watcher.condition.ScriptCondition.parse(ScriptCondition.java:67) ~[?:?]
at org.elasticsearch.xpack.watcher.Watcher.lambda$createComponents$2(Watcher.java:323) ~[?:?]
at org.elasticsearch.xpack.core.watcher.condition.ConditionRegistry.parseExecutable(ConditionRegistry.java:68) ~[?:?]
at org.elasticsearch.xpack.watcher.watch.WatchParser.parse(WatchParser.java:157) ~[?:?]
at org.elasticsearch.xpack.watcher.watch.WatchParser.parse(WatchParser.java:123) ~[?:?]
at org.elasticsearch.xpack.watcher.watch.WatchParser.parse(WatchParser.java:88) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherService.loadWatches(WatcherService.java:289) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherService.start(WatcherService.java:142) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.start(WatcherLifeCycleService.java:118) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.lambda$clusterChanged$3(WatcherLifeCycleService.java:174) ~[?:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:573) [elasticsearch-6.2.4.jar:6.2.4]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
[2018-06-15T15:03:50,819][ERROR][o.e.x.w.WatcherService ] [elastic-host] couldn't load watch [ofkoEHUfTyOxi1yJz8Ylfg_elasticsearch_version_mismatch], ignoring it...
java.lang.IllegalArgumentException: cannot execute scripts using [xpack_executable] context
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:305) ~[elasticsearch-6.2.4.jar:6.2.4]
at org.elasticsearch.xpack.watcher.condition.ScriptCondition.<init>(ScriptCondition.java:57) ~[?:?]
at org.elasticsearch.xpack.watcher.condition.ScriptCondition.parse(ScriptCondition.java:67) ~[?:?]
at org.elasticsearch.xpack.watcher.Watcher.lambda$createComponents$2(Watcher.java:323) ~[?:?]
at org.elasticsearch.xpack.core.watcher.condition.ConditionRegistry.parseExecutable(ConditionRegistry.java:68) ~[?:?]
at org.elasticsearch.xpack.watcher.watch.WatchParser.parse(WatchParser.java:157) ~[?:?]
at org.elasticsearch.xpack.watcher.watch.WatchParser.parse(WatchParser.java:123) ~[?:?]
at org.elasticsearch.xpack.watcher.watch.WatchParser.parse(WatchParser.java:88) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherService.loadWatches(WatcherService.java:289) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherService.start(WatcherService.java:142) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.start(WatcherLifeCycleService.java:118) ~[?:?]
at org.elasticsearch.xpack.watcher.WatcherLifeCycleService.lambda$clusterChanged$3(WatcherLifeCycleService.java:174) ~[?:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:573) [elasticsearch-6.2.4.jar:6.2.4]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]

There's one more block similar to this but I'm over the character limit.

In addition, with monitoring set to true, the other node cannot reach the master and floods its log with messages like these:

[2018-06-15T15:29:26,496][INFO ][o.e.x.m.e.l.LocalExporter] waiting for elected master node [{elastic-host}{52xjqnOdRVSli5XmyrCF1g}{dcGxnI4eRb-2ziF2A6_HtQ}{elastic-host}{10.10.10.10:9300}{ml.machine_memory=8202006528, ml.max_open_jobs=20, ml.enabled=true}] to setup local exporter [default_local] (does it have x-pack installed?)
[2018-06-15T15:29:28,534][INFO ][o.e.x.m.e.l.LocalExporter] waiting for elected master node [{elastic-host}{52xjqnOdRVSli5XmyrCF1g}{dcGxnI4eRb-2ziF2A6_HtQ}{elastic-host}{10.10.10.10:9300}{ml.machine_memory=8202006528, ml.max_open_jobs=20, ml.enabled=true}] to setup local exporter [default_local] (does it have x-pack installed?)
[2018-06-15T15:29:31,943][INFO ][o.e.x.m.e.l.LocalExporter] waiting for elected master node [{elastic-host}{52xjqnOdRVSli5XmyrCF1g}{dcGxnI4eRb-2ziF2A6_HtQ}{elastic-host}{10.10.10.10:9300}{ml.machine_memory=8202006528, ml.max_open_jobs=20, ml.enabled=true}] to setup local exporter [default_local] (does it have x-pack installed?)
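For reference, a quick way to see which node is currently the elected master is the standard cat nodes API (Console syntax; the master column marks the elected master with a *):

GET _cat/nodes?v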

Any particular reason why you don't upgrade your Kibana to match the Elasticsearch version? Kibana 5.6.9 is not compatible with Elasticsearch 6.2.4; see Support Matrix | Elastic.
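A quick way to confirm what each node is actually running is to hit the root endpoint (Console syntax in Kibana Dev Tools, or plain curl against the node):

GET /

The response contains version.number; every Elasticsearch node should report 6.2.4, and Kibana's own version is shown on its status page.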

Thank you for the reply. I'm sorry, I forgot to mention that in the meantime I have upgraded Kibana to 6.2.4, too.

Now I get the following:

[2018-06-15T15:51:40,793][ERROR][o.e.x.m.e.l.LocalExporter] failed to set monitoring pipeline [xpack_monitoring_2]
org.elasticsearch.ElasticsearchException: java.lang.IllegalArgumentException: cannot execute scripts using [ingest] context
[2018-06-15T15:51:51,236][ERROR][o.e.x.m.e.l.LocalExporter] failed to set monitoring pipeline [xpack_monitoring_2]
org.elasticsearch.ElasticsearchException: java.lang.IllegalArgumentException: cannot execute scripts using [ingest] context

And so on.

So, as you suggested, I believe most errors were fixed by the upgrade, but this one still persists.
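If it helps with debugging: the pipeline the error refers to can be looked up directly (Console syntax), though I assume in my case it simply never gets created, because the exporter fails before it can install it:

GET _ingest/pipeline/xpack_monitoring_2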

Okay. Now I see that there's an overall problem with monitoring across my whole stack. This is very annoying, to say the least. Everything was working perfectly back in 5.2.0. I upgraded to 5.6.9 first because I expected the upgrade assistant to tell me about these problems in my configuration, but oh boy, was I mistaken.

But back to the matter at hand: based on this post, [Ingest Node] pipeline with id [x] does not exist (solved) - #15 by mbje-saxo, I had the feeling that maybe if I set up a Logstash pipeline my issue would go away.
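For completeness, besides the pipeline itself, the monitoring-related part of my logstash.yml looks roughly like this (a sketch only: the URL matches the one in the error below, but the username and password are placeholders rather than my real values):

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: "https://elastic-host:9200"
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "<password>"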

But it didn't. I get errors regarding monitoring in the Logstash logs, too, like the following:

[2018-06-19T09:21:38,157][ERROR][logstash.inputs.metrics ] Failed to create monitoring event {:message=>"undefined method ephemeral_id' for nil:NilClass", :error=>"NoMethodError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics/stats_event_factory.rb:124:in fetch_node_stats'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics/stats_event_factory.rb:29:in make'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics.rb:126:in update_stats'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics.rb:117:in block in update'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/license_checker/licensed.rb:76:in with_license_check'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics.rb:116:in update'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics.rb:83:in block in configure_snapshot_poller'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/safe_task_executor.rb:24:in block in execute'", "com/concurrent_ruby/ext/SynchronizationLibrary.java:222:in synchronize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/safe_task_executor.rb:19:in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/timer_task.rb:309:in execute_task'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/safe_task_executor.rb:24:in block in execute'", "com/concurrent_ruby/ext/SynchronizationLibrary.java:222:in synchronize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/safe_task_executor.rb:19:in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/ivar.rb:170:in safe_execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/scheduled_task.rb:285:in process_task'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/timer_set.rb:168:in block in process_tasks'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/java_executor_service.rb:94:in `run'"]}

[2018-06-19T09:30:02,141][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>500, :url=>"https://elastic-host:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", :body=>"{"took":60009,"errors":true,"error":{"type":"export_exception","reason":"Exception when closing export bulk","caused_by":{"type":"export_exception","reason":"failed to flush export bulks","caused_by":{"type":"export_exception","reason":"bulk [default_local] reports failures when exporting documents","exceptions":[{"type":"export_exception","reason":"UnavailableShardsException[[.monitoring-logstash-6-2018.06.19][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-logstash-6-2018.06.19][0]] containing [2] requests]]","caused_by":{"type":"unavailable_shards_exception","reason":"[.monitoring-logstash-6-2018.06.19][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-logstash-6-2018.06.19][0]] containing [2] requests]"}},{"type":"export_exception","reason":"UnavailableShardsException[[.monitoring-logstash-6-2018.06.19][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-logstash-6-2018.06.19][0]] containing [2] requests]]","caused_by":{"type":"unavailable_shards_exception","reason":"[.monitoring-logstash-6-2018.06.19][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-logstash-6-2018.06.19][0]] containing [2] requests]"}}]}}}}"}

Sometimes I can get something to show up in Kibana for Kibana itself and for one of the two Elasticsearch nodes, but most of the time I just stare at an empty page telling me there's no monitoring data.

I then tried to delete all the monitoring indices, in case the error was caused by some versioning issue:

DELETE .monitoring-*
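To confirm the delete actually went through, the monitoring indices can be listed again (Console syntax):

GET _cat/indices/.monitoring-*?v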

But all I get is the following error in the red bar in Kibana:

Monitoring: Error 503 Service Unavailable: [search_phase_execution_exception] all shards failed: Check the Elasticsearch Monitoring cluster network connection or the load level of the nodes.

Console log:

POST https://tasm-cfn-bar.khb.hu/api/monitoring/v1/clusters 503 (Service Unavailable) vendors.bundle.js?v=16627:116

Elasticsearch log (on the master node):

[2018-06-19T10:29:54,457][DEBUG][o.e.a.s.TransportSearchAction] [elastic-host] All shards failed for phase: [query]
[2018-06-19T10:29:54,457][WARN ][r.suppressed ] path: /*%3A.monitoring-logstash-2-*%2C*%3A.monitoring-logstash-6-*%2C.monitoring-logstash-2-*%2C.monitoring-logstash-6-*/_search, params: {size=0, ignore_unavailable=true, index=*:.monitoring-logstash-2-*,*:.monitoring-logstash-6-*,.monitoring-logstash-2-*,.monitoring-logstash-6-*}
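Given the "primary shard is not active" errors from Logstash above and the "all shards failed" here, I suspect the next thing to check is whether the .monitoring-* shards are getting allocated at all. Both of these are standard APIs (Console syntax):

GET _cluster/health?level=indices
GET _cat/shards/.monitoring-*?v

The second one should show whether the .monitoring-* primaries are stuck in UNASSIGNED.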
