Errors after updating to 6.2.4

Okay, now I see there's a broader problem with monitoring across my whole stack, which is very annoying to say the least. Everything worked perfectly back on 5.2.0. I upgraded to 5.6.9 first because I expected the Upgrade Assistant to flag these configuration problems before the jump to 6.x, but oh boy, was I mistaken.

But back to the matter at hand: based on this post, [Ingest Node] pipeline with id [x] does not exist (solved) - #15 by mbje-saxo, I had the feeling that setting up a Logstash pipeline might make my issue go away.
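Concretely, what I set up is about as minimal as it gets (the beats input stands in for my actual one; host, port, and credentials are placeholders):

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts    => ["https://elastic-host:9200"]
    user     => "elastic"
    password => "<redacted>"
    ssl      => true
    cacert   => "/etc/logstash/ca.pem"
  }
}
```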

But it didn't, and I'm getting monitoring-related errors in the Logstash logs too, like this one:

```
[2018-06-19T09:21:38,157][ERROR][logstash.inputs.metrics ] Failed to create monitoring event {:message=>"undefined method `ephemeral_id' for nil:NilClass", :error=>"NoMethodError", :backtrace=>[
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics/stats_event_factory.rb:124:in `fetch_node_stats'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics/stats_event_factory.rb:29:in `make'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics.rb:126:in `update_stats'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics.rb:117:in `block in update'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/license_checker/licensed.rb:76:in `with_license_check'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics.rb:116:in `update'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/lib/monitoring/inputs/metrics.rb:83:in `block in configure_snapshot_poller'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/safe_task_executor.rb:24:in `block in execute'",
  "com/concurrent_ruby/ext/SynchronizationLibrary.java:222:in `synchronize'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/safe_task_executor.rb:19:in `execute'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/timer_task.rb:309:in `execute_task'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/safe_task_executor.rb:24:in `block in execute'",
  "com/concurrent_ruby/ext/SynchronizationLibrary.java:222:in `synchronize'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/safe_task_executor.rb:19:in `execute'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/ivar.rb:170:in `safe_execute'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/scheduled_task.rb:285:in `process_task'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/timer_set.rb:168:in `block in process_tasks'",
  "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/concurrent-ruby-1.0.5-java/lib/concurrent/executor/java_executor_service.rb:94:in `run'"
]}
```
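From the backtrace it looks like the metrics input falls over in `fetch_node_stats` while assembling the monitoring document. For what it's worth, I've been poking at the Logstash node stats API to see whether anything obvious is missing there (9600 is the default API port; adjust if yours differs):

```
curl -s 'http://localhost:9600/_node/stats?pretty'
```

And the elasticsearch output logs errors as well: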

```
[2018-06-19T09:30:02,141][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff
{:code=>500,
 :url=>"https://elastic-host:9200/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s",
 :body=>
  {"took": 60009,
   "errors": true,
   "error": {
     "type": "export_exception",
     "reason": "Exception when closing export bulk",
     "caused_by": {
       "type": "export_exception",
       "reason": "failed to flush export bulks",
       "caused_by": {
         "type": "export_exception",
         "reason": "bulk [default_local] reports failures when exporting documents",
         "exceptions": [
           {"type": "export_exception",
            "reason": "UnavailableShardsException[[.monitoring-logstash-6-2018.06.19][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-logstash-6-2018.06.19][0]] containing [2] requests]]",
            "caused_by": {
              "type": "unavailable_shards_exception",
              "reason": "[.monitoring-logstash-6-2018.06.19][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-logstash-6-2018.06.19][0]] containing [2] requests]"}},
           {"type": "export_exception",
            "reason": "UnavailableShardsException[[.monitoring-logstash-6-2018.06.19][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-logstash-6-2018.06.19][0]] containing [2] requests]]",
            "caused_by": {
              "type": "unavailable_shards_exception",
              "reason": "[.monitoring-logstash-6-2018.06.19][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.monitoring-logstash-6-2018.06.19][0]] containing [2] requests]"}}]}}}}}
```
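The `primary shard is not active` part suggests that day's `.monitoring-logstash-6-*` index can't get its primary allocated at all. These are the checks I've been running against the cluster (same `elastic-host` as in the log; add whatever auth/CA flags your setup needs):

```
# Health of the monitoring index the bulk requests are aimed at
curl -s 'https://elastic-host:9200/_cluster/health/.monitoring-logstash-6-2018.06.19?pretty'

# Ask Elasticsearch why an unassigned shard is unassigned
curl -s 'https://elastic-host:9200/_cluster/allocation/explain?pretty'
```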

Sometimes monitoring data does show up in Kibana for Kibana itself and for one of my two Elasticsearch nodes, but most of the time I'm just staring at an empty page telling me there's no monitoring data.
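For completeness, the monitoring section of my logstash.yml is nothing exotic; the host, credentials, and CA path below are placeholders:

```yaml
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: "https://elastic-host:9200"
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "<redacted>"
# the cluster is on HTTPS, hence the CA
xpack.monitoring.elasticsearch.ssl.ca: "/etc/logstash/ca.pem"
```

Any pointers on where to look next would be much appreciated.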