Upgrading to 7.12 has been one nightmare after another. I've been using Fleet for "Endpoint" and so far it's been less than pleasant. Got to love Beta.
I'm unable to log into Kibana with the elastic user, so you know this is an annoying problem. It works just fine with curl, so it's not fat-fingering the password every time.
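For reference, this is roughly how I verified the credentials outside of Kibana, hitting the authenticate API directly (adjust the host, port, and scheme for your cluster; `-k` only if you're on self-signed certs):

```
# Authenticate against Elasticsearch with the same user Kibana rejects
curl -u elastic -k "https://localhost:9200/_security/_authenticate?pretty"
```

If that returns your user's roles, the password is fine and the problem is on the Kibana side.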
A couple errors that repeat hundreds of times:
```
["warning","plugins","monitoring","monitoring","kibana-monitoring"],"pid":1316,"message":"Unable to bulk upload the stats payload to the local cluster"}
["warning","plugins","monitoring","monitoring","kibana-monitoring"],"pid":1316,"message":"Error: [export_exception] failed to flush export bulks\n at respond (/usr/share/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)
```
At first I took a look at Task Manager troubleshooting | Kibana Guide [master] | Elastic. That really just told me what I already expected. The cluster ran fine on 7.11.2 with all the same agent settings and SIEM rules enabled. In fact I had almost double the rules running on 7.11.2 and it didn't blink an eye. 7.12 has been unstable, with extreme CPU usage from Java on all nodes. I ended up adding 2 more CPU cores per VM and it just ate those as well. Nodes are sitting at 90 to 100% all day, where previously they sat at 5%, up to 15% when reports were running.
EDIT: To anyone who runs into this issue with 7.12: check to make sure your index lifecycle policy actually ran! Mine was stopped after the update, which forced the cluster to use all of its disk space, and that in turn prevents logging in. The Fleet log lifecycle policies were reset but were otherwise fine.
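If you want to check this quickly, the ILM status and start APIs cover it. From Kibana Dev Tools (or the equivalent curl calls):

```
# Check whether index lifecycle management is running
# A healthy cluster returns {"operation_mode": "RUNNING"}; mine came back "STOPPED"
GET _ilm/status

# Start it again if it's stopped
POST _ilm/start
```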
The lifecycle is set back to the default after the update, which is longer than what is needed in my case. You'll also need to watch it, as it's not honoring the size settings and will go the entire day before a rollover. I have it set at 50GB and several indices were 300+GB before rolling over.
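For anyone putting the size cap back, this is the shape of the rollover settings (the policy name here is just an example, use whichever policy your indices actually point at). Worth knowing: ILM only evaluates rollover conditions on a poll interval (`indices.lifecycle.poll_interval`, 10 minutes by default, if I'm reading the docs right), so some overshoot past the size limit is expected, though nothing like the margin I saw:

```
PUT _ilm/policy/my-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          }
        }
      }
    }
  }
}
```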