We migrated from 7.8 and have no third-party components. Turning on verbose logging gave a flood of data, but most of it looked like this:
{"type":"log","@timestamp":"2020-09-01T14:40:49Z","tags":["debug","metrics"],"pid":5566,"message":"Refreshing metrics"}
{"type":"ops","@timestamp":"2020-09-01T14:40:49Z","tags":[],"pid":5566,"os":{"load":[0.0595703125,0.3544921875,0.23193359375],"mem":{"total":67387469824,"free":33859088384},"uptime":16737118},"proc":{"uptime":214.872,"mem":{"rss":545177600,"heapTotal":479436800,"heapUsed":351743144,"external":3280407},"delay":1.5424118041992188},"load":{"requests":{},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 335.4MB uptime: 0:03:35 load: [0.06 0.35 0.23] delay: 1.542"}
{"type":"log","@timestamp":"2020-09-01T14:40:52Z","tags":["debug","plugins","taskManager","taskManager"],"pid":5566,"message":"Running task endpoint:user-artifact-packager \"endpoint:user-artifact-packager:1.0.0\""}
{"type":"log","@timestamp":"2020-09-01T14:40:52Z","tags":["debug","plugins","securitySolution","endpoint:user-artifact-packager:1","0","0"],"pid":5566,"message":"User manifest not available yet."}
{"type":"log","@timestamp":"2020-09-01T14:40:54Z","tags":["debug","metrics"],"pid":5566,"message":"Refreshing metrics"}
{"type":"ops","@timestamp":"2020-09-01T14:40:54Z","tags":[],"pid":5566,"os":{"load":[0.0546875,0.3486328125,0.23046875],"mem":{"total":67387469824,"free":33859035136},"uptime":16737123},"proc":{"uptime":219.874,"mem":{"rss":545153024,"heapTotal":479436800,"heapUsed":351927008,"external":3198097},"delay":1.775705337524414},"load":{"requests":{},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 335.6MB uptime: 0:03:40 load: [0.05 0.35 0.23] delay: 1.776"}
Verbose logging has also made the GUI drag: it takes minutes to go from one screen to another.
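For anyone reproducing this, the levels I toggled are the legacy logging flags in kibana.yml (assuming the default legacy logging configuration that 7.9 still uses):

```yaml
# kibana.yml -- legacy logging flags in 7.x
# Set at most one of these at a time.
logging.verbose: true   # log everything, including the "ops"/"metrics" noise above
#logging.quiet: true    # suppress all output except error messages
```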
I then set logging to QUIET and found:
{"type":"log","@timestamp":"2020-09-02T11:45:32Z","tags":["error","http"],"pid":4137,"message":"{ FetchError: request to https://telemetry.elastic.co/opt_in_status/v2/send failed, reason: connect ETIMEDOUT 99.84.251.54:443\n at ClientRequest.<anonymous> (/usr/share/kibana/node_modules/node-fetch/index.js:133:11)\n at ClientRequest.emit (events.js:198:13)\n at TLSSocket.socketErrorListener (_http_client.js:401:9)\n at TLSSocket.emit (events.js:198:13)\n at emitErrorNT (internal/streams/destroy.js:91:8)\n at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)\n at process._tickCallback (internal/process/next_tick.js:63:19)\n name: 'FetchError',\n message:\n 'request to https://telemetry.elastic.co/opt_in_status/v2/send failed, reason: connect ETIMEDOUT 99.84.251.54:443',\n type: 'system',\n errno: 'ETIMEDOUT',\n code: 'ETIMEDOUT' }"}
{"type":"log","@timestamp":"2020-09-02T11:47:05Z","tags":["error","plugins","ingestManager"],"pid":4137,"message":"Error connecting to package registry at https://epr-7-9.elastic.co/search?package=endpoint&internal=true&experimental=true&kibana.version=7.9.0: request to https://epr-7-9.elastic.co/search?package=endpoint&internal=true&experimental=true&kibana.version=7.9.0 failed, reason: connect ETIMEDOUT 151.101.2.217:443"}
{"type":"log","@timestamp":"2020-09-02T11:47:05Z","tags":["error","plugins","ingestManager"],"pid":4137,"message":"Error connecting to package registry at https://epr-7-9.elastic.co/search?package=endpoint&internal=true&experimental=true&kibana.version=7.9.0: request to https://epr-7-9.elastic.co/search?package=endpoint&internal=true&experimental=true&kibana.version=7.9.0 failed, reason: connect ETIMEDOUT 151.101.2.217:443"}
So I guess 7.9 tries to reach out to the Internet, which means that if your machines have no Internet connection this will cause issues as well. I'm not sure whether this is what is causing the original 'fetch' problem, though.
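If the host is air-gapped, those two outbound calls (telemetry and the Elastic Package Registry) can be switched off or redirected in kibana.yml. A sketch, assuming the 7.9-era setting names (xpack.ingestManager.registryUrl was later renamed under xpack.fleet; the registry URL below is a placeholder for an internal mirror, not a real endpoint):

```yaml
# kibana.yml -- reduce outbound requests on a host with no Internet access
telemetry.enabled: false   # stops the opt_in_status/send requests to telemetry.elastic.co
telemetry.optIn: false
# Point Ingest Manager at a locally hosted package registry instead of epr-7-9.elastic.co.
# The hostname here is a placeholder for your own mirror.
xpack.ingestManager.registryUrl: "https://epr.internal.example:8080"
```

This should at least stop the ETIMEDOUT errors from piling up while the underlying 'fetch' problem is investigated.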