Kibana - TypeError: Failed to fetch

Hi,
After upgrading to Kibana 7.9, I began getting an error that I cannot find referenced anywhere. Has anyone come across this?

TypeError: Failed to fetch
    at Fetch._callee3$ (https://10.23.45.78:5601/33813/bundles/core/core.entry.js:34:108261)
    at l (https://10.23.45.78:5601/33813/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:155323)
    at Generator._invoke (https://10.23.45.78:5601/33813/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:155076)
    at Generator.forEach.e.<computed> [as throw] (https://10.23.45.78:5601/33813/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:368:155680)
    at fetch_asyncGeneratorStep (https://10.23.45.78:5601/33813/bundles/core/core.entry.js:34:102354)
    at _throw (https://10.23.45.78:5601/33813/bundles/core/core.entry.js:34:102762)

This looks like an error from Kibana core. Could you share the steps you take to reproduce it?
Thanks

Hi Markov,
I simply log in and wait a long time while the red Kibana loading line flashes across the top. I can't seem to do anything else in the meantime. Finally, an error appears in the bottom-right corner that says:
"An error occurred while trying to set the usage statistics preference."
I click for more information, and the stack trace posted above is shown.

A few questions:

  • Which version were you migrating from?
  • Are you using any third-party plugins?
  • Are there any meaningful logs from Kibana? If not, can you try setting logging.verbose: true in kibana.yml? (A sketch of the setting follows this list.)
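For the third point: verbose logging is a top-level switch in kibana.yml. A minimal sketch, assuming a package install with the config at /etc/kibana/kibana.yml (adjust the path for your layout):

# /etc/kibana/kibana.yml (path assumed)
logging.verbose: true                       # debug-level output from every Kibana subsystem
# logging.dest: /var/log/kibana/kibana.log  # optional: write to a file instead of stdout

Restart Kibana after editing the file for the change to take effect.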

I migrated from 7.8, and there are no third-party components. Turning on verbose logging gave a flood of data, but most of it looked like:

{"type":"log","@timestamp":"2020-09-01T14:40:49Z","tags":["debug","metrics"],"pid":5566,"message":"Refreshing metrics"}
{"type":"ops","@timestamp":"2020-09-01T14:40:49Z","tags":[],"pid":5566,"os":{"load":[0.0595703125,0.3544921875,0.23193359375],"mem":{"total":67387469824,"free":33859088384},"uptime":16737118},"proc":{"uptime":214.872,"mem":{"rss":545177600,"heapTotal":479436800,"heapUsed":351743144,"external":3280407},"delay":1.5424118041992188},"load":{"requests":{},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 335.4MB uptime: 0:03:35 load: [0.06 0.35 0.23] delay: 1.542"}
{"type":"log","@timestamp":"2020-09-01T14:40:52Z","tags":["debug","plugins","taskManager","taskManager"],"pid":5566,"message":"Running task endpoint:user-artifact-packager \"endpoint:user-artifact-packager:1.0.0\""}
{"type":"log","@timestamp":"2020-09-01T14:40:52Z","tags":["debug","plugins","securitySolution","endpoint:user-artifact-packager:1","0","0"],"pid":5566,"message":"User manifest not available yet."}
{"type":"log","@timestamp":"2020-09-01T14:40:54Z","tags":["debug","metrics"],"pid":5566,"message":"Refreshing metrics"}
{"type":"ops","@timestamp":"2020-09-01T14:40:54Z","tags":[],"pid":5566,"os":{"load":[0.0546875,0.3486328125,0.23046875],"mem":{"total":67387469824,"free":33859035136},"uptime":16737123},"proc":{"uptime":219.874,"mem":{"rss":545153024,"heapTotal":479436800,"heapUsed":351927008,"external":3198097},"delay":1.775705337524414},"load":{"requests":{},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 335.6MB uptime: 0:03:40 load: [0.05 0.35 0.23] delay: 1.776"}

It has also made the GUI drag; it takes minutes to get from one screen to another.
I then set logging to quiet and found the errors below.
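For reference, quiet mode is another top-level kibana.yml switch; a minimal sketch, assuming the same file as above:

# /etc/kibana/kibana.yml (path assumed)
logging.verbose: false   # turn the debug flood back off
logging.quiet: true      # suppress all output except error messages

With quiet on, only error-level events are written, which is why every entry that follows is tagged "error":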

{"type":"log","@timestamp":"2020-09-02T11:45:32Z","tags":["error","http"],"pid":4137,"message":"{ FetchError: request to https://telemetry.elastic.co/opt_in_status/v2/send failed, reason: connect ETIMEDOUT 99.84.251.54:443\n    at ClientRequest.<anonymous> (/usr/share/kibana/node_modules/node-fetch/index.js:133:11)\n    at ClientRequest.emit (events.js:198:13)\n    at TLSSocket.socketErrorListener (_http_client.js:401:9)\n    at TLSSocket.emit (events.js:198:13)\n    at emitErrorNT (internal/streams/destroy.js:91:8)\n    at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)\n    at process._tickCallback (internal/process/next_tick.js:63:19)\n  name: 'FetchError',\n  message:\n   'request to https://telemetry.elastic.co/opt_in_status/v2/send failed, reason: connect ETIMEDOUT 99.84.251.54:443',\n  type: 'system',\n  errno: 'ETIMEDOUT',\n  code: 'ETIMEDOUT' }"}
{"type":"log","@timestamp":"2020-09-02T11:47:05Z","tags":["error","plugins","ingestManager"],"pid":4137,"message":"Error connecting to package registry at https://epr-7-9.elastic.co/search?package=endpoint&internal=true&experimental=true&kibana.version=7.9.0: request to https://epr-7-9.elastic.co/search?package=endpoint&internal=true&experimental=true&kibana.version=7.9.0 failed, reason: connect ETIMEDOUT 151.101.2.217:443"}
{"type":"log","@timestamp":"2020-09-02T11:47:05Z","tags":["error","plugins","ingestManager"],"pid":4137,"message":"Error connecting to package registry at https://epr-7-9.elastic.co/search?package=endpoint&internal=true&experimental=true&kibana.version=7.9.0: request to https://epr-7-9.elastic.co/search?package=endpoint&internal=true&experimental=true&kibana.version=7.9.0 failed, reason: connect ETIMEDOUT 151.101.2.217:443"}

So I guess 7.9 tries to reach out to the Internet (telemetry and the Elastic package registry), which means that if your machines have no Internet connection this will cause issues as well. I'm not sure if this is what is causing the original 'fetch' problem, though.
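If the host is air-gapped, the usual workaround is to stop Kibana from reaching out at all. A minimal sketch of the relevant kibana.yml settings, assuming 7.9 (the registry key name is from memory, so verify it against the docs for your exact version):

# /etc/kibana/kibana.yml (path assumed)
telemetry.enabled: false    # stop the background calls to telemetry.elastic.co
telemetry.optIn: false      # don't prompt users about usage statistics either
# For the ingestManager registry errors, point the plugin at a locally
# hosted Elastic Package Registry instead of epr-7-9.elastic.co, e.g.:
# xpack.ingestManager.registryUrl: "http://localhost:8080"   # assumed local mirror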

Hi teen,

I was getting the same error. I disabled telemetry in kibana.yml, and now I am not getting the error.

Do this in kibana.yml:
telemetry.enabled: false

