Kibana patches from 8.5.3 to 8.6.1

Hi all,
I need help upgrading Kibana from 8.5.3 to 8.6.1; after the upgrade I am not able to connect to a node in the browser.
I updated a node with apt update -y and apt upgrade -y, rebooted it, and tried taking it out of the cluster, but had no luck.
Please assist.

Welcome! Do you have a particular error you are seeing when you attempt the upgrade?

image
This is the only issue I am seeing.
Should I share the Kibana logs?

You need to share the logs.

You upgraded Elasticsearch as well, right?



The yellow-highlighted entries are my nodes.
This log is the Kibana log from one of the nodes.


After executing the commands I mentioned above on all my nodes, 8.5.3 will become 8.6.1.

Which commands?

From the log you shared, you did not update Elasticsearch, only Kibana. That does not work: you need to upgrade your Elasticsearch cluster first and then upgrade Kibana.

apt update -y
apt upgrade -y
Executing the above commands on all nodes also updates Elasticsearch.
I even tried the packages one by one: I listed them with apt list --upgradable and updated them individually, like
apt install filebeat
apt install metricbeat
and so on, but had no luck.

The Kibana log you shared shows that it is trying to connect to an incompatible Elasticsearch version: your Elasticsearch cluster was on version 8.5.3 while your Kibana was on version 8.6.1.

Kibana and Elasticsearch need to be on the same version.

Did you upgrade your Elasticsearch cluster? It is not clear.

Is your Kibana still giving you the same issue? Please share the logs as plain text, not as a screenshot: copy the log text and share it using the Preformatted text button, the </> button.

That is the wrong way to do it.

Upgrade the elasticsearch package alone; you don't have to use apt upgrade -y, which upgrades everything at once.

Then, once Elasticsearch is upgraded and up and running, upgrade Kibana.

If Kibana sees an older version of Elasticsearch, it won't start, just like @leandrojmp said.
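A minimal sketch of that sequence on one node, assuming the standard Debian/Ubuntu packages and systemd service names; adjust for your own setup:

```shell
# Refresh the package lists, then upgrade only the elasticsearch package,
# leaving kibana and the beats untouched
sudo apt update
sudo apt install --only-upgrade elasticsearch
sudo systemctl restart elasticsearch

# Once every Elasticsearch node is back up on the new version, upgrade Kibana
sudo apt install --only-upgrade kibana
sudo systemctl restart kibana
```

The --only-upgrade flag makes apt skip the package if it is not already installed, so you cannot accidentally pull in components the node does not run.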

Both are still on the same version:

root@ekb01:~# apt install elasticsearch
Reading package lists... Done
Building dependency tree
Reading state information... Done
elasticsearch is already the newest version (8.6.1).
The following packages were automatically installed and are no longer required:
libfwupdplugin1 libxmlb1
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@ekb01:~# apt install kibana
Reading package lists... Done
Building dependency tree
Reading state information... Done
kibana is already the newest version (8.6.1).
The following packages were automatically installed and are no longer required:
libfwupdplugin1 libxmlb1
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Still, we are not able to get the web portal working. What could be the reason?

You need to provide logs from both Kibana and Elasticsearch. If they are both on the same version, your issue is different now.

root@ekb02:/var/log/elasticsearch# apt list --upgradable
Listing... Done
elasticsearch/stable 8.6.1 amd64 [upgradable from: 8.6.0]
filebeat/stable 8.6.1 amd64 [upgradable from: 8.6.0]
kibana/stable 8.6.1 amd64 [upgradable from: 8.5.3]
libnl-3-200/focal-updates 3.4.0-1ubuntu0.1 amd64 [upgradable from: 3.4.0-1]
libnl-genl-3-200/focal-updates 3.4.0-1ubuntu0.1 amd64 [upgradable from: 3.4.0-1]
metricbeat/stable 8.6.1 amd64 [upgradable from: 8.6.0]
python3-software-properties/focal-updates 0.99.9.10 all [upgradable from: 0.99.9.8]
snapd/focal-updates 2.58+20.04 amd64 [upgradable from: 2.57.5+20.04ubuntu0.1]
software-properties-common/focal-updates 0.99.9.10 all [upgradable from: 0.99.9.8]
ubuntu-advantage-tools/focal-updates 27.13.2~20.04.1 amd64 [upgradable from: 27.12~20.04.1]

If I check for updates on any of my nodes, this is the outcome. It clearly shows that the Elasticsearch version differs from the Kibana version.
Should I first update Elasticsearch, or should I upgrade all of my nodes one by one and then see? If it is due to a version mismatch, it should be resolved.
Should I start with the master nodes first and then move to the remaining nodes?

Your issue is still the same: your Kibana is on 8.5.3 and your Elasticsearch seems to be on 8.6.0.

You need Elasticsearch and Kibana to be on the same version or else Kibana won't work.
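You can confirm what each node is actually running with the _cat/nodes API. A sketch, assuming security is enabled and you authenticate as the elastic user (host and credentials are placeholders for your cluster):

```shell
# List each node's name, IP, and Elasticsearch version;
# -k skips certificate verification, only for a quick check
curl -sk -u elastic "https://localhost:9200/_cat/nodes?v&h=name,ip,version"
```

Every node must report the same version before Kibana on that version will start.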


Hi team,

Please find the pre-upgrade logs from our end:

  1. Elasticsearch
  2. Kibana

Kibana Logs

OR","logger":"plugins.taskManager"},"process":{"pid":1042},"trace":{"id":"da84962d505880ab7e1084f2a4477f60"},"transaction":{"id":"7779e0b82c4268f2"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-01-31T17:45:32.391+05:30","message":"Failed to poll for work: [parent] Data too large, data for [<http_request>] would be [6131076392/5.7gb], which is larger than the limit of [5866153574/5.4gb], real usage: [6131065344/5.7gb], new bytes reserved: [11048/10.7kb], usages [eql_sequence=0/0b, model_inference=0/0b, inflight_requests=11048/10.7kb, request=0/0b, fielddata=40903/39.9kb]","log":{"level":"ERROR","logger":"plugins.taskManager"},"process":{"pid":1042},"trace":{"id":"1aada70bcc15032301bc1a38b75ec6cd"},"transaction":{"id":"b30eca8dd7094bf3"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-01-31T18:03:41.246+05:30","message":"Failed to poll for work: [parent] Data too large, data for [<http_request>] would be [5882959850/5.4gb], which is larger than the limit of [5866153574/5.4gb], real usage: [5882948728/5.4gb], new bytes reserved: [11122/10.8kb], usages [eql_sequence=0/0b, model_inference=0/0b, inflight_requests=1728110/1.6mb, request=0/0b, fielddata=40903/39.9kb]","log":{"level":"ERROR","logger":"plugins.taskManager"},"process":{"pid":1042},"trace":{"id":"3e0193e47702665912314809ede61bb4"},"transaction":{"id":"87eabf6af767d912"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-01-31T20:45:12.217+05:30","message":"Failed to poll for work: [parent] Data too large, data for [indices:data/read/search[phase/query+fetch/scroll]] would be [5881259150/5.4gb], which is larger than the limit of [5866153574/5.4gb], real usage: [5881259008/5.4gb], new bytes reserved: [142/142b], usages [eql_sequence=0/0b, fielddata=45427/44.3kb, request=32768/32kb, inflight_requests=14726684/14mb, model_inference=0/0b],[parent] Data too large, data for [indices:data/read/search[phase/query+fetch/scroll]] would be [5881259150/5.4gb], which is larger than the limit of [5866153574/5.4gb], real usage: [5881259008/5.4gb], new bytes reserved: [142/142b], usages [eql_sequence=0/0b, fielddata=45427/44.3kb, request=32768/32kb, inflight_requests=14726684/14mb, model_inference=0/0b]","log":{"level":"ERROR","logger":"plugins.taskManager"},"process":{"pid":1042},"trace":{"id":"d37764df8efc04d088bda68b765dd642"},"transaction":{"id":"d34e108c97d61be7"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-01-31T21:50:20.359+05:30","message":"Failed to poll for work: [parent] Data too large, data for [indices:data/read/search[phase/query+fetch/scroll]] would be [5870099934/5.4gb], which is larger than the limit of [5866153574/5.4gb], real usage: [5870099792/5.4gb], new bytes reserved: [142/142b], usages [eql_sequence=0/0b, fielddata=45616/44.5kb, request=81920/80kb, inflight_requests=626748/612kb, model_inference=0/0b],[parent] Data too large, data for [indices:data/read/search[phase/query+fetch/scroll]] would be [5870099934/5.4gb], which is larger than the limit of [5866153574/5.4gb], real usage: [5870099792/5.4gb], new bytes reserved: [142/142b], usages [eql_sequence=0/0b, fielddata=45616/44.5kb, request=81920/80kb, inflight_requests=626748/612kb, model_inference=0/0b]","log":{"level":"ERROR","logger":"plugins.taskManager"},"process":{"pid":1042},"trace":{"id":"7ff8cc4406a3537c5418a70ad7bda108"},"transaction":{"id":"140bd32bd658b51b"}}
root@ekb02:/var/log/kibana#

Elasticsearch Logs

[2023-01-31T22:19:46,891][WARN ][o.e.h.AbstractHttpServerTransport] [ekb02.d2h.com] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/10.95.5.216:9200, remoteAddress=/172.19.16.28:49167}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[?:?]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[?:?]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
at java.lang.Thread.run(Thread.java:1589) ~[?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at sun.security.ssl.Alert.createSSLException(Alert.java:130) ~[?:?]
at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
at sun.security.ssl.TransportContext.fatal(TransportContext.java:358) ~[?:?]
at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:286) ~[?:?]
at sun.security.ssl.TransportContext.dispatch(TransportContext.java:204) ~[?:?]
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296) ~[?:?]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343) ~[?:?]
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236) ~[?:?]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:519) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:458) ~[?:?]
... 16 more
root@ekb02:/var/log/elasticsearch#

Logs from before the new packages were installed:

root@ekb02:/var/log/elasticsearch# apt list --upgradable
Listing... Done
elasticsearch/stable 8.6.1 amd64 [upgradable from: 8.6.0]
filebeat/stable 8.6.1 amd64 [upgradable from: 8.6.0]
kibana/stable 8.6.1 amd64 [upgradable from: 8.5.3]
libnl-3-200/focal-updates 3.4.0-1ubuntu0.1 amd64 [upgradable from: 3.4.0-1]
libnl-genl-3-200/focal-updates 3.4.0-1ubuntu0.1 amd64 [upgradable from: 3.4.0-1]
metricbeat/stable 8.6.1 amd64 [upgradable from: 8.6.0]
python3-software-properties/focal-updates 0.99.9.10 all [upgradable from: 0.99.9.8]
snapd/focal-updates 2.58+20.04 amd64 [upgradable from: 2.57.5+20.04ubuntu0.1]
software-properties-common/focal-updates 0.99.9.10 all [upgradable from: 0.99.9.8]
ubuntu-advantage-tools/focal-updates 27.13.2~20.04.1 amd64 [upgradable from: 27.12~20.04.1]
root@ekb02:/var/log/elasticsearch#

The logs after the upgrade:
Kibana Logs

ERROR","logger":"elasticsearch-service"},"process":{"pid":2241972},"trace":{"id":"5606526c757d2d389aee7ef6a1c51ff2"},"transaction":{"id":"45ad45dd23cf6ce4"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-02-02T11:41:51.591+05:30","message":"Browser executable: /usr/share/kibana/x-pack/plugins/screenshotting/chromium/headless_shell-linux_x64/headless_shell","log":{"level":"INFO","logger":"plugins.screenshotting.chromium"},"process":{"pid":2241972},"trace":{"id":"5606526c757d2d389aee7ef6a1c51ff2"},"transaction":{"id":"45ad45dd23cf6ce4"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-02-02T12:01:49.315+05:30","message":"Timeout: it took more than 1200000ms","error":{"message":"Timeout: it took more than 1200000ms","type":"Error","stack_trace":"Error: Timeout: it took more than 1200000ms\n at Timeout._onTimeout (/usr/share/kibana/x-pack/plugins/rule_registry/server/rule_data_plugin_service/resource_installer.js:49:20)\n at listOnTimeout (node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7)"},"log":{"level":"ERROR","logger":"plugins.ruleRegistry"},"process":{"pid":2241972},"trace":{"id":"5606526c757d2d389aee7ef6a1c51ff2"},"transaction":{"id":"45ad45dd23cf6ce4"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-02-02T12:01:49.339+05:30","message":"Failure installing common resources shared between all indices. Timeout: it took more than 1200000ms","error":{"message":"Failure installing common resources shared between all indices. Timeout: it took more than 1200000ms","type":"Error","stack_trace":"Error: Failure installing common resources shared between all indices. Timeout: it took more than 1200000ms\n at ResourceInstaller.installWithTimeout (/usr/share/kibana/x-pack/plugins/rule_registry/server/rule_data_plugin_service/resource_installer.js:62:13)\n at ResourceInstaller.installCommonResources (/usr/share/kibana/x-pack/plugins/rule_registry/server/rule_data_plugin_service/resource_installer.js:76:5)"},"log":{"level":"ERROR","logger":"plugins.ruleRegistry"},"process":{"pid":2241972},"trace":{"id":"5606526c757d2d389aee7ef6a1c51ff2"},"transaction":{"id":"45ad45dd23cf6ce4"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-02-02T12:15:09.301+05:30","message":"This version of Kibana (v8.6.1) is incompatible with the following Elasticsearch nodes in your cluster: v8.5.3 @ 10.95.4.238:9200 (10.95.4.238), v8.5.3 @ 10.95.4.226:9200 (10.95.4.226), v8.5.3 @ 10.95.4.251:9200 (10.95.4.251), v8.5.3 @ 10.95.4.240:9200 (10.95.4.240), v8.5.3 @ 10.95.4.228:9200 (10.95.4.228), v8.5.3 @ 10.95.4.138:9200 (10.95.4.138)","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":2241972},"trace":{"id":"5606526c757d2d389aee7ef6a1c51ff2"},"transaction":{"id":"45ad45dd23cf6ce4"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-02-02T12:15:11.836+05:30","message":"This version of Kibana (v8.6.1) is incompatible with the following Elasticsearch nodes in your cluster: v8.5.3 @ 10.95.4.238:9200 (10.95.4.238), v8.5.3 @ 10.95.4.226:9200 (10.95.4.226), v8.5.3 @ 10.95.4.251:9200 (10.95.4.251), v8.5.3 @ 10.95.4.240:9200 (10.95.4.240), v8.5.3 @ 10.95.4.228:9200 (10.95.4.228), v8.5.3 @ 10.95.4.138:9200 (10.95.4.138)","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":2241972},"trace":{"id":"5606526c757d2d389aee7ef6a1c51ff2"},"transaction":{"id":"45ad45dd23cf6ce4"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-02-02T12:15:34.358+05:30","message":"This version of Kibana (v8.6.1) is incompatible with the following Elasticsearch nodes in your cluster: v8.5.3 @ 10.95.4.238:9200 (10.95.4.238), v8.5.3 @ 10.95.4.226:9200 (10.95.4.226), v8.5.3 @ 10.95.4.251:9200 (10.95.4.251), v8.5.3 @ 10.95.4.240:9200 (10.95.4.240), v8.5.3 @ 10.95.4.228:9200 (10.95.4.228), v8.5.3 @ 10.95.4.138:9200 (10.95.4.138)","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":2241972},"trace":{"id":"5606526c757d2d389aee7ef6a1c51ff2"},"transaction":{"id":"45ad45dd23cf6ce4"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2023-02-02T12:15:36.796+05:30","message":"This version of Kibana (v8.6.1) is incompatible with the following Elasticsearch nodes in your cluster: v8.5.3 @ 10.95.4.238:9200 (10.95.4.238), v8.5.3 @ 10.95.4.226:9200 (10.95.4.226), v8.5.3 @ 10.95.4.251:9200 (10.95.4.251), v8.5.3 @ 10.95.4.240:9200 (10.95.4.240), v8.5.3 @ 10.95.4.228:9200 (10.95.4.228), v8.5.3 @ 10.95.4.138:9200 (10.95.4.138)","log":{"level":"ERROR","logger":"elasticsearch-service"},"process":{"pid":2241972},"trace":{"id":"5606526c757d2d389aee7ef6a1c51ff2"},"transaction":{"id":"45ad45dd23cf6ce4"}}
root@ekb03:/var/log/kibana#

Elasticsearch Logs

[2023-02-02T12:25:25,707][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [ekb03.d2h.com] Authentication using apikey failed - unable to find apikey with id fWvJ4IQB7xMqukCjoJIq
[2023-02-02T12:25:25,742][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [ekb03.d2h.com] Authentication using apikey failed - unable to find apikey with id WLWfyYMBvLWcFSyH1509
[2023-02-02T12:25:25,837][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [ekb03.d2h.com] Authentication using apikey failed - unable to find apikey with id VlwTyIMBvLWcFSyHeFCb
[2023-02-02T12:25:26,106][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [ekb03.d2h.com] Authentication using apikey failed - unable to find apikey with id 92KNP4IBHl3g-3_ZVeij
[2023-02-02T12:25:26,207][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [ekb03.d2h.com] Authentication using apikey failed - unable to find apikey with id nrKdVoQB7xMqukCjFo_I
[2023-02-02T12:25:26,325][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [ekb03.d2h.com] Authentication using apikey failed - unable to find apikey with id KLXgEYIBHl3g-3_ZRrVd
[2023-02-02T12:25:26,363][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [ekb03.d2h.com] Authentication using apikey failed - unable to find apikey with id KLXgEYIBHl3g-3_ZRrVd
[2023-02-02T12:25:26,683][WARN ][o.e.h.AbstractHttpServerTransport] [ekb03.d2h.com] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/10.95.5.217:9200, remoteAddress=/172.19.7.24:58314}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:489) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) ~[?:?]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) ~[?:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) ~[?:?]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[?:?]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) ~[?:?]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) ~[?:?]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) ~[?:?]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) ~[?:?]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[?:?]
at java.lang.Thread.run(Thread.java:1589) ~[?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
at sun.security.ssl.Alert.createSSLException(Alert.java:130) ~[?:?]
at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
at sun.security.ssl.TransportContext.fatal(TransportContext.java:358) ~[?:?]
at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:286) ~[?:?]
at sun.security.ssl.TransportContext.dispatch(TransportContext.java:204) ~[?:?]
at sun.security.ssl.SSLTransport.decode(SSLTransport.java:172) ~[?:?]
at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:679) ~[?:?]
at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:296) ~[?:?]
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1343) ~[?:?]
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1236) ~[?:?]
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:519) ~[?:?]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:458) ~[?:?]
... 16 more
[2023-02-02T12:25:26,813][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [ekb03.d2h.com] attempting to trigger G1GC due to high heap usage [6014013952]
[2023-02-02T12:25:27,070][INFO ][o.e.i.b.HierarchyCircuitBreakerService] [ekb03.d2h.com] GC did bring memory usage down, before [6014013952], after [2153847040], allocations [584], duration [257]
[2023-02-02T12:25:27,109][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [ekb03.d2h.com] Authentication using apikey failed - unable to find apikey with id UZ1lRIIBHl3g-3_ZsAfo
[2023-02-02T12:25:27,296][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [ekb03.d2h.com] Authentication using apikey failed - unable to find apikey with id aWKNP4IBHl3g-3_ZSuXl
root@ekb03:/var/log/elasticsearch#

Logs after the new packages were installed:

root@ekb03:~# apt list --upgradable
Listing... Done
root@ekb03:~# apt update -y
Hit:1 https://artifacts.elastic.co/packages/7.x/apt stable InRelease
Hit:2 https://artifacts.elastic.co/packages/8.x/apt stable InRelease
Hit:3 Index of /ubuntu focal InRelease
Get:4 Index of /ubuntu focal-updates InRelease [114 kB]
Hit:5 https://artifacts.elastic.co/packages/oss-8.x/apt stable InRelease
Get:6 Index of /ubuntu focal-backports InRelease [108 kB]
Get:7 Index of /ubuntu focal-security InRelease [114 kB]
Fetched 336 kB in 2s (197 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.

As @leandrojmp has pointed out before, you are using Kibana and Elasticsearch versions that are not compatible with each other. The new logs still show the same issue.

Both Kibana and Elasticsearch will need to be on v8.6.1. Can you upgrade just Elasticsearch to 8.6.1 as previously suggested? The install docs do cover install steps for different distributions if that helps.
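If apt keeps pulling mismatched versions, you can request the exact version explicitly and then pin it. A sketch, assuming 8.6.1 is available in the Elastic apt repository you have configured:

```shell
# Install both components at the same explicit version
sudo apt update
sudo apt install elasticsearch=8.6.1 kibana=8.6.1

# Optionally hold them so unattended upgrades cannot move them out of sync
sudo apt-mark hold elasticsearch kibana
```

Remember to apt-mark unhold the packages before the next planned upgrade.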

root@ekb03:~# apt list --upgradable
Listing... Done
root@ekb03:~# apt update -y
Hit:1 https://artifacts.elastic.co/packages/7.x/apt stable InRelease
Hit:2 https://artifacts.elastic.co/packages/8.x/apt stable InRelease
Hit:3 Index of /ubuntu focal InRelease
Get:4 Index of /ubuntu focal-updates InRelease [114 kB]
Hit:5 https://artifacts.elastic.co/packages/oss-8.x/apt stable InRelease
Get:6 Index of /ubuntu focal-backports InRelease [108 kB]
Get:7 Index of /ubuntu focal-security InRelease [114 kB]
Fetched 336 kB in 2s (197 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.

But as the output above shows, even after updating both Kibana and Elasticsearch we still do not get GUI access on any of the nodes, so we just reverted the changes.

I have already updated only Elasticsearch to 8.6.1 and was then able to access the node in the GUI, but after updating Kibana we again face this problem of the node not being available through the GUI.

@ArpitChoudhary , the post-upgrade logs shared by @Sijit_Nair that I've referenced in the above quote show some nodes still on v8.5.3.

Hi Carly

Is there any command or way to check which of the hosts is on the older version?
We are trying to find out but have not been able to.

Regards,
SN