6.2.4 to 6.3 upgrade broke Kibana monitoring

After upgrading my 3-node cluster from 6.2.4 to 6.3 by following the rolling upgrade documentation, the Kibana monitoring graphs no longer receive any data.
I was running the free X-Pack version prior to the upgrade.

Also, even after a full shutdown of the three ES nodes plus Kibana and bringing everything back up, the main monitoring page still displays stale data:

[screenshot: the monitoring overview still reports version 6.2.4]

The search functionality is working perfectly well, and _cat/health reports 100%.
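For reference, this is the check I mean, run from the Kibana Dev Tools console (the v parameter just adds column headers); the active_shards_percent column is presumably where the 100% figure comes from:

GET _cat/health?v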

Here are the ES startup logs from one node. The last line looks like an error to me, but I'm not sure what it actually means, so I'm asking here. Please tell me what I did wrong.

[2018-06-14T11:48:54,342][INFO ][o.e.n.Node               ] [es-node1] initializing ...
[2018-06-14T11:48:54,418][INFO ][o.e.e.NodeEnvironment    ] [es-node1] using [1] data paths, mounts [[New Volume (D:)]], net usable_space [16.5gb], net total_space [29.9gb], types [NTFS]
[2018-06-14T11:48:54,419][INFO ][o.e.e.NodeEnvironment    ] [es-node1] heap size [3.9gb], compressed ordinary object pointers [true]
[2018-06-14T11:48:55,001][INFO ][o.e.n.Node               ] [es-node1] node name [es-node1], node ID [ibDeweUNS_aQCQA5WEHD5Q]
[2018-06-14T11:48:55,002][INFO ][o.e.n.Node               ] [es-node1] version[6.3.0], pid[11616], build[unknown/unknown/424e937/2018-06-11T23:38:03.357887Z], OS[Windows Server 2012 R2/6.3/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
[2018-06-14T11:48:55,002][INFO ][o.e.n.Node               ] [es-node1] JVM arguments [-Xms4g, -Xmx4g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=C:\Users\usrname\AppData\Local\Temp\3\elasticsearch, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Delasticsearch, -Des.path.home=D:\ElasticSearch_6_2_4_node1, -Des.path.conf=D:\ElasticSearch_6_2_4_node1\config, exit, -Xms4096m, -Xmx4096m, -Xss4096k]
[2018-06-14T11:48:57,717][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [aggs-matrix-stats]
[... additional module-loading lines snipped; the full log exceeded the post size limit ...]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [transport-netty4]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [tribe]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-core]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-deprecation]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-graph]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-logstash]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-ml]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-monitoring]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-rollup]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-security]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-sql]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-upgrade]
[2018-06-14T11:48:57,718][INFO ][o.e.p.PluginsService     ] [es-node1] loaded module [x-pack-watcher]
[2018-06-14T11:48:57,719][INFO ][o.e.p.PluginsService     ] [es-node1] no plugins loaded
[2018-06-14T11:49:01,264][INFO ][o.e.x.s.a.s.FileRolesStore] [es-node1] parsed [0] roles from file [D:\ElasticSearch_6_2_4_node1\config\roles.yml]
[2018-06-14T11:49:01,879][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/13824] [Main.cc@109] controller (64 bit): Version 6.3.0 (Build 0f0a34c67965d7) Copyright (c) 2018 Elasticsearch BV
[2018-06-14T11:49:02,255][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2018-06-14T11:49:02,784][INFO ][o.e.d.DiscoveryModule    ] [es-node1] using discovery type [zen]
[2018-06-14T11:49:03,622][INFO ][o.e.n.Node               ] [es-node1] initialized
[2018-06-14T11:49:03,622][INFO ][o.e.n.Node               ] [es-node1] starting ...
[2018-06-14T11:49:03,871][INFO ][o.e.t.TransportService   ] [es-node1] publish_address {10.12.129.10:9300}, bound_addresses {10.12.129.10:9300}
[2018-06-14T11:49:03,940][INFO ][o.e.b.BootstrapChecks    ] [es-node1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-06-14T11:49:07,220][INFO ][o.e.c.s.ClusterApplierService] [es-node1] detected_master {es-node3}{b1rYtuBgSRCExuxmJ-Vv4Q}{_WS0fglQQCCRVB5gSHngVQ}{10.12.129.20}{10.12.129.20:9301}{ml.machine_memory=34359132160, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}, added {{es-node2}{wZJcS1YFTUOaD_H-xj4oEA}{DLDOBNd8SQSwRqAafEqzcw}{10.12.129.11}{10.12.129.11:9300}{ml.machine_memory=51539001344, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},{es-node3}{b1rYtuBgSRCExuxmJ-Vv4Q}{_WS0fglQQCCRVB5gSHngVQ}{10.12.129.20}{10.12.129.20:9301}{ml.machine_memory=34359132160, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true},}, reason: apply cluster state (from master [master {es-node3}{b1rYtuBgSRCExuxmJ-Vv4Q}{_WS0fglQQCCRVB5gSHngVQ}{10.12.129.20}{10.12.129.20:9301}{ml.machine_memory=34359132160, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true} committed version [44]])
[2018-06-14T11:49:07,804][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [es-node1] Failed to clear cache for realms [[]]
[2018-06-14T11:49:07,806][INFO ][o.e.x.s.a.TokenService   ] [es-node1] refresh keys
[2018-06-14T11:49:08,133][INFO ][o.e.x.s.a.TokenService   ] [es-node1] refreshed keys
[2018-06-14T11:49:08,195][INFO ][o.e.l.LicenseService     ] [es-node1] license [ee858710-5912-470f-bdeb-50ca164ae6dc] mode [basic] - valid
[2018-06-14T11:49:08,225][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [es-node1] publish_address {10.12.129.10:9205}, bound_addresses {10.12.129.10:9205}
[2018-06-14T11:49:08,226][INFO ][o.e.n.Node               ] [es-node1] started
[2018-06-14T11:49:08,715][INFO ][o.e.x.w.WatcherService   ] [es-node1] paused watch execution, reason [new local watcher shard allocation ids], cancelled [0] queued tasks

Since 6.3, X-Pack monitoring must be explicitly enabled. When spinning up a fresh cluster, you may see a dialog in Kibana that lets you turn it on.

On existing clusters, you can (re-)enable monitoring with a persistent setting:

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
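Once applied, you can verify the setting is in place (console syntax again):

GET _cluster/settings

Note that a persistent setting survives full cluster restarts, unlike a transient one. If you prefer static configuration, the same xpack.monitoring.collection.enabled key should also work in elasticsearch.yml on each node, though the cluster setting above takes precedence.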

Ahhh, you are right.

The "monitoring is off" message only shows up when there is no data to display in the monitoring tab.

If there is data, it gets displayed even while the monitoring feature is off:

[screenshot]
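A quick way to check whether any monitoring data exists at all is to list the monitoring indices, which in 6.x follow the .monitoring-* naming pattern (console syntax):

GET _cat/indices/.monitoring-*?v

If this returns nothing while collection is enabled, the collectors are not writing any data.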

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.