Version 5.4.1 Separate Monitoring Cluster Error: "Monitoring: Error 400 Bad Request: Cannot create property 'type' on string 'green'"


I have a separate monitoring cluster. When updating that cluster to version 5.4.1, I got the following error:
Monitoring: Error 400 Bad Request: Cannot read property 'type' of null

This version does work if I use the local cluster for monitoring.
Main landing page of the monitoring app:

When I try to click on the Nodes section:

This is the log:

My setup used to work fine on older versions of Kibana and the ELK stack.

Hi @luke_smoron

A couple of questions:

My setup used to work fine on older versions of kibana and elk

What version were you using before?
Can you share your setup details?


Version 2.3.5. We also did an initial setup for 5.4.0 (we did not fully test that one since we decided to switch to 5.4.1).

We run our clusters in AWS. The main cluster talks to the monitoring Elasticsearch using the following configuration:
network.bind_host: ec2
discovery.zen.hosts_provider: ec2
discovery.ec2.tag.stage: <name>
xpack.monitoring.exporters.id1.type: http
xpack.monitoring.exporters.id1.host: <IP>:9200

The monitoring cluster is basically a single-node Elasticsearch instance with a mostly out-of-the-box configuration, plus:
xpack.monitoring.enabled: false
We had to disable monitoring on it since the basic X-Pack license only allows monitoring a single cluster. I wonder if this might be the issue. If we enable monitoring on the monitoring cluster, X-Pack is able to show monitoring info for the monitoring cluster, but it refuses to show monitoring for the main cluster due to the license.
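One way to rule out the exporter itself is to check whether monitoring documents are actually arriving on the monitoring cluster. A small sketch, assuming a placeholder address and the 5.x monitoring index naming (.monitoring-es-*); the command is printed rather than executed here, since the address is a placeholder:

```shell
MON="<monitoring-ip>:9200"   # placeholder address for the monitoring cluster

# In 5.x the HTTP exporter creates indices named .monitoring-es-* on the
# receiving cluster; if none show up, the exporter is not shipping data.
# Printed as a dry run, since the address is a placeholder.
echo "curl '$MON/_cat/indices/.monitoring-*?v'"
```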

This is going to sound crazy, but do you happen to have an index named nodes that would have stats in the monitoring data? There is a known bug that will be fixed very soon. Your issue looks like a symptom of that bug.

Yes, there was such an index on the main cluster:

curl <ip>:9200/_cat/indices
green open development_2 bG8hK0a0Sg6iTOpwMGDVww 10 1 0 0 3.1kb 1.5kb
green open development_1 BU-PTAvlQwWrmmqbn6F9rQ 10 1 3330110 451514 69.7gb 34.8gb
green open system_test_feb_21 pXtebrcYSiWyIB83YAUlAQ 10 1 3099439 1181424 53.7gb 26.8gb
green open it_test akd8oXl1Q4yyHlC-0xHw8Q 1 0 15 0 40.5kb 40.5kb
green open nodes qBAOcd-GSV-RyoMAdZsw_Q 5 1 1 0 7.7kb 3.8kb
green open development 1WBrmAUESwqTC8xFDJ2k7g 10 1 451 19 28.1mb 14mb
green open integration_v2 G3rWLkGwTZafEoLPdPTn4A 10 2 75366 11477 4.4gb 1.4gb

curl <ip>:9200/nodes
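For anyone hitting the same bug, the conflicting index can be spotted programmatically from the _cat/indices output. A sketch using a sample of the listing above (normally the input would come straight from the curl call; the third column of _cat/indices is the index name):

```shell
# Sample of the _cat/indices output shown above; normally this would be
# the output of: curl <ip>:9200/_cat/indices
cat_output='green open nodes qBAOcd-GSV-RyoMAdZsw_Q 5 1 1 0 7.7kb 3.8kb
green open development 1WBrmAUESwqTC8xFDJ2k7g 10 1 451 19 28.1mb 14mb'

# Column 3 of _cat/indices is the index name; an index literally called
# "nodes" is the one that triggers the known monitoring bug.
printf '%s\n' "$cat_output" | awk '$3 == "nodes" {print "conflicting index: " $3}'
# prints: conflicting index: nodes
```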

After removing it and cleaning the monitoring cluster, everything started to work fine.
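For reference, the cleanup described above can be sketched as two deletes, one on each cluster. Both addresses are placeholders, and the .monitoring-* pattern assumes the 5.x monitoring index naming; the commands are printed rather than executed here, since they are destructive:

```shell
MAIN="<main-ip>:9200"        # placeholder: main cluster
MON="<monitoring-ip>:9200"   # placeholder: monitoring cluster

# Printed as a dry run, since both addresses are placeholders and the
# deletes are destructive. The .monitoring-* indices are rebuilt
# automatically once the exporter ships fresh data.
echo "curl -XDELETE $MAIN/nodes"
echo "curl -XDELETE '$MON/.monitoring-*'"
```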

Thanks :grinning:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.