It looks like you have instances that aren't connected to an Elasticsearch cluster


Per the above screenshot, it appears that there are two clusters, when in reality there is only one.

The "Standalone Cluster" is fed by a separate set of Metricbeat instances than the "elasticsearch" cluster, and what's even odder is that some of the instances from the "Standalone Cluster" Metricbeats made it into both clusters, even though there is no difference between those Metricbeat configurations.

Please advise!

Hi,

Have you already tried setting monitoring.cluster_uuid in the Beats config?
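Something along these lines in metricbeat.yml — a sketch; the UUID value and hosts below are placeholders, substitute your own cluster's UUID (visible in the GET / response as cluster_uuid):

```yaml
# metricbeat.yml — sketch; values below are placeholders
monitoring:
  enabled: true
  # UUID of the Elasticsearch cluster this Beat's monitoring data
  # should be associated with (from GET / on that cluster)
  cluster_uuid: "YOUR-CLUSTER-UUID"
  elasticsearch:
    hosts: ["https://your-monitoring-es:9200"]  # assumed monitoring endpoint
```

With this set on every Beat, its monitoring documents should be attributed to that cluster instead of "Standalone Cluster".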

Best regards
Wolfram

I did try setting monitoring.cluster_uuid; unfortunately, I'm still getting the same behavior (two clusters: the first is the actual Elasticsearch cluster, and the second is "Standalone Cluster").

The monitoring data is stored in the .es-monitoring index (not sure about the data stream name for it). After configuring the UUID, you need to wait until the last monitoring index containing both clusters' information is archived and deleted.

GET /_cat/indices/.es-monitoring

{
  "error" : {
    "root_cause" : [
      {
        "type" : "index_not_found_exception",
        "reason" : "no such index [.es-monitoring]",
        "index_uuid" : "_na_",
        "resource.type" : "index_or_alias",
        "resource.id" : ".es-monitoring",
        "index" : ".es-monitoring"
      }
    ],
    "type" : "index_not_found_exception",
    "reason" : "no such index [.es-monitoring]",
    "index_uuid" : "_na_",
    "resource.type" : "index_or_alias",
    "resource.id" : ".es-monitoring",
    "index" : ".es-monitoring"
  },
  "status" : 404
}

yet

GET /_cat/indices/.monitoring-*

there are a bunch of indices, which I have deleted in the past to clear everything that had been stored previously; however, that did not do the trick, as I still get prompted to select a cluster every time I go into the Monitoring app in Kibana.
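For the record, the cleanup I did was roughly along these lines — a sketch only, since deleting monitoring indices is destructive and recent Elasticsearch versions may restrict wildcard deletes of dot-prefixed indices:

```
# Dev Tools console — sketch; list first, then delete deliberately
GET /_cat/indices/.monitoring-*?v

DELETE /.monitoring-*
```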

I even verified that the cluster UUID exists in the actual events, and there is no value other than my cluster_uuid.
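For anyone wanting to run the same check, this is roughly the query I mean — a sketch that assumes the monitoring documents carry a top-level cluster_uuid field:

```
GET /.monitoring-*/_search
{
  "size": 0,
  "aggs": {
    "clusters": {
      "terms": { "field": "cluster_uuid" }
    }
  }
}
```

If more than one bucket comes back, something is still writing monitoring data without (or with a different) cluster_uuid.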

Possibly also worth noting:

Not all of the Logstash instances and pipelines that made it into the cluster specified by monitoring.cluster_uuid also made it into "Standalone Cluster". I find that odd, as all Metricbeat instances are running exactly the same configuration.

I've had this for years, across various versions, and never got it fixed. Since everything shows up in the proper cluster, mine couldn't have been missing UUIDs. I just ignored it...

I guess I could ignore it, but...

... call me :crazy_face: but it's one extra click for moi :wink: and it's an old :beetle: for Elastic to squash :triumph:, especially now that it's been there for years, across various versions.