Internal metric collection ends up in "Standalone Cluster" regardless of cluster_uuid setting

Hi,

ES, APM version: 7.2.1
I've seen this before with Filebeat, but it wasn't important at the time; now that I've added an APM server to the cluster, it's frustrating. ES, Kibana, and Logstash are correctly grouped, so I figured this would be a Beats issue.

I've set up monitoring in apm-server.yml:

apm-server:
...

output.kafka:
...

logging:
...

monitoring:
  enabled: true
  cluster_uuid: "PR0d_CluSteR_Uu1d"
  elasticsearch:
    hosts: [ "host1:9200", "host2:9200"]
    username: "custom_apm_user"
    password: "secure_password"

However, in Kibana --> Stack Monitoring, I get a new "Standalone Cluster" entry:


When I checked the .monitoring-beats-7-2020.07.08 index I saw this:

It seems that it doesn't pass the cluster_uuid information at all :frowning:

Is there any workaround without adding any intermediate app (e.g., Logstash)?

Thank you!

Could you please share your Filebeat configuration formatted using </> and maybe debug logs?

Hello Noémi,

Filebeat isn't part of this equation; the APM server sends its internal monitoring data directly. If you're referring to this:

I've seen this before with Filebeat but it wasn't important at the time

That was almost a year ago, for a development cluster (same version though, 7.2.x), and I don't have it anymore.

I can't lower the logging level from info to debug, as that would flood the logs (ES + APM). If there's something specific that may help, I'll look into it.

This is the full configuration in apm-server.yml:

apm-server:
  host: ":8200"

output.kafka:
  hosts: [...]

  topic: "..."
  partition.round_robin:
    reachable_only: true

  ssl.certificate_authorities: ["..."]
  username: "..."
  password: "..."

  required_acks: -1
  compression: snappy
  max_message_bytes: 1000000

logging.level: info
logging.metrics.enabled: false
logging.to_files: true
logging.to_syslog: false
logging.files:
  path: /var/log/apm-server
  name: apm-server
  keepfiles: 7

monitoring:
  enabled: true
  cluster_uuid: "..."
  elasticsearch:
    hosts: [ "..."]
    username: "..."
    password: "..."
    ssl.certificate_authorities: ["..."]
    ssl.certificate: "..."
    ssl.key: "..."

Hmm. I don't think monitoring.cluster_uuid is available in 7.2.x.

Per https://github.com/elastic/beats/pull/13182, it is first available in 7.3.2.

Aww.... Thank you @chrisronline!

Is there a way to avoid the issue above with the current setup, or do I need to wait until the cluster is upgraded?

I have not tested this but you could create a custom Ingest pipeline on your monitoring Elasticsearch cluster. This pipeline would use the set processor to set the cluster_uuid to PR0d_CluSteR_Uu1d. Then you'd reference this pipeline name in the output.elasticsearch.pipeline setting in your apm-server.yml.

There are a couple of options by which you can load your custom Ingest pipeline into Elasticsearch; you can read about them here.
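As a rough, untested sketch (the pipeline name add_cluster_uuid is just a placeholder I'm making up here), it could look something like this in the Kibana Dev Tools console, using the set processor to stamp the top-level cluster_uuid field on the monitoring documents:

# placeholder pipeline: sets cluster_uuid to the production cluster's UUID
PUT _ingest/pipeline/add_cluster_uuid
{
  "description": "Stamp the production cluster_uuid onto APM Server monitoring docs",
  "processors": [
    {
      "set": {
        "field": "cluster_uuid",
        "value": "PR0d_CluSteR_Uu1d"
      }
    }
  ]
}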

Hope that helps,

Shaunak


I'm sorry if I'm mistaken, but according to the monitoring configuration there's no output.elasticsearch.pipeline option. I send the transactions to Kafka and don't intend to use ES as the output directly.

The ES monitoring cluster needs to have at least one ingest node, as the monitoring ES plugin creates a default ingest pipeline. See https://www.elastic.co/guide/en/elasticsearch/reference/current/collecting-monitoring-data.html
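If you want to double-check that on the monitoring cluster, something along these lines should do it (the xpack_monitoring* pipeline names are what I'd expect the exporter to create, going from memory):

# list node roles; ingest nodes show an "i" in node.role
GET _cat/nodes?v&h=name,node.role

# the monitoring exporter's own pipelines, if they exist
GET _ingest/pipeline/xpack_monitoring*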

You can create an ingest pipeline on this cluster that can format the documents before indexing. It's not something you configure within Beats, but rather something you configure on the ES monitoring cluster directly.

I'm sorry, but how is that relevant to my issue?

The problem here is that your .monitoring-beats-* indices do not contain a valid cluster_uuid field because your output is not Elasticsearch (which is a completely valid setup). We fixed this bug in 7.3.2 and beyond, but you aren't able to upgrade to that version to get the fix. The fix involves manually setting the cluster_uuid within the Beat yml file.
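If you want to confirm that, a quick query on the monitoring cluster that pulls just the cluster_uuid field from a few of those documents should show it missing or empty, e.g.:

# grab a handful of beats monitoring docs, returning only cluster_uuid
GET .monitoring-beats-7-*/_search
{
  "size": 3,
  "_source": ["cluster_uuid"]
}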

As a way to fix the issue without needing to upgrade, you can use an ingest pipeline on the monitoring cluster to manually add the proper cluster_uuid to the .monitoring-beats-* documents as they are indexed. Ingest pipelines allow you to perform various actions before a document is indexed.

It does mean you will need to hard-code the proper cluster_uuid in the pipeline itself, but it should be a short-term fix that you can safely remove once you are able to upgrade to a version of the stack with the fix.

Does that help?

Thanks @chrisronline !
I understand that now. I was confused by output.elasticsearch.pipeline from @shaunak. I haven't used ES pipelines myself and couldn't find any information on how to attach the .monitoring-beats-* indices to a pipeline I create. The best I found (using the links from Shaunak) still points to the output section of apm-server.yml. :man_shrugging:

Sorry @YvorL, I said output.elasticsearch.pipeline :man_facepalming: but I was thinking of monitoring.elasticsearch.pipeline. There are many settings under output.elasticsearch.* that are also available under monitoring.elasticsearch.*; however, pipeline is unfortunately not one of them. So setting monitoring.elasticsearch.pipeline would not work either :man_facepalming: :man_facepalming:.

I'm thinking of another workaround.


I appreciate it, but I'm not sure it's worth it :smiley:
It's a nuisance for sure, but the data does end up in Kibana and the graphs are available. I'll upgrade the cluster in a couple of months and this will go away.

Thank you for your input!
