Metricbeat writing to .ds-metricbeat instead of .ds-.monitoring

I have a new elastic cluster on the basic license. I have deployed metricbeat to it and it is pulling data from the new cluster and successfully outputting logs to our monitoring cluster.

The new cluster is not showing up in the "Stack Monitoring" section of the monitoring cluster's kibana. I believe this is because metricbeat is outputting to indices like .ds-metricbeat-8.12.1-2024.02.21-000020 instead of .ds-.monitoring-es-8-mb-2024.02.21-006824.

What would be causing that and how do I fix it?

You'll have to share your metricbeat.yml

And the module elasticsearch.yml as well, the one in the metricbeat modules.d directory

I am using modules.d/elasticsearch-xpack.yml, should I be using modules.d/elasticsearch.yml instead?

I am not using filebeat. Do I have to use filebeat as well as metricbeat?

metricbeat.yml:

###################### Metricbeat Configuration Example #######################

# This file is an example configuration file highlighting only the most common
# options. The metricbeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

# =========================== Modules configuration ============================

metricbeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
# setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Metricbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
setup.ilm.enabled: true
setup.ilm.rollover_alias: "metricbeat-%{[agent.version]}"
setup.ilm.pattern: "000001"

output.elasticsearch:
  hosts: ["https://xxx:9200"]
  username: "metricbeat_monitoring_writer"
  password: "${metricbeat_monitoring_password}"
  ssl:
    verification_mode: "certificate"

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/metricbeat
  name: metricbeat
  keepfiles: 7
  permissions: 0644

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Metricbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Metricbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the metricbeat.
#instrumentation:
    # Set to true to enable instrumentation of metricbeat.
    #enabled: false

    # Environment in which metricbeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

elasticsearch-xpack.yml:

# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/7.10/metricbeat-module-elasticsearch.html

- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["https://127.0.0.1:9200"]
  scope: node
  username: "remote_monitoring_user"
  password: "${remote_monitoring_password}"  
  ssl:
    enabled: true
    verification_mode: "certificate"

Are there any logs from metricbeat?

Do you actually see the metrics in the wrong index / data stream? Or is the monitoring data not flowing at all?
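
To check, something like this in Kibana Dev Tools against the monitoring cluster should show both data streams and their backing indices (names taken from your first post):

GET _data_stream/metricbeat-*
GET _data_stream/.monitoring-es-8-mb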

Ohh I see it now....

Take out the setup.ilm.* lines. That is overriding the proper index name, I believe. You don't need that; not sure why you have it in there in the first place. It's all handled automatically for you in 8.x.
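
That is, these three lines from your metricbeat.yml, commented out:

#setup.ilm.enabled: true
#setup.ilm.rollover_alias: "metricbeat-%{[agent.version]}"
#setup.ilm.pattern: "000001"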

Put those lines back to the default.


Ahh yes!! I will try this and report back. Thank you!

I took those lines out and restarted metricbeat. It is still outputting to indices like .ds-metricbeat-8.12.1-2024.02.21-000020.

Quick question, do you have other clusters sending data to your monitoring cluster?

Are there any logs from metricbeat?

There are a bunch of info level logs, but no warn or error logs.

Do you actually see the metrics in the wrong index / data stream? Or is the monitoring data not flowing at all?

Monitoring data is flowing, it's just ending up in the .ds-metricbeat data stream instead of the .ds-.monitoring data stream.

Take out the setup.ilm.* lines...

I've taken those out, but do I need to delete anything from Kibana since I had those lines in there originally?

Also, did you press the Setup Monitoring with Metricbeat button?

Have you set up Self Monitoring at some point?

I literally just set up a new 8.12.1 cluster (no security) just for a quick test.

Used the default module config:

# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/metricbeat/main/metricbeat-module-elasticsearch.html

- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["http://localhost:9200"]
  #username: "user"
  #password: "secret"
  #api_key: "foo:bar"

Pressed the Setup with Metricbeat button when I went to Monitoring.

Everything is there...

green open .ds-.monitoring-es-8-mb-2024.02.21-000001 2e3Eby4wQG-VSQGOGuzsDA 1 0 722 0 1.8mb 1.8mb 1.8mb

Yes, two other clusters. Those are both showing in Stack Monitoring and they are outputting to the .ds-.monitoring-* indices.

The metricbeat agents installed on those clusters are the same version (8.12.1) as the metricbeat agents installed on the new cluster.

Also, did you press the Setup Monitoring with Metricbeat button?

I'm not sure.. where is that button located?

Have you set up Self Monitoring at some point?
No, this is a brand new elastic cluster.

Is the monitoring cluster licensed?

Not sure how this works; monitoring different clusters is a licensed feature, but I would assume that both clusters, the monitoring one and the monitored one, need to have a license.

Is the monitoring cluster licensed?

Yes, the monitoring cluster has the platinum license.

The Monitored cluster has the Basic license.

You probably pressed it long ago in the monitoring cluster... :slight_smile:

Did you run setup after you cleaned up the yml?

Perhaps clean up the data registry for metricbeat.
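
Something like this, assuming a DEB/RPM install where the data path defaults to /var/lib/metricbeat (verify path.data for your install first):

sudo systemctl stop metricbeat
sudo rm -rf /var/lib/metricbeat/*   # clears Metricbeat's local state; path is an assumption
sudo systemctl start metricbeat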

Also, why scope: node? (Are you going to run it on every node?)

Did you run setup after you cleaned up the yml?

no, I will do that now.

Perhaps clean up the data registry for metricbeat.

What does this mean?

Also, why scope: node? (Are you going to run it on every node?)

Yes, planning to run metricbeat on every node.

I ran /usr/bin/metricbeat setup and here is the output:

Overwriting lifecycle policy is disabled. Set `setup.ilm.overwrite: true` to overwrite.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://localhost:5601/api/status fails: fail to execute the HTTP GET request: Get "http://localhost:5601/api/status": dial tcp 127.0.0.1:5601: connect: connection refused (status=0). Response:

Kibana requires authentication...
And perhaps https...
To run setup, the Kibana endpoint needs to be properly configured.
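
For example, in metricbeat.yml (the host below is a placeholder, point it at the monitoring cluster's Kibana; credentials fall back to the ones from output.elasticsearch unless you set them explicitly):

setup.kibana:
  host: "https://your-kibana-host:5601"   # placeholder hostname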

Then run

/usr/bin/metricbeat setup -e

I had to specify setup.kibana.host and also elevate permissions for the output.elasticsearch.username user, and then the setup command ran successfully. Although it is still returning:

Overwriting lifecycle policy is disabled. Set `setup.ilm.overwrite: true` to overwrite.

You can set

setup.ilm.overwrite: true

run setup, then comment it out.
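
That is, roughly:

# temporarily add to metricbeat.yml:
setup.ilm.overwrite: true

# re-run setup (same command as before):
/usr/bin/metricbeat setup -e

# then comment setup.ilm.overwrite back out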

But I do not think that is the issue...

That is a problem