Metricbeat data not loading into ES indices

I ended up here after several hours of pain.

I set up an Elasticsearch / Kibana installation (no Logstash) to monitor RAM, CPU, and so on, on an Ubuntu VM.

I installed Metricbeat, but I cannot get the Metricbeat index created in Elasticsearch.

I set up the elastic user, thinking it might be a permission issue, but that did not change anything.

Here is the result of curl localhost:9200/_cat/indices?v:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana MWFHgWW1QKqks2zGbaHEjg 1 1 1 0 3.1kb 3.1kb
green open cowrie Jp1UO0StTkWO25YMKrXwPA 1 0 0 0 160b 160b
yellow open test NIKJEtHIRUOEQPzSScKrjw 5 1 0 0 800b 800b
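
For reference, once Metricbeat is shipping data you would expect an index whose name starts with metricbeat- to show up in this list (the exact name depends on the Beats version, for example something like metricbeat-2018.01.15 on 5.x). Assuming Elasticsearch is reachable on localhost:9200, you can check just for those indices with:

curl 'localhost:9200/_cat/indices/metricbeat-*?v'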

Here is the metricbeat.yml:
###################### Metricbeat Configuration Example #######################

# This file is an example configuration file highlighting only the most common
# options. The metricbeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/metricbeat/index.html

#==========================  Modules configuration ============================
metricbeat.modules:

#------------------------------- System Module -------------------------------
- module: system
  metricsets:
# CPU stats
- cpu

# System Load stats
- load

# Per CPU core stats
- core

# IO stats
- diskio

# Per filesystem stats
- filesystem

# File system summary stats
- fsstat

# Memory stats
- memory

# Network stats
- network

# Per process stats
- process

# Sockets (linux only)
- socket
  enabled: true
  period: 10s
  processes: ['.*']

- module: apache
 metricsets: ["status"]
 enabled: true
 period: 1s
 hosts: ["http://127.0.0.1"]

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["127.0.0.1:9200"]
  enabled: true

setup.kibana:
  host: "localhost:5601"

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "elasticsearch"
  password: "test"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

setup.dashboards.enabled: true

The elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: honeymap
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: honeynode
#note.master: false
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

If anybody has an idea...

What do the metricbeat logs show?
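
As a side note, on a deb/rpm install the Metricbeat logs usually end up under /var/log/metricbeat/. If nothing is written there, a quick way to see what is happening is to run the beat in the foreground with console logging. A minimal sketch, assuming the deb layout with the config in /etc/metricbeat:

sudo metricbeat -e -c /etc/metricbeat/metricbeat.yml -d "publish"

The -e flag sends the logs to stderr and -d "publish" enables the publish debug selector, so any error connecting to Elasticsearch shows up immediately.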

YAML is indentation-sensitive, and it looks to me like your Metricbeat modules section may not be correct.
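
For comparison, a correctly indented system module block looks roughly like this (a minimal sketch based on the default metricbeat.yml, with the metricset list trimmed):

metricbeat.modules:
- module: system
  metricsets:
    - cpu
    - load
    - memory
    - network
    - process
  enabled: true
  period: 10s
  processes: ['.*']

The metricset entries have to sit under the metricsets: key, and enabled, period, and processes have to line up with module, which is what seems to have gone sideways in the pasted config.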

There are no logs... :frowning:
I did not find any in /etc/metricbeat, /usr/share/metricbeat, or /var/log.

I hope it is that simple.
I will try with a fresh file.
By the way, is there any issue with the host being localhost rather than 127.0.0.1?

I removed Metricbeat, rebooted, and installed Metricbeat again.
When I list the indices in Elasticsearch, still nothing.
I tried stopping/restarting each service, but nothing changed.

Have you tested your config file? If so, what is the result?
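
For what it's worth, Metricbeat can check the config itself: on 6.x that is metricbeat test config (and metricbeat test output for the Elasticsearch connection), while on 5.x the equivalent is the -configtest flag. A minimal sketch, assuming the deb layout:

sudo metricbeat test config -c /etc/metricbeat/metricbeat.yml
sudo metricbeat -configtest -c /etc/metricbeat/metricbeat.yml

When the YAML is malformed, either form fails with an error message that typically points at the offending line.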

Perfect!
The error was in the metricbeat.yml file.
I must have had an extra space.
I removed the apache section and it now works for the system module.
Next, I will try the apache module!
Thanks!
