I'm using Metricbeat in my Docker Swarm and have managed to get the system module pushing CPU and memory metrics to Elasticsearch. I can then see the metrics in the sample dashboard in Kibana. However, I can't get the Docker module working: I have enabled it, but no metrics are shipped and nothing shows up in Kibana on the 'Metricbeat Docker' dashboard. I'm using the docker.elastic.co/beats/metricbeat:5.6.2 image with Docker 17.06-CE.
This is the Metricbeat configuration I'm using (deployed as a service in my docker-stack.yml):
#============================ Config Reloading ===============================

# Config reloading allows to dynamically load modules. Each file which is
# monitored must contain one or multiple modules as a list.
metricbeat.config.modules:
  # Glob pattern for configuration reloading
  path: ${path.config}/conf.d/*.yml
  # Period on which files under path should be checked for changes
  reload.period: 10s
  # Set to true to enable config reloading
  reload.enabled: false

#========================== Modules configuration ============================
metricbeat.modules:

#------------------------------- System Module -------------------------------
- module: system
  metricsets:
    # CPU stats
    - cpu
    # System Load stats
    - load
    # Per CPU core stats
    - core
    # IO stats
    - diskio
    # Per filesystem stats
    - filesystem
    # File system summary stats
    - fsstat
    # Memory stats
    - memory
    # Network stats
    - network
    # Per process stats
    - process
    # Sockets and connection info (linux only)
    #- socket
  enabled: true
  period: 10s
  processes: ['.*']

#------------------------------- Docker Module -------------------------------
- module: docker
  metricsets: ["container", "cpu", "diskio", "healthcheck", "info", "memory", "network"]
  hosts: ["unix:///var/run/docker.sock"]
  enabled: true
  period: 10s

#-------------------------------- Redis Module -------------------------------
- module: redis
  metricsets: ["info", "keyspace"]
  enabled: true
  period: 10s
  # Redis hosts
  hosts: ["redis"]
  # Timeout after which a metricset should return an error.
  # Timeout defaults to period, as a fetch of a metricset should never
  # take longer than period; otherwise calls can pile up.
  timeout: 1s

#================================ General ======================================

# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
tags: ${MetricbeatTags}

#================================ Outputs ======================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output -------------------------------
output.elasticsearch:
  # Boolean flag to enable or disable the output module.
  enabled: true
  # Array of hosts to connect to.
  # Scheme and port can be left out and will be set to the default (http and 9200).
  # In case you specify an additional path, the scheme is required: http://localhost:9200/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
  hosts: ${MetricbeatHosts}
  # Number of workers per Elasticsearch host.
  worker: 1
  # Optional index name. The default is "metricbeat" plus date
  # and generates [metricbeat-]YYYY.MM.DD keys.
  index: "metricbeat-%{+yyyy.MM.dd}"
  # Configure http request timeout before failing a request to Elasticsearch.
  timeout: 90
  # Path to template file
  template.path: "metricbeat.template.json"
  # Overwrite existing template
  template.overwrite: true

#================================ Logging ======================================
logging.to_files: true
logging.files:
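The file above is the Metricbeat configuration itself; the Swarm service that runs it also has to bind-mount the Docker socket for the docker module's `unix:///var/run/docker.sock` host to be reachable. A minimal sketch of what that service definition might look like (the image tag comes from the question; the rest is an assumption, not my actual stack file):

```yaml
metricbeat:
  image: docker.elastic.co/beats/metricbeat:5.6.2
  volumes:
    # The docker module reads container stats through the Docker socket,
    # so the host socket must be mounted into the container.
    - /var/run/docker.sock:/var/run/docker.sock
  deploy:
    # One instance per node, so every host's containers are covered.
    mode: global
```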
This may be a permissions issue when accessing the Docker socket. Do you see any error messages about that? Something you can try is running the container with user: root and seeing if the problem goes away.
In a nutshell: Metricbeat needs access to the Docker socket to retrieve stats from it, but the Docker image we ship runs as a non-root user by default (to make it more secure). There are several ways to allow Metricbeat to access the Docker socket:
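For example, a sketch of the simplest option in the stack file (the service shape is an assumption based on the question, not the asker's actual file):

```yaml
metricbeat:
  image: docker.elastic.co/beats/metricbeat:5.6.2
  # Run as root so the beat can read /var/run/docker.sock.
  # Alternatively, keep the default user and instead grant it read
  # access to the socket on the host (e.g. via the docker group).
  user: root
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
```

Running as root trades away some of the image's default hardening for simplicity; adjusting group ownership of the socket keeps the non-root user but requires host-side setup on every Swarm node.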