It is still not working. Here is what I did, following this guide step by step:
- Set up the Elasticsearch cluster you want to use as the monitoring cluster
Done. I have set up an Elasticsearch container for monitoring in our Docker cluster. These are its settings:
node.name: "monitoring"
http.port: 9299
transport.tcp.port: 9399
cluster.name: "monitoring"
cluster.remote.connect: false
discovery.zen.minimum_master_nodes: 1
network.publish_host: "elastic-monitoring.mydomain.com"
node.master: true
node.data: true
node.ingest: true
xpack.monitoring.collection.enabled: false
1.a. (Optional) Verify that the collection of monitoring data is disabled on the monitoring cluster
Check; see the last line of the config above.
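For reference, this can also be double-checked against the monitoring cluster itself (host and port as configured above; just a sketch of the settings API call):

```shell
# Query the monitoring cluster's settings; with
# xpack.monitoring.collection.enabled set to false in elasticsearch.yml,
# the defaults should show it as false and it should not be overridden
# in the persistent/transient sections.
curl -s "http://elastic-monitoring.mydomain.com:9299/_cluster/settings?include_defaults=true&filter_path=**.xpack.monitoring.collection.enabled"
```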
Configure your production cluster to collect data and send it to the monitoring cluster
We use Metricbeat for that. Here we go:
- Enable the collection of monitoring data
Our production cluster settings now look like this:
{
  "persistent": {
    "xpack": {
      "monitoring": {
        "collection": {
          "enabled": "true"
        }
      }
    }
  },
  "transient": {}
}
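That state corresponds to an update along these lines (a sketch; run against one of the production nodes):

```shell
# Enable the collection of monitoring data on the production cluster.
# Persistent, so it survives a cluster restart.
curl -s -X PUT "http://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"xpack.monitoring.collection.enabled": true}}'
```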
- Install Metricbeat on each Elasticsearch node in the production cluster.
- Enable the Elasticsearch module in Metricbeat on each Elasticsearch node.
- Configure the Elasticsearch module in Metricbeat.
Installed and configured on the two data nodes of the production cluster. /etc/metricbeat/modules.d/elasticsearch.yml looks like this:
- module: elasticsearch
  metricsets:
    - ccr
    - cluster_stats
    - index
    - index_recovery
    - index_summary
    - ml_job
    - node_stats
    - shard
  period: 10s
  hosts:
    - localhost:9200
  xpack.enabled: true
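For completeness, the module was enabled with the standard Metricbeat command on each node:

```shell
# Enable the Elasticsearch module (renames modules.d/elasticsearch.yml.disabled
# to modules.d/elasticsearch.yml), then verify it is listed as enabled.
metricbeat modules enable elasticsearch
metricbeat modules list
```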
- Identify where to send the monitoring data.
Makes perfect sense. This is the /etc/metricbeat/metricbeat.yml:
metricbeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  reload.period: 10s

name: mynode

logging.files.permissions: 420

output.elasticsearch:
  hosts:
    - http://elastic-monitoring.mydomain.com:9299
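To rule out configuration and connectivity problems, Metricbeat's own self-test commands can be run on each node (sketch; both should succeed):

```shell
# Validate the configuration file, then check that the configured
# Elasticsearch output is actually reachable from this node.
metricbeat test config -c /etc/metricbeat/metricbeat.yml
metricbeat test output -c /etc/metricbeat/metricbeat.yml
```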
- Start Metricbeat on each node.
Done. Everything seems to work. Log says
Connection to backoff(elasticsearch(http://elastic-monitoring.mydomain.com:9299)) established
Data is getting shipped smoothly. Running
curl elastic-monitoring.mydomain.com:9299/_cat/indices?v
returns:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-6-mb-2019.12.09 uVqajPYoTGOltsOJslZNaw 1 0 639 0 1mb 1mb
green open .kibana_1 d0Cxg_HJTKmO-0wVo5j79w 1 0 4 0 17.2kb 17.2kb
green open .kibana_task_manager LzVtyQ_GQuSvIaoWKvXpqQ 1 0 2 0 12.5kb 12.5kb
- Disable the default collection of Elasticsearch monitoring metrics.
Did that. Our production cluster settings now look like this:
{
  "persistent": {
    "xpack": {
      "monitoring": {
        "elasticsearch": {
          "collection": {
            "enabled": "false"
          }
        },
        "collection": {
          "enabled": "true"
        }
      }
    }
  },
  "transient": {}
}
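That is the result of an update like this one (sketch, against a production node):

```shell
# Disable the default (legacy) collection of Elasticsearch monitoring
# metrics, now that Metricbeat ships them instead.
curl -s -X PUT "http://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"xpack.monitoring.elasticsearch.collection.enabled": false}}'
```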
- View the monitoring data in Kibana.
For that I have deployed a separate Kibana instance in the same Docker cluster with the following settings:
server.name: "monitoring"
server.port: 5699
elasticsearch.hosts: "http://elastic-monitoring.mydomain.com:9299"
- Open Kibana in your web browser.
- In the side navigation, click Monitoring.
If you are using a separate monitoring cluster, you do not need to turn on data collection. The dashboards appear when there is data in the monitoring cluster.
Doing so only brings up the annoying screen offering to turn on monitoring. Actually turning it on simply leads to monitoring the monitoring cluster itself.
Please help me find the mistake.
Best regards