I'm running Kibana/Elasticsearch/Metricbeat 8.4.1 in a Docker configuration.
Although I'm seeing the monitoring indices being created (.ds-metricbeat-8.4.1-2022.09.14-000001, .ds-.monitoring-kibana-8-mb-2022.09.14-000001 and .ds-.monitoring-es-8-mb-2022.09.14-000001) and the documents are browsable via the Discover view, the Stack Monitoring view keeps saying that there is no monitoring data.
Any pointers, anybody?
Same here using Elasticsearch and Kibana 8.4.1 + Metricbeat 8.4.1. I have noticed the new index looks like this: ".ds-metricbeat-8.4.1-2022.09.14-000001".
Does the dot in front of the index name mean that it is hidden? I can't see it in Kibana anyway.
I have searched and have not found a way to show the index in Kibana, please help.
Hi @P-T-I , @mikkel1_cu - Welcome to our community
I am not aware of any issues in 8.4.1 regarding Stack Monitoring... so let's see what we can find out.
If there is data in the .ds-.monitoring-* backing indices (and yes, Metricbeat 8.x stores data in data streams), then you should be able to see the monitoring data in Stack Monitoring.
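On the side question about the dot prefix: the .ds-* backing indices of a data stream are hidden by default, which is why they don't show up in Kibana unless you include hidden indices (Stack Management > Index Management has an option to include them). A quick way to confirm they exist and hold documents (a minimal sketch; adjust the pattern to your index names):
GET _cat/indices/.ds-.monitoring-*?v&expand_wildcards=all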
Can you run the below APIs and share the results with us:
GET _cat/templates
GET _cat/shards/.monitoring-*?v
GET _cat/templates:
.monitoring-kibana [.monitoring-kibana-7-*] 0 8010099
.monitoring-es [.monitoring-es-7-*] 0 8010099
.monitoring-beats [.monitoring-beats-7-*] 0 8010099
.monitoring-alerts-7 [.monitoring-alerts-7] 0 8010099
.monitoring-logstash [.monitoring-logstash-7-*] 0 8010099
synthetics-browser.screenshot [synthetics-browser.screenshot-*] 200 [synthetics-browser.screenshot@package, synthetics-browser.screenshot@custom, .fleet_globals-1, .fleet_agent_id_verification-1]
.monitoring-ent-search-mb [.monitoring-ent-search-8-*] 0 8000102 []
synthetics-http [synthetics-http-*] 200 [synthetics-http@package, synthetics-http@custom, .fleet_globals-1, .fleet_agent_id_verification-1]
.kibana-event-log-8.4.1-template [.kibana-event-log-8.4.1-*] 0 []
synthetics-tcp [synthetics-tcp-*] 200 [synthetics-tcp@package, synthetics-tcp@custom, .fleet_globals-1, .fleet_agent_id_verification-1]
.monitoring-es-mb [.monitoring-es-8-*] 0 8000102 []
metricbeat-8.4.1 [metricbeat-8.4.1] 150 []
synthetics-browser [synthetics-browser-*] 200 [synthetics-browser@package, synthetics-browser@custom, .fleet_globals-1, .fleet_agent_id_verification-1]
.slm-history [.slm-history-5*] 2147483647 5 []
logs [logs-*-*] 100 2 [logs-mappings, data-streams-mappings, logs-settings]
.watch-history-16 [.watcher-history-16*] 2147483647 16 []
.monitoring-beats-mb [.monitoring-beats-8-*] 0 8000102 []
.monitoring-kibana-mb [.monitoring-kibana-8-*] 0 8000102 []
synthetics [synthetics-*-*] 100 2 [synthetics-mappings, data-streams-mappings, synthetics-settings]
synthetics-browser.network [synthetics-browser.network-*] 200 [synthetics-browser.network@package, synthetics-browser.network@custom, .fleet_globals-1, .fleet_agent_id_verification-1]
ilm-history [ilm-history-5*] 2147483647 5 []
.ml-state [.ml-state*] 2147483647 8040199 []
.monitoring-logstash-mb [.monitoring-logstash-8-*] 0 8000102 []
.ml-anomalies- [.ml-anomalies-*] 2147483647 8040199 []
metrics [metrics-*-*] 100 2 [metrics-mappings, data-streams-mappings, metrics-settings]
.ml-notifications-000002 [.ml-notifications-000002] 2147483647 8040199 []
.deprecation-indexing-template [.logs-deprecation.*] 1000 1 [.deprecation-indexing-mappings, .deprecation-indexing-settings]
.ml-stats [.ml-stats-*] 2147483647 8040199 []
synthetics-icmp [synthetics-icmp-*] 200 [synthetics-icmp@package, synthetics-icmp@custom, .fleet_globals-1, .fleet_agent_id_verification-1]
GET _cat/shards/.monitoring-*?v :
.ds-.monitoring-kibana-8-mb-2022.09.14-000001 0 p STARTED 50550 18.7mb 172.25.1.8 es04
.ds-.monitoring-kibana-8-mb-2022.09.14-000001 0 r STARTED 42342 13.9mb 172.25.1.7 es03
.ds-.monitoring-es-8-mb-2022.09.14-000001 0 r STARTED 10111 10.1mb 172.25.1.7 es03
.ds-.monitoring-es-8-mb-2022.09.14-000001 0 p STARTED 10111 9.2mb 172.25.1.4 es02
Thank you @P-T-I, I am not seeing anything unusual in the list of templates - so we can assume that the monitoring data is correctly mapped.
- Did you try to change the time frame in the time picker (last 30 min, 1 h, etc.)?
- Does this search return any hits?
POST *:.monitoring-es-*,.monitoring-es-*/_search
{
  "size": 10,
  "query": {
    "bool": {
      "filter": [
        {
          "bool": {
            "should": [
              {
                "term": {
                  "type": "cluster_stats"
                }
              },
              {
                "term": {
                  "metricset.name": "cluster_stats"
                }
              }
            ]
          }
        },
        {
          "range": {
            "timestamp": {
              "format": "epoch_millis",
              "gte": "now-15m",
              "lte": "now"
            }
          }
        }
      ]
    }
  },
  "collapse": {
    "field": "cluster_uuid"
  },
  "sort": {
    "timestamp": {
      "order": "desc",
      "unmapped_type": "long"
    }
  }
}
- Can you share a screenshot of the stack monitoring page (with the browser URL)?
- Yes I did; it makes no difference.
- Result:
{
  "took": 56,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 0,
      "relation": "eq"
    },
    "max_score": null,
    "hits": []
  }
}
So no; it does not...
- Uploaded screenshot
The search request should have returned data if the monitoring data collected by Metricbeat contained the cluster_stats metricset in the last 15 min.
Can I know which module you configured in Metricbeat? (cf. Elasticsearch module | Metricbeat Reference [8.4] | Elastic)
Are you using the elasticsearch module or the elasticsearch-xpack module? Perhaps you can share the module configuration, with any sensitive information removed (as you may have understood, we are interested to know which metricsets are configured in the module you are using).
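For context (a sketch based on the 8.x docs, not on your exact setup): the Elasticsearch monitoring data can be collected either via the elasticsearch module with xpack.enabled: true, or via the pre-packaged elasticsearch-xpack module variant, e.g.:
metricbeat modules enable elasticsearch-xpack
# or enable the base module and set xpack.enabled: true in modules.d/elasticsearch.yml
metricbeat modules enable elasticsearch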
@ropc
Here is the configuration used:
metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~

output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:elasticsearch:9200}'
  username: '${ELASTICSEARCH_USERNAME:}'
  password: '${ELASTICSEARCH_PASSWORD:}'
  ssl.verification_mode: certificate
  ssl.certificate_authorities: ["/usr/share/metricbeat/config/certs/ca/ca.crt"]

setup.kibana:
  host: "${HOST_KIBANA}"

setup.dashboards.enabled: true

metricbeat.modules:
  - module: elasticsearch
    xpack.enabled: true
    period: 10s
    ssl.verification_mode: certificate
    ssl.certificate_authorities: ["/usr/share/metricbeat/config/certs/ca/ca.crt"]
    hosts: ["https://es01:9200"]
    protocol: 'https'
    username: '${ELASTICSEARCH_USERNAME:}'
    password: '${ELASTICSEARCH_PASSWORD:}'
  - module: kibana
    metricsets: ["status"]
    period: 10s
    hosts: ["http://kibana:5601"]
    basepath: ""
    enabled: true
    xpack.enabled: true
    username: '${ELASTICSEARCH_USERNAME:}'
    password: '${ELASTICSEARCH_PASSWORD:}'
@P-T-I - if you are using the elasticsearch module, the module configuration file should be located in ${path.config}/modules.d/elasticsearch.yml - perhaps you want to check this? I am wondering if this is causing some sort of problem, since you have also defined the module configuration in metricbeat.yml.
The ${path.config}/modules.d/ directory just holds elasticsearch.yml.disabled.
Ok, fair enough - so there should not be any duplication of configuration. Can you run metricbeat modules list? I just want to confirm the list of enabled/disabled modules.
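Since Metricbeat runs in Docker, something along these lines should do it (a sketch; metricbeat01 is a placeholder for your actual container name):
docker exec -it metricbeat01 metricbeat modules list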
We are getting somewhere, got:
Error initializing beat: error loading config file: config file ("metricbeat.yml") must be owned by the user identifier (uid=0) or root
Never seen that one pop up anywhere; I will follow up on this...
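For reference, this is the standard Beats ownership check on the config file. The usual fix (a sketch, assuming metricbeat.yml is bind-mounted into the container from the host) is to make the file root-owned and not group/world-writable, or to disable the strict permission check:
chown root:root metricbeat.yml
chmod go-w metricbeat.yml
# or, as a workaround, start Metricbeat with the check disabled:
metricbeat -e --strict.perms=false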
Right, apparently there were some issues with file permissions on the metricbeat.yml file; fixed that; here is the list:
Enabled:
system
Disabled:
activemq
aerospike
airflow
apache
aws
awsfargate
azure
beat
beat-xpack
ceph
ceph-mgr
cloudfoundry
cockroachdb
consul
containerd
coredns
couchbase
couchdb
docker
dropwizard
elasticsearch
elasticsearch-xpack
enterprisesearch
enterprisesearch-xpack
envoyproxy
etcd
gcp
golang
graphite
haproxy
http
ibmmq
iis
istio
jolokia
kafka
kibana
kibana-xpack
kubernetes
kvm
linux
logstash
logstash-xpack
memcached
mongodb
mssql
munin
mysql
nats
nginx
openmetrics
oracle
php_fpm
postgresql
prometheus
rabbitmq
redis
redisenterprise
sql
stan
statsd
syncgateway
tomcat
traefik
uwsgi
vsphere
windows
zookeeper
@P-T-I - I did a quick test in the lab using a similar configuration with 8.4.1 and it works on my side.
Can you confirm that there are still documents being ingested in the .ds-.monitoring-es-8-mb-* backing indices?
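One quick way to check (a minimal sketch; run it twice a few minutes apart and compare the totals):
GET .monitoring-es-8-mb/_count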
Just based on what we covered, it is as if you are missing the cluster_stats metricset. Can you enable debug logging in Metricbeat? To configure it, add this to the metricbeat.yml file:
logging.level: debug
logging.selectors: ["*"]
Restart Metricbeat and check if any of these logs appear:
{"log.level":"debug","@timestamp":"2022-09-21T21:32:13.259+0800","log.logger":"module","log.origin":{"file.name":"module/wrapper.go","file.line":191},"message":"Starting metricSetWrapper[module=elasticsearch, name=cluster_stats, host=localhost:9200]","service.name":"metricbeat","ecs.version":"1.6.0"}
{"log.level":"debug","@timestamp":"2022-09-21T21:32:23.343+0800","log.logger":"processors","log.origin":{"file.name":"processing/processors.go","file.line":210},"message":"Publish event: {\n \"@timestamp\": \"2022-09-21T13:32:23.310Z\",\n \"@metadata\": {\n \"beat\": \"metricbeat\",\n \"type\": \"_doc\",\n \"version\": \"8.4.1\",\n \"index\": \".monitoring-es-8-mb\"\n },\n \"metricset\": {\n \"name\": \"cluster_stats\",\n ....}
If such logs related to cluster_stats are present and there are no errors while publishing the events to Elasticsearch, we will need to take a look at the Elasticsearch side to understand why the data is not ingested.
If these logs are missing, then it is a problem in Metricbeat and we will need to check the list of warnings/errors.
@P-T-I - Just to add on to my previous update: if none of the nodes you are monitoring refers to the elected master node, cluster_stats is not collected.
This is somewhat documented in Collecting Elasticsearch monitoring data with Metricbeat | Elasticsearch Guide [8.4] | Elastic:
When Metricbeat is monitoring Elasticsearch with scope: node then you must install a Metricbeat instance for each Elasticsearch node. If you don’t, some metrics will not be collected. Metricbeat with scope: node collects most of the metrics from the elected master of the cluster, so you must scale up all your master-eligible nodes to account for this extra load and you should not use this mode if you have dedicated master nodes.
You need to deploy Metricbeat on each of the master nodes as well.
Right; that's probably what's going on then... I'm only pointing Metricbeat towards the coordinating node via the config... So I need to point it towards the master nodes as well then?
Using the node scope, you must install a Metricbeat instance for each Elasticsearch node (as per the documentation above).
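Alternatively, the same documentation describes scope: cluster, where a single Metricbeat instance collects cluster-wide metrics through one endpoint. A minimal sketch (assuming the endpoint - a coordinating node or load balancer, es01 here as a placeholder - can see the whole cluster):
metricbeat.modules:
  - module: elasticsearch
    xpack.enabled: true
    scope: cluster
    period: 10s
    hosts: ["https://es01:9200"]
    username: '${ELASTICSEARCH_USERNAME:}'
    password: '${ELASTICSEARCH_PASSWORD:}'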