I am running Metricbeat 7.13.2 (I first tried 7.7.1) on Ubuntu 18.04, which is supposed to send monitoring data to an Elastic Cloud instance (7.10.0) using internal collection, but I can't figure out where things are going wrong. Here is the Metricbeat config:
path.home: /opt/metricbeat
path.config: /opt/metricbeat
path.data: /var/lib/metricbeat
path.logs: /var/log/metricbeat

metricbeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template:
  name: "metricbeat-7.13.2"
  pattern: "metricbeat-7.13.2-*"
  settings:
    index.number_of_shards: 1
    index.codec: best_compression

cloud:
  id: <REDACTED>
  auth: <REDACTED>

logging.level: info
logging.metrics.enabled: true
logging.metrics.period: "30s"
logging.to_files: true
logging.files:
  name: metricbeat.log
  keepfiles: 10
  permissions: 0600

monitoring:
  enabled: true
  cloud.id: <REDACTED>
  cloud.auth: <REDACTED>
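For what it's worth, my understanding from the Beats docs is that internal collection can also be pointed at the monitoring cluster via monitoring.elasticsearch instead of monitoring.cloud.id/cloud.auth. I haven't tried that variant yet; the host and credentials below are only placeholders, not my real values:

monitoring:
  enabled: true
  elasticsearch:
    # Placeholder endpoint and credentials for the Elastic Cloud deployment
    hosts: ["https://my-deployment.es.europe-west1.gcp.cloud.es.io:9243"]
    username: "beats_system"
    password: "<REDACTED>"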
Metricbeat is running without any issues, and I can see the INFO log messages with the stats in /var/log/metricbeat/metricbeat.log:
2021-06-30T09:26:20.711Z INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(monitoring(<REDACTED>)) established
2021-06-30T09:26:40.560Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring":..........
2021-06-30T09:27:10.560Z INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring":..........
In the last 24 hours it has only sent data sporadically, and then it stops without any indication in the Metricbeat logs as to why. Restarting Metricbeat does not help at all. Here is a sample screenshot of how this looks.
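If it would help with diagnosing this, I can bump the logging verbosity. I assume something along these lines is the way to do it (the selector names are my guess, based on the [publisher_pipeline_output] and [monitoring] prefixes in the log lines above):

logging.level: debug
# Guessed selectors; "*" would enable everything if these names are wrong
logging.selectors: ["monitoring", "publisher"]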
NOTE: The weird thing is that I also have a Filebeat instance running on the same host with the exact same version and monitoring setup as Metricbeat, and that one is working without any issues.
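For comparison, the monitoring section in filebeat.yml mirrors the one above, as far as I can tell (with the same values redacted):

monitoring:
  enabled: true
  cloud.id: <REDACTED>
  cloud.auth: <REDACTED>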
Any help would be highly appreciated.