Metricbeat stopping

I ran two identical installations of Metricbeat. The beat works on one (I see data in Kibana), but the other gives the error below. To my naked eye everything looks identical! Can someone please help?

2018-09-02T18:05:49.337-0400 DEBUG [modules] beater/metricbeat.go:81 Register [ModuleFactory:[docker, mongodb, mysql, postgresql, system, uwsgi], MetricSetFactory:[aerospike/namespace, apache/status, ceph/cluster_disk, ceph/cluster_health, ceph/cluster_status, ceph/monitor_health, ceph/osd_df, ceph/osd_tree, ceph/pool_disk, couchbase/bucket, couchbase/cluster, couchbase/node, docker/container, docker/cpu, docker/diskio, docker/healthcheck, docker/image, docker/info, docker/memory, docker/network, dropwizard/collector, elasticsearch/node, elasticsearch/node_stats, etcd/leader, etcd/self, etcd/store, golang/expvar, golang/heap, graphite/server, haproxy/info, haproxy/stat, http/json, http/server, jolokia/jmx, kafka/consumergroup, kafka/partition, kibana/status, kubernetes/container, kubernetes/event, kubernetes/node, kubernetes/pod, kubernetes/state_container, kubernetes/state_deployment, kubernetes/state_node, kubernetes/state_pod, kubernetes/state_replicaset, kubernetes/state_statefulset, kubernetes/system, kubernetes/volume, kvm/dommemstat, logstash/node, logstash/node_stats, memcached/stats, mongodb/collstats, mongodb/dbstats, mongodb/status, munin/node, mysql/status, nginx/stubstatus, php_fpm/pool, postgresql/activity, postgresql/bgwriter, postgresql/database, prometheus/collector, prometheus/stats, rabbitmq/connection, rabbitmq/node, rabbitmq/queue, redis/info, redis/keyspace, system/core, system/cpu, system/diskio, system/filesystem, system/fsstat, system/load, system/memory, system/network, system/process, system/process_summary, system/raid, system/socket, system/uptime, uwsgi/status, vsphere/datastore, vsphere/host, vsphere/virtualmachine, zookeeper/mntr]]
2018-09-02T18:05:49.337-0400 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-09-02T18:05:49.337-0400 INFO instance/beat.go:315 metricbeat start running.
2018-09-02T18:05:49.337-0400 DEBUG [cfgfile] cfgfile/reload.go:90 Checking module configs from: /etc/metricbeat/modules.d/*.yml
2018-09-02T18:05:49.337-0400 DEBUG [cfgfile] cfgfile/cfgfile.go:143 Load config from file: /etc/metricbeat/modules.d/kafka.yml
2018-09-02T18:05:49.337-0400 ERROR cfgfile/reload.go:232 Error loading config: invalid config: yaml: line 1: did not find expected '-' indicator
2018-09-02T18:05:49.337-0400 DEBUG [cfgfile] cfgfile/cfgfile.go:143 Load config from file: /etc/metricbeat/modules.d/system.yml
2018-09-02T18:05:49.339-0400 INFO [monitoring] log/log.go:132 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":40,"time":{"ms":41}},"total":{"ticks":50,"time":{"ms":57},"value":50},"user":{"ticks":10,"time":{"ms":16}}},"info":{"ephemeral_id":"b0775674-0754-4b18-9524-1c23d5bbed09","uptime":{"ms":50}},"memstats":{"gc_next":4194304,"memory_alloc":2621784,"memory_total":4234112,"rss":20299776}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":0,"events":{"active":0}}},"system":{"cpu":{"cores":6},"load":{"1":0.03,"15":0.05,"5":0.02,"norm":{"1":0.005,"15":0.0083,"5":0.0033}}}}}}
2018-09-02T18:05:49.339-0400 INFO [monitoring] log/log.go:133 Uptime: 51.54977ms
2018-09-02T18:05:49.339-0400 INFO [monitoring] log/log.go:110 Stopping metrics logging.
2018-09-02T18:05:49.339-0400 INFO instance/beat.go:321 metricbeat stopped.
2018-09-02T18:05:49.342-0400 ERROR instance/beat.go:691 Exiting: 1 error: invalid config: yaml: line 1: did not find expected '-' indicator

This usually means there is an error in one of your config files; this one looks like it is in a file inside the modules.d folder. Did you change any file there? I would review the enabled modules: it seems at least one of them doesn't start with the expected '-'.
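The failure mode can be reproduced outside Metricbeat. The sketch below uses PyYAML rather than the Go YAML parser Metricbeat uses, so the exact error wording differs from the log above, but the cause is the same: a list item whose keys lose their two-space indentation falls out of the sequence entry and the parser no longer finds the structure it expects. The module contents here are illustrative, not from the OP's file.

```python
# Minimal sketch of the class of YAML error above, using PyYAML.
# (Metricbeat uses a Go YAML parser with slightly different wording.)
import yaml

# Broken: the keys after "- module: kafka" are not indented, so the
# parser closes the list item's mapping and then fails to continue.
broken = """\
- module: kafka
metricsets: ["consumergroup", "partition"]
"""

# Fixed: every key belonging to the list item is indented two spaces.
fixed = """\
- module: kafka
  metricsets: ["consumergroup", "partition"]
  period: 10s
  hosts: ["localhost:9092"]
"""

try:
    yaml.safe_load(broken)
except yaml.YAMLError as err:
    print("broken config rejected:", type(err).__name__)

modules = yaml.safe_load(fixed)
print(modules[0]["module"])  # the fixed config parses into a list of module maps
```

Running a modules.d file through any YAML parser like this is a quick way to find which file (and roughly which line) is malformed before restarting the beat.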

I figured it out: it had to do with kafka.yml. The lines after the first one were not indented by two spaces. Fixed it and it started working! Thanks a lot.
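For anyone hitting the same error, a working modules.d file is shaped like this: the '-' starts the list item, and every key belonging to that item is indented two spaces under it. (The metricsets, period, and hosts values below are illustrative defaults, not the OP's actual file.)

```yaml
- module: kafka
  metricsets: ["consumergroup", "partition"]
  period: 10s
  hosts: ["localhost:9092"]
```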


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.