Hi All,
I am using Filebeat for the first time. How can I make sure that the actual logs are shipped from Filebeat to Logstash? How can I monitor that?
I can see Filebeat is running without any issues.
I have enabled the Logstash output in the config file:
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.85.7.207:5044"]
Is there any way I can see the actual logs being shipped to Logstash, line by line?
You can run Filebeat with the -d "publish" flag, which will log every event that is sent. If you're running it from the terminal, you can also add the -e flag so it prints to stderr instead of using the log file.
It will also print any output errors, such as not being able to reach Logstash.
Here is the output:
[root@appin1a filebeat]# filebeat -e -d "publish"
2018-04-12T13:21:13.072Z INFO instance/beat.go:468 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2018-04-12T13:21:13.072Z INFO instance/beat.go:475 Beat UUID: 20100434-5a20-4329-83be-a3b2f8358c4a
2018-04-12T13:21:13.072Z INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.3
2018-04-12T13:21:13.072Z INFO pipeline/module.go:76 Beat name: appin1a
2018-04-12T13:21:13.073Z INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-04-12T13:21:13.073Z INFO instance/beat.go:301 filebeat start running.
2018-04-12T13:21:13.073Z INFO registrar/registrar.go:108 Loading registrar data from /var/lib/filebeat/registry
2018-04-12T13:21:13.073Z INFO registrar/registrar.go:119 States Loaded from registrar: 0
2018-04-12T13:21:13.073Z WARN beater/filebeat.go:261 Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2018-04-12T13:21:13.073Z INFO crawler/crawler.go:48 Loading Prospectors: 1
2018-04-12T13:21:13.073Z INFO crawler/crawler.go:82 Loading and starting Prospectors completed. Enabled prospectors: 0
2018-04-12T13:21:13.073Z INFO cfgfile/reload.go:127 Config reloader started
2018-04-12T13:21:13.073Z INFO cfgfile/reload.go:219 Loading of config files completed.
2018-04-12T13:21:43.075Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":0,"time":9},"total":{"ticks":0,"time":15,"value":0},"user":{"ticks":0,"time":6}},"info":{"ephemeral_id":"c6d21d36-0063-458c-b873-1641974a69b6","uptime":{"ms":30008}},"memstats":{"gc_next":4473924,"memory_alloc":2804104,"memory_total":2804104,"rss":14155776}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":8},"load":{"1":3.08,"15":3,"5":3.04,"norm":{"1":0.385,"15":0.375,"5":0.38}}}}}}
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
logging.selectors: ["publish"]
Just to add to the "how to": you can use the piece of config shown above (it can be found at the bottom of /etc/filebeat/filebeat.yml). It is the same as starting Filebeat with -d (debug) and passing a selector (publish).
Then simply restart/reload, and you can tail your logfile to monitor your outputs.
As far as your logs go, Filebeat seems to be working, but it found nothing to send in the past 30 seconds. Have you set up inputs, paths, etc.? Did those logs have new entries at the time you were monitoring?
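For reference, a minimal input (prospector) section in /etc/filebeat/filebeat.yml might look something like this for Filebeat 6.x; the path is just a placeholder for wherever your application writes its logs:

```yaml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/myapp/*.log
```

Once a prospector like this matches files that are actually growing, the harvester counters in the 30s metrics lines above (open_files, running) should become non-zero.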
Still, I can't see anything in the output except the same set of strings. I am trying to see the exact logs sent from a component named "cmdc".
A sample text:
2018-04-12T13:36:43.074Z INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":30},"total":{"ticks":80,"time":82,"value":80},"user":{"ticks":50,"time":52}},"info":{"ephemeral_id":"c6d21d36-0063-458c-b873-1641974a69b6","uptime":{"ms":930007}},"memstats":{"gc_next":4194304,"memory_alloc":1827904,"memory_total":8080760,"rss":12288}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.02,"15":3,"5":3.04,"norm":{"1":0.3775,"15":0.375,"5":0.38}}}}}}
Copy and paste this one
Make sure you respect the YAML format (2-space indentation for a child property).
I suspect enabled: false means "don't use this one" and exists for testing/debugging and convenience in some cases.
As I mentioned, the problem was not indentation (that was just a suggestion, a "just in case") but the enabled: false.
If you look at the comment, it does say to switch it to true (or just remove it; it defaults to true), otherwise this prospector will be ignored (no harvesting will be done).
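To make the fix concrete, the relevant section of /etc/filebeat/filebeat.yml should end up looking something like this (the path below is a placeholder; use whatever path your "cmdc" component actually logs to):

```yaml
filebeat.prospectors:
- type: log
  # Must be true (or omitted entirely, since it defaults to true);
  # with enabled: false the prospector is ignored and nothing is harvested.
  enabled: true
  paths:
    - /path/to/cmdc/logs/*.log
```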