I'm trying to set up my first Filebeat forwarder after having used logstash-forwarder for quite a while.
When I try to start up filebeat I'm getting this error:
```
[root@web1:/etc/filebeat] #systemctl status filebeat.service
● filebeat.service - LSB: Sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/etc/rc.d/init.d/filebeat)
   Active: failed (Result: exit-code) since Sun 2016-01-31 20:58:29 EST; 6s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 5579 ExecStart=/etc/rc.d/init.d/filebeat start (code=exited, status=1/FAILURE)

Jan 31 20:58:29 web1 systemd[1]: Starting LSB: Sends log files to Logstash or directly to Elasticsearch....
Jan 31 20:58:29 web1 filebeat[5579]: Starting filebeat: Loading config file error: YAML config parsing failed on /etc/filebeat/filebeat.yml: yaml: line 228: did not find expected key. Exiting.
Jan 31 20:58:29 web1 systemd[1]: filebeat.service: control process exited, code=exited status=1
Jan 31 20:58:29 web1 systemd[1]: Failed to start LSB: Sends log files to Logstash or directly to Elasticsearch..
Jan 31 20:58:29 web1 systemd[1]: Unit filebeat.service entered failed state.
Jan 31 20:58:29 web1 systemd[1]: filebeat.service failed.
```
This only happens when I enable the TLS settings in the config. Otherwise it starts fine, but I don't want to ship logs without TLS.
Here's the line that the error is complaining about:
```
logstash:
# The Logstash hosts
hosts: ["logs.example.com:2541"]
```
But I think that the problem is up in the TLS section, because if I comment it out I can start it up:
```
# tls configuration. By default is off.
tls:
# List of root certificates for HTTPS server verifications
certificate_authorities: ["/etc/pki/CA/certs/ca.crt"]
```
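Independent of Filebeat, parse failures like this can be reproduced and pinpointed by running the file through any YAML parser. A minimal sketch in Python, assuming PyYAML is installed; it uses inline strings rather than the real /etc/filebeat/filebeat.yml just to illustrate the failure mode (the parser's error message wording will differ from Filebeat's):

```python
import yaml

# A sibling key indented one space deeper than its neighbor ("tls" at
# five spaces, "hosts" at four) breaks the block mapping and fails to parse.
bad = """
output:
  logstash:
    hosts: ["logs.example.com:2541"]
     tls:
       certificate_authorities: ["/etc/pki/CA/certs/ca.crt"]
"""

# Consistent indentation parses cleanly.
good = """
output:
  logstash:
    hosts: ["logs.example.com:2541"]
    tls:
      certificate_authorities: ["/etc/pki/CA/certs/ca.crt"]
"""

def parses(doc):
    """Return True if doc is valid YAML, False if the parser rejects it."""
    try:
        yaml.safe_load(doc)
        return True
    except yaml.YAMLError:
        return False
```

Pointing `yaml.safe_load(open("/etc/filebeat/filebeat.yml"))` at the real file reports the offending line number directly.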
This is indeed a problem with how your YAML file is formatted, most likely related to indentation. Please edit your post and format the configuration snippets as code, by enclosing them in three backticks (```), so that leading whitespace isn't deleted and the snippets stay readable.
The tls section must be configured under the logstash output; right now there is no tls config in that output section. It looks like you configured tls under the elasticsearch output by accident.
Do you really not have any indentation in your configuration file? If so, make sure you address that first; YAML is sensitive to indentation. If your lines are indented, please make another attempt at formatting the file properly here. We really cannot help you otherwise.
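For reference, here is a sketch of how the nesting should line up, using the host and CA path from the snippets above; surrounding options in the real file may differ:

```
output:
  logstash:
    # The Logstash hosts
    hosts: ["logs.example.com:2541"]

    # tls configuration. By default is off.
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/CA/certs/ca.crt"]
```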
```
#systemctl status filebeat.service
● filebeat.service - LSB: Sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/etc/rc.d/init.d/filebeat)
   Active: failed (Result: exit-code) since Mon 2016-02-01 12:47:54 EST; 1min 35s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 17067 ExecStop=/etc/rc.d/init.d/filebeat stop (code=exited, status=0/SUCCESS)
  Process: 19755 ExecStart=/etc/rc.d/init.d/filebeat start (code=exited, status=1/FAILURE)

Feb 01 12:47:54 web1 systemd[1]: Starting LSB: Sends log files to Logstash or directly to Elasticsearch....
Feb 01 12:47:54 web1 filebeat[19755]: Starting filebeat: Loading config file error: YAML config parsing failed on /etc/filebeat/filebeat.yml: yaml.... Exiting.
Feb 01 12:47:54 web1 systemd[1]: filebeat.service: control process exited, code=exited status=1
Feb 01 12:47:54 web1 systemd[1]: Failed to start LSB: Sends log files to Logstash or directly to Elasticsearch..
Feb 01 12:47:54 web1 systemd[1]: Unit filebeat.service entered failed state.
Feb 01 12:47:54 web1 systemd[1]: filebeat.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
```
If you could help me through this part I'd appreciate it!
Unfortunately client authentication is not yet supported by Logstash, so certificate and certificate_key are not really required. It can't hurt to have them configured, though, so the config is prepared for when client authentication becomes available.
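If you do want them in place ahead of time, they would sit next to certificate_authorities in the tls section. The certificate and key paths below are illustrative, not from your setup:

```
tls:
  # CA used to verify the Logstash server's certificate
  certificate_authorities: ["/etc/pki/CA/certs/ca.crt"]

  # Client certificate and key; unused until Logstash supports client
  # authentication (illustrative paths)
  certificate: "/etc/pki/tls/certs/filebeat.crt"
  certificate_key: "/etc/pki/tls/private/filebeat.key"
```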
Many thanks!! That worked!! Proper indentation did the trick.
```
#systemctl status filebeat
● filebeat.service - LSB: Sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/etc/rc.d/init.d/filebeat)
   Active: active (running) since Wed 2016-02-03 20:01:06 EST; 10s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 17067 ExecStop=/etc/rc.d/init.d/filebeat stop (code=exited, status=0/SUCCESS)
  Process: 31803 ExecStart=/etc/rc.d/init.d/filebeat start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/filebeat.service
           ├─32008 filebeat-god -r / -n -p /var/run/filebeat.pid -- /usr/bin/filebeat -c /etc/filebeat/fi...
           └─32009 /usr/bin/filebeat -c /etc/filebeat/filebeat.yml

Feb 03 20:00:52 web1 systemd[1]: Starting LSB: Sends log files to Logstash or directly to Elasticsearch....
Feb 03 20:01:06 web1 filebeat[31803]: Starting filebeat: 2016/02/04 01:01:06.006783 transport.go:125:...peer
Feb 03 20:01:06 web1 filebeat[31803]: [ OK ]
Feb 03 20:01:06 web1 systemd[1]: Started LSB: Sends log files to Logstash or directly to Elasticsearch..
Hint: Some lines were ellipsized, use -l to show in full.
```
OK, so I enabled debug logging for Filebeat, and this is what I'm getting in the log.
I see a bunch of log entries like this:
```
2016-02-04T23:33:10-05:00 DBG Update existing file for harvesting: /var/log/proftpd/proftpd.sql.log
2016-02-04T23:33:10-05:00 DBG Not harvesting, file didn't change: /var/log/proftpd/proftpd.sql.log
2016-02-04T23:33:10-05:00 DBG Check file for harvesting: /var/log/tuned/tuned.log
2016-02-04T23:33:10-05:00 DBG Update existing file for harvesting: /var/log/tuned/tuned.log
2016-02-04T23:33:10-05:00 DBG Not harvesting, file didn't change: /var/log/tuned/tuned.log
2016-02-04T23:33:10-05:00 DBG Check file for harvesting: /var/log/zabbix/zabbix_agentd.log
2016-02-04T23:33:10-05:00 DBG Update existing file for harvesting: /var/log/zabbix/zabbix_agentd.log
2016-02-04T23:33:10-05:00 DBG Not harvesting, file didn't change: /var/log/zabbix/zabbix_agentd.log
```
And then I see the following:
```
2016-02-04T23:33:12-05:00 DBG Try to publish %!s(int=200) events to logstash with window size %!s(int=10)
2016-02-04T23:33:12-05:00 DBG %!s(int=0) events out of %!s(int=200) events sent to logstash. Continue sending ...
2016-02-04T23:33:12-05:00 INFO Error publishing events (retrying): EOF
2016-02-04T23:33:12-05:00 DBG Try to publish %!s(int=200) events to logstash with window size %!s(int=10)
2016-02-04T23:33:12-05:00 DBG %!s(int=0) events out of %!s(int=200) events sent to logstash. Continue sending ...
2016-02-04T23:33:12-05:00 INFO Error publishing events (retrying): EOF
2016-02-04T23:33:12-05:00 INFO send fail
2016-02-04T23:33:12-05:00 INFO backoff retry: 1s
2016-02-04T23:33:13-05:00 DBG Try to publish %!s(int=200) events to logstash with window size %!s(int=10)
2016-02-04T23:33:13-05:00 DBG %!s(int=0) events out of %!s(int=200) events sent to logstash. Continue sending ...
2016-02-04T23:33:13-05:00 INFO Error publishing events (retrying): EOF
2016-02-04T23:33:13-05:00 INFO send fail
```
I'm thinking these lines are probably important:
```
2016-02-04T23:37:29-05:00 DBG Try to publish %!s(int=200) events to logstash with window size %!s(int=10)
2016-02-04T23:37:29-05:00 DBG %!s(int=0) events out of %!s(int=200) events sent to logstash. Continue sending ...
2016-02-04T23:37:29-05:00 INFO Error publishing events (retrying): EOF
```
Now that we know all this, how can I fix it? Is there anything else I could look for in the logs to explain why no events are getting through to Logstash?
Yes, those are indeed the interesting lines from the log. Unfortunately I don't know what they mean. What version of the beats plugin are you running on the Logstash side? And what version of Filebeat?
I'm running Filebeat 1.0.1 on the web server:

```
filebeat version 1.0.1 (amd64)
```
And haven't started running filebeats on the Logstash server. I wanted to try to get it running on one server first before I started running it anywhere else. I figured that it would be like a drop-in replacement for lumberjack/logstash-forwarder.
I'm not sure what those lines mean either. Maybe you could ask around?
> And haven't started running filebeats on the Logstash server. I wanted to try to get it running on one server first before I started running it anywhere else. I figured that it would be like a drop-in replacement for lumberjack/logstash-forwarder.
Wait, what? You're still running the lumberjack input plugin in Logstash rather than the beats plugin? If so that's the problem. Filebeat will not work with the lumberjack input. I believe the protocols are very similar but they're not compatible.
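In other words, the Logstash config needs a beats input (from the logstash-input-beats plugin) instead of the lumberjack input. A sketch, reusing the port from this thread; the certificate and key paths are illustrative, and the server certificate must be verifiable by the CA configured on the Filebeat side:

```
input {
  # Requires the logstash-input-beats plugin
  beats {
    port => 2541
    ssl => true
    # Server certificate and key (illustrative paths)
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}
```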
> Maybe you could ask around?
Hopefully one of the Filebeat folks are reading this thread.
Oh uhhhh... yeah! I didn't realize they required their own plugin. I read that they were replacing logstash-forwarder, so I thought they used the same input. Now I get you! I'll add the beats input and see where that takes me. Makes total sense at this point! Thanks!
Thanks for the clue-in! Adding the beats plugin on the Logstash side worked, of course. Log messages are flowing in now; as an example, I'm seeing this in the Filebeat log file at this point.