After restarting the Filebeat service, seeing the following exception: Error starting the server: listen tcp 127.0.0.1:514: bind: address already in use

Hi,

Our Goal -

The goal of the tutorial is to set up Logstash to gather syslog messages from multiple servers, and to set up Kibana to visualize the gathered logs.


When we did the setup for the first time, this whole flow worked and we were able to visualize the logs in Kibana dashboard.

But after we restarted the AWS instance and restarted the Filebeat service, we are seeing the following error stack in the log. Could someone help us resolve this issue?

Filebeat log trace:

2018-07-26T18:42:37.978Z	INFO	crawler/crawler.go:82	Loading and starting Inputs completed. Enabled inputs: 2
2018-07-26T18:42:37.978Z	INFO	[syslog]	syslog/input.go:156	Starting Syslog input	{"protocol": "tcp"}
2018-07-26T18:42:37.979Z	ERROR	[syslog]	syslog/input.go:159	Error starting the servererrorlisten tcp 127.0.0.1:514: bind: address already in use
2018-07-26T18:42:37.979Z	INFO	[syslog]	syslog/input.go:156	Starting Syslog input	{"protocol": "udp"}
2018-07-26T18:42:37.979Z	ERROR	[syslog]	syslog/input.go:159	Error starting the servererrorlisten udp 127.0.0.1:514: bind: address already in use
2018-07-26T18:42:37.979Z	INFO	cfgfile/reload.go:122	Config reloader started
2018-07-26T18:42:37.979Z	INFO	cfgfile/reload.go:214	Loading of config files completed.
2018-07-26T18:43:07.979Z	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":10}},"total":{"ticks":10,"time":{"ms":17},"value":10},"user":{"ticks":0,"time":{"ms":7}}},"info":{"ephemeral_id":"5e295a04-b397-4323-bb80-615037116c0a","uptime":{"ms":30011}},"memstats":{"gc_next":4473924,"memory_alloc":3227752,"memory_total":3227752,"rss":12996608}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0},"reloads":1},"output":{"type":"elasticsearch"},"pipeline":{"clients":2,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":2},"load":{"1":0.22,"15":0.39,"5":0.42,"norm":{"1":0.11,"15":0.195,"5":0.21}}}}}}
2018-07-26T18:43:37.979Z	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":1}},"total":{"ticks":10,"time":{"ms":3},"value":10},"user":{"ticks":0,"time":{"ms":2}}},"info":{"ephemeral_id":"5e295a04-b397-4323-bb80-615037116c0a","uptime":{"ms":60010}},"memstats":{"gc_next":4473924,"memory_alloc":3418632,"memory_total":3418632}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":2,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.58,"15":0.41,"5":0.48,"norm":{"1":0.29,"15":0.205,"5":0.24}}}}}}
2018-07-26T18:44:07.979Z	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":1}},"total":{"ticks":20,"time":{"ms":2},"value":20},"user":{"ticks":10,"time":{"ms":1}}},"info":{"ephemeral_id":"5e295a04-b397-4323-bb80-615037116c0a","uptime":{"ms":90010}},"memstats":{"gc_next":4473924,"memory_alloc":3596264,"memory_total":3596264,"rss":229376}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":2,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.35,"15":0.4,"5":0.43,"norm":{"1":0.175,"15":0.2,"5":0.215}}}}}}
2018-07-26T18:44:37.979Z	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":1}},"total":{"ticks":20,"time":{"ms":2},"value":20},"user":{"ticks":10,"time":{"ms":1}}},"info":{"ephemeral_id":"5e295a04-b397-4323-bb80-615037116c0a","uptime":{"ms":120010}},"memstats":{"gc_next":4473924,"memory_alloc":3785272,"memory_total":3785272,"rss":217088}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":2,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.21,"15":0.39,"5":0.39,"norm":{"1":0.105,"15":0.195,"5":0.195}}}}}}
2018-07-26T18:45:07.979Z	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":3}},"total":{"ticks":20,"time":{"ms":4},"value":20},"user":{"ticks":10,"time":{"ms":1}}},"info":{"ephemeral_id":"5e295a04-b397-4323-bb80-615037116c0a","uptime":{"ms":150010}},"memstats":{"gc_next":4194304,"memory_alloc":1402336,"memory_total":3969800,"rss":569344}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":2,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.28,"15":0.39,"5":0.39,"norm":{"1":0.14,"15":0.195,"5":0.195}}}}}}
2018-07-26T18:45:37.979Z	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":10,"time":{"ms":2}},"total":{"ticks":20,"time":{"ms":3},"value":20},"user":{"ticks":10,"time":{"ms":1}}},"info":{"ephemeral_id":"5e295a04-b397-4323-bb80-615037116c0a","uptime":{"ms":180010}},"memstats":{"gc_next":4194304,"memory_alloc":1585528,"memory_total":4152992}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":2,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":0.17,"15":0.37,"5":0.35,"norm":{"1":0.085,"15":0.185,"5":0.175}}}}}}

OS - CentOS 7
Filebeat version 6.3
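
For reference, the syslog input section of our filebeat.yml follows the tutorial and looks roughly like the sketch below (reproduced from memory, so treat the exact host values as an assumption rather than a verbatim copy):

filebeat.inputs:
# Listen for syslog messages over TCP and UDP on the loopback interface
- type: syslog
  protocol.tcp:
    host: "127.0.0.1:514"
- type: syslog
  protocol.udp:
    host: "127.0.0.1:514"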

Thanks,

Can you stop the Filebeat service and run sudo lsof -i :514? That should tell you which process is listening on port 514. Depending on what that process is, you may want to stop or kill it and then try to restart the Filebeat service.
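
Something along these lines should work (assuming CentOS 7 with systemd; ss is an alternative if lsof is not installed):

# Stop Filebeat first so its own failed bind attempts don't show up in the output
sudo systemctl stop filebeat

# Show which process currently holds port 514 (TCP or UDP)
sudo lsof -i :514

# Alternative if lsof is not available
sudo ss -tulpn | grep ':514'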

Thanks @shaunak !

Here is the output of sudo lsof -i :514:

COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
rsyslogd 14289 root    3u  IPv4  51284      0t0  UDP *:syslog 
rsyslogd 14289 root    4u  IPv6  51285      0t0  UDP *:syslog 
rsyslogd 14289 root    5u  IPv4  51288      0t0  TCP *:shell (LISTEN)
rsyslogd 14289 root    6u  IPv6  51289      0t0  TCP *:shell (LISTEN)

So it looks like you already have rsyslogd running on the host and listening on port 514 for syslog messages. This is preventing Filebeat from starting its own syslog input on the same port.

My guess is that the first time around, the rsyslogd service had been manually stopped before Filebeat was started, so things worked smoothly. But I suspect that when the AWS instance was restarted, the rsyslogd service came back up as well, so Filebeat started complaining that port 514 was already taken.

So I think what you'll need to do (without knowing all the details of your setup) is make sure the rsyslogd service is stopped and does not automatically start upon system restart.
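
On CentOS 7 that would look roughly like this (the systemd unit is usually named rsyslog, but check with systemctl list-units if yours differs):

# Stop rsyslog now and keep it from starting at boot
sudo systemctl stop rsyslog
sudo systemctl disable rsyslog

# Then restart Filebeat so its syslog input can bind port 514
sudo systemctl restart filebeat

Alternatively, if you need rsyslogd to keep running, you could point the Filebeat syslog input at a different, unprivileged port and have your servers forward syslog traffic there instead, for example (5514 is just an illustrative choice):

- type: syslog
  protocol.udp:
    host: "127.0.0.1:5514"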
