Going through the Logstash tutorial - "connection refused" error


(Josh A) #1

Hi all -- getting stuck during the Logstash tutorial on the Elastic site: https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html

I'm trying to use Filebeat to load the specified sample logfile into Elasticsearch.

OS is Ubuntu 16.04 running in AWS. This is all running on the same host.

Here's my filebeat.yml in /etc/filebeat:
filebeat.prospectors:
- input_type: log
  paths:
    - /tmp/logstash-tutorial.log
output.logstash:
  hosts: ["localhost:5043"]

Here's my .conf file (/usr/share/logstash/first-pipeline.conf)
input {
    beats {
        port => "5043"
    }
}
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
        source => "clientip"
    }
}
output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}
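For anyone wondering what that grok filter actually produces: here is a rough Python sketch (my own regex approximation, not the real grok implementation) of the fields that %{COMBINEDAPACHELOG} extracts from a line like the ones in logstash-tutorial.log:

```python
import re

# Approximation of grok's %{COMBINEDAPACHELOG} pattern; the named
# groups mirror the field names grok adds to each event.
COMBINED = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# First line of the tutorial's sample logfile (agent shortened here).
line = ('83.149.9.216 - - [04/Jan/2015:05:13:42 +0000] '
        '"GET /presentations/logstash-monitorama-2013/images/kibana-search.png HTTP/1.1" '
        '200 203023 "http://semicomplete.com/presentations/logstash-monitorama-2013/" '
        '"Mozilla/5.0"')

fields = COMBINED.match(line).groupdict()
print(fields["clientip"], fields["verb"], fields["response"])
```

The geoip filter then looks up the extracted clientip field, which is why the grok stage has to run first.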

Here are the uncommented lines from logstash.yml:
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d

I'm using the following command to launch Filebeat:
sudo /usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"

This is the error that I get repeatedly:
2016/12/20 22:07:26.189472 output.go:109: DBG output worker: publish 100 events
2016/12/20 22:07:26.189766 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5043: getsockopt: connection refused
2016/12/20 22:07:27.190180 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5043: getsockopt: connection refused
2016/12/20 22:07:29.190543 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5043: getsockopt: connection refused
2016/12/20 22:07:33.190884 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5043: getsockopt: connection refused
2016/12/20 22:07:41.191232 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5043: getsockopt: connection refused
2016/12/20 22:07:51.185064 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.publisher.published_events=100 filebeat.harvester.running=1 filebeat.harvester.open_files=1 filebeat.harvester.started=1
2016/12/20 22:07:57.191579 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 127.0.0.1:5043: getsockopt: connection refused

Can anyone see what I'm doing wrong?

I've seen some articles suggesting this is an encryption mismatch between Logstash and Filebeat, but I don't see anywhere in my configs where I would be enabling encryption.


(Josh A) #2

I saw an article that indicated that this may be a permissions issue.

I went through every directory referenced in these two pages:

and changed them all to be owned by logstash:logstash. This didn't resolve the issue.

Also, I noticed that these two directories (mentioned on the filebeat page) do not exist:

  • data -- The location for persistent data files. -- /var/lib/filebeat
  • logs -- The location for the logs created by Filebeat. -- /var/log/filebeat

(Josh A) #3

I also tried changing the port (5045 instead of 5043), no luck.


(Steffen Siering) #4

Is logstash running?

Have you tried to ping and telnet your logstash host?
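If telnet isn't installed, a quick stdlib sketch like this (just an illustration, any port-check tool works) tells you whether anything is actually listening on the Beats port:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # "connection refused" lands here: nothing is listening.
        return False

print(port_open("localhost", 5043))
```

Filebeat's repeated "dial tcp 127.0.0.1:5043 ... connection refused" means exactly this check failing: no process is bound to port 5043.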


(Josh A) #5

Thanks for the response, Steffen.

Logstash is running.
~$ sudo service logstash status returns...

● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2016-12-21 17:28:52 UTC; 1min 6s ago
 Main PID: 22166 (java)
    Tasks: 15
   Memory: 370.1M
      CPU: 15.960s
   CGroup: /system.slice/logstash.service
           └─22166 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+Us

Dec 21 17:28:52 ip-172-31-10-197 systemd[1]: Started logstash.

I'm running Logstash and Elasticsearch on the same server, so, yes, I can ping localhost.

I also checked ufw to see if it was blocking any ports, but it's disabled.


(Josh A) #6

I figured this out. It's an error in the tutorial from Elastic, which tells us to put the .conf file in the wrong path. I moved it to /etc/logstash/conf.d, and it worked fine.

This page (https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html) says:
"To get started, copy and paste the skeleton configuration pipeline into a file named first-pipeline.conf in your home Logstash directory."

It should read:
"To get started, copy and paste the skeleton configuration pipeline into a file named first-pipeline.conf in the conf.d directory under your home Logstash directory."

Anyone know how to get this to the website team to make the change?


(Mark Walkom) #7

It's not incorrect; the following instruction states:

If the configuration file passes the configuration test, start Logstash with the following command:

bin/logstash -f first-pipeline.conf --config.reload.automatic

And because you are calling LS manually and pointing at the config, it will work. If you are using your OS's service management, then yes, it won't work. But that's not the point of the steps :slight_smile:


(Josh A) #8

Thanks for the feedback, Mark - that's great.

I guess I'm still unclear on the path structure for LS. I agree that the above instruction should load Logstash using first-pipeline.conf from my LS home directory (/usr/share/logstash), but I wasn't able to get past this step until I copied first-pipeline.conf to /etc/logstash/conf.d. As soon as I copied the file over, the pipeline worked fine.

Might be just me, though!

Thanks again.


(system) #9

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.