Logstash not binding to port 5044

Hey everyone,

I've been fiddling around getting an ELK stack running over the last couple of days, but I'm having trouble getting other machines on the local network to ship Filebeat logs to Logstash because port 5044 doesn't appear to be listening.

I've managed to get Kibana dashboards set up and visualizing the local system logs, but I can't establish an SSL connection over 5044 because Logstash doesn't seem to bind to the port.
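For what it's worth, a quick way to probe the listener from one of the clients is openssl (192.168.10.21 being the Logstash box, as in the netstat output further down):

openssl s_client -connect 192.168.10.21:5044 -CAfile /etc/pki/tls/certs/logstash-forwarder.crt

If nothing is bound to the port this fails straight away with connection refused; if the listener is up, openssl prints the certificate chain.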

Config files are below:

**02-beats-input.conf**
input {
  beats {
    type => beats
    host => "localhost"
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
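One thing I'm unsure about is whether host => "localhost" restricts the listener to the loopback interface, which on its own would keep the other machines out. A variant binding all interfaces (which I believe is the plugin's default) would just change the host line:

host => "0.0.0.0"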

**10-syslog-filter.conf**
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
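For reference, that grok pattern is meant to match a classic syslog line such as the (made-up) example below, splitting it into syslog_timestamp, syslog_hostname, syslog_program, syslog_pid and syslog_message:

Jan 25 03:37:57 webserver sshd[1234]: Failed password for invalid user admin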

**30-elasticsearch-output.conf**
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
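Before chasing anything else, the whole pipeline config can be syntax-checked in place (this is Logstash 2.x, judging by the plugin versions in the log further down; the path assumes the usual conf.d layout):

/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/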

**filebeat.yml**
############################# Filebeat ######################################
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth*
        - /var/log/secure
        - /var/log/messages

      input_type: log
      document_type: syslog

############################# Output ##########################################

output:

  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["127.0.0.1:5044"]
    bulk_max_size: 1024

    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
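On the client side, the Filebeat config can be sanity-checked before restarting the service (this looks like Filebeat 1.x, judging by the prospectors/tls syntax; the config path is the usual default and may differ):

filebeat -configtest -c /etc/filebeat/filebeat.yml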

Have you looked for clues in the Logstash log? What's the output of `netstat -an | grep 5044`?

I had yesterday off, so I didn't check on here.

My results for `netstat -an | grep 5044` are below:

tcp        0      0 192.168.10.21:56926     192.168.10.21:5044      FIN_WAIT2
tcp6       0      0 :::5044                 :::*                    LISTEN
tcp6      54      0 192.168.10.21:5044      192.168.10.21:56926     CLOSE_WAIT

I had a look at the logs, and Logstash seems to be having trouble reaching Elasticsearch. Would that cause a failure to bind 5044?

{:timestamp=>"2018-01-25T03:37:57.576000+1100", :message=>"Cannot get new connection from pool.", :class=>"Elasticsearch::Transport::Transport::Error", :backtrace=>[
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:193:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:32:in `hosts'",
"org/jruby/ext/timeout/Timeout.java:147:in `timeout'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/sniffer.rb:31:in `hosts'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:76:in `reload_connections!'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:72:in `sniff!'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'",
"org/jruby/ext/thread/Mutex.java:149:in `synchronize'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:60:in `start_sniffing!'",
"org/jruby/RubyKernel.java:1479:in `loop'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:59:in `start_sniffing!'"], :level=>:error}

If the pipeline has stalled because the outputs aren't processing messages, then I believe the beats input will reject connections to apply backpressure. So yes, fix the ES problem first.
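A quick way to confirm Elasticsearch is reachable from the Logstash box is to hit the cluster health endpoint with the same host/port as your output config:

curl 'http://localhost:9200/_cluster/health?pretty'

And since the backtrace above comes from the sniffer, temporarily setting sniffing => false in 30-elasticsearch-output.conf should help separate a sniffing/discovery problem from Elasticsearch being unreachable altogether.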
