Filebeat not forwarding to Logstash

I can't seem to get Filebeat to send logs to Logstash. I'm trying to set up the ELK stack on a remote server over SSH and visualize the logs in Kibana. I've read some of the other posts but still can't figure out the error. I've posted my filebeat.yml below.

filebeat.prospectors:

- input_type: log

  paths:
    - /var/log/auth.log
    # - /var/log/*.log
    - /var/log/syslog

  document_type: syslog

output.logstash:
  hosts: ["hostname:5044"]
  bulk_max_size: 1024

  tls:
    certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

My logstash.conf:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"

    ssl_verify_mode => "force_peer"
    ssl_certificate_authorities => ["/etc/pki/tls/certs/filebeat.crt"]
  }
}

...

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

I am receiving an error when tailing my Filebeat log:

2017-02-16T10:24:24-05:00 DBG Try to publish 1024 events to logstash with window size 1
2017-02-16T10:24:24-05:00 DBG handle error: read tcp elk_privateip:52790->elk_privateip:5044: read: connection reset by peer
2017-02-16T10:24:24-05:00 DBG 0 events out of 1024 events sent to logstash. Continue sending
2017-02-16T10:24:24-05:00 DBG close connection
2017-02-16T10:24:24-05:00 DBG closing
2017-02-16T10:24:24-05:00 ERR Failed to publish events caused by: read tcp elk_privateip->elk_privateip: read: connection reset by peer
2017-02-16T10:24:24-05:00 INFO Error publishing events (retrying): read tcp elk_privateip->elk_privateip: read: connection reset by peer
2017-02-16T10:24:24-05:00 DBG close connection

What Filebeat version are you using?

version 5.2

For 5.x the configuration options are named ssl and not tls. This was changed to be consistent across Elastic projects. See https://www.elastic.co/guide/en/beats/filebeat/current/configuration-output-ssl.html
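
For example, rewriting the output section from your config with the renamed options would look something like this (hosts and certificate path taken from your post):

output.logstash:
  hosts: ["hostname:5044"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]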

Ah, interesting. Now when I try to restart Filebeat, it throws an error loading the YAML: line 104: did not find expected key. But I've commented out this section:

output.logstash:

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/tls/certs/filebeat.crt"

  # Client Certificate Key
  #ssl.key: "/etc/pki/tls/private/filebeat.key"

Do these need to match my logstash.conf inputs? I re-edited my original post with my current logstash.conf.

I don't see where you defined hosts: [xxx]. That's probably what it's complaining about with that error.

And you have commented out the ssl.certificate and ssl.key. Those will be needed since force_peer is used in Logstash.
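
As a sanity check (a sketch, using the paths from your config and assuming an RSA key), you can verify with openssl that the client certificate and key actually belong together by comparing their modulus hashes:

openssl x509 -noout -modulus -in /etc/pki/tls/certs/filebeat.crt | openssl md5
openssl rsa -noout -modulus -in /etc/pki/tls/private/filebeat.key | openssl md5

If the two hashes differ, the force_peer handshake will fail no matter how Filebeat is configured.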

Would this be correct?

output.logstash:
    # The Logstash hosts
        hosts: ["elkserver:5044"]


    bulk_max_size: 1024

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
    ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

  # Certificate for SSL client authentication
  ssl.certificate: "/etc/pki/tls/certs/filebeat.crt"

  # Client Certificate Key
  ssl.key: "/etc/pki/tls/private/filebeat.key"

No. It's a YAML configuration file and in YAML indentation is very important.

output.logstash:
  hosts: ["elkserver:5044"]
  bulk_max_size: 1024
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  ssl.certificate: "/etc/pki/tls/certs/filebeat.crt"
  ssl.key: "/etc/pki/tls/private/filebeat.key"
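
Before restarting the service, you can also run Filebeat's built-in configuration check so YAML errors are caught immediately (assuming the default config path):

filebeat -configtest -e -c /etc/filebeat/filebeat.yml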

Thanks, but it is still throwing an error. No events are being published.

Please repost the complete configuration file. Use the </> button to format it.

logstash.conf

input {
  beats {
    port => 5044
    ssl => true  # enable TLS/SSL

    # configure logstash server certificate being presented to filebeat
    ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
    ssl_key => "/etc/pki/tls/private/logstash.key"

    # configure client auth + filebeat cert for validation
    ssl_verify_mode => "force_peer"
    ssl_certificate_authorities => ["/etc/pki/tls/certs/filebeat.crt"]
  }
}


filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

filebeat.yml:

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
   
    - /var/log/auth.log
   # - /var/log/*.log
    - /var/log/syslog
  
  document_type: syslog


#----------------------------- Logstash output --------------------------------
output.logstash:
# The Logstash hosts
    hosts: ["elkserver:5044"]
   
    bulk_max_size: 1024 
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
    ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]
  
  # Certificate for SSL client authentication
    ssl.certificate: "/etc/pki/tls/certs/filebeat.crt"

  # Client Certificate Key
    ssl.key: "/etc/pki/tls/private/filebeat.key"
#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
logging.level: debug
logging.to_files: true
logging.to_syslog: false
logging.files:
path: /var/log/mybeat
name: mybeat.log
keepfiles: 7
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

The logging configuration is invalid due to indentation. There is an example provided with the Beat in filebeat.full.yml that shows the correct indentation.

logging.files:
  path: /var/log/mybeat
  name: mybeat.log
  keepfiles: 7

What's in the Filebeat log file?

tail -f /var/log/mybeat/mybeat.log

2017-02-16T15:24:26-05:00 DBG  Check file for harvesting: /var/log/auth.log
2017-02-16T15:24:26-05:00 DBG  Update existing file for harvesting: /var/log/auth.log, offset: 334917
2017-02-16T15:24:26-05:00 DBG  Harvester for file is still running: /var/log/auth.log
2017-02-16T15:24:26-05:00 DBG  Check file for harvesting: /var/log/syslog
2017-02-16T15:24:26-05:00 DBG  Update existing file for harvesting: /var/log/syslog, offset: 315276
2017-02-16T15:24:26-05:00 DBG  Harvester for file is still running: /var/log/syslog
2017-02-16T15:24:26-05:00 DBG  Prospector states cleaned up. Before: 2, After: 2
2017-02-16T15:24:29-05:00 DBG  connect
2017-02-16T15:24:29-05:00 ERR Connecting error publishing events (retrying): dial tcp server:5044: getsockopt: connection refused
2017-02-16T15:24:29-05:00 DBG  send fail
2017-02-16T15:24:36-05:00 DBG  Run prospector
2017-02-16T15:24:36-05:00 DBG  Start next scan
2017-02-16T15:24:36-05:00 DBG  Check file for harvesting: /var/log/auth.log
2017-02-16T15:24:36-05:00 DBG  Update existing file for harvesting: /var/log/auth.log, offset: 334917

tail -f /var/log/filebeat/filebeat

2017-02-16T14:42:52-05:00 INFO Stopping spooler
2017-02-16T14:42:52-05:00 DBG  Spooler has stopped
2017-02-16T14:42:52-05:00 DBG  Shutting down sync publisher
2017-02-16T14:42:52-05:00 INFO Stopping Registrar
2017-02-16T14:42:52-05:00 INFO Ending Registrar
2017-02-16T14:42:52-05:00 DBG  Write registry file: /var/lib/filebeat/registry
2017-02-16T14:42:52-05:00 DBG  Registry file updated. 0 states written.
2017-02-16T14:42:52-05:00 INFO Total non-zero values:  filebeat.harvester.closed=2 filebeat.harvester.started=2 libbeat.publisher.published_events=2046 registrar.writes=1
2017-02-16T14:42:52-05:00 INFO Uptime: 2.881754748s
2017-02-16T14:42:52-05:00 INFO filebeat stopped.

Also, if I run sudo service logstash configtest, I get:

sudo service logstash configtest

/etc/init.d/logstash: 156: /etc/init.d/logstash: /opt/logstash/bin/logstash: not found

It seems like there's a problem with the connection to Logstash. And based on the other output you pasted, it seems like there's a problem with your Logstash installation.
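
A few quick checks might narrow down the installation problem (a sketch, assuming a Debian/Ubuntu package install):

dpkg -l logstash                # confirm the package is installed and note the version
ls /usr/share/logstash/bin      # 5.x packages install here, not under /opt/logstash
sudo systemctl status logstash  # or: sudo service logstash status

An init script that points at /opt/logstash/bin/logstash normally belongs to an older 2.x package, so a leftover script from a previous version could explain the "not found" error.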

Is there a way to do a clean uninstall? I'd like to start over, but I'm worried about deleting everything. I just don't know how to debug the error at this point.

I am editing here because I have reached the maximum number of edits.

It was installed by someone else, and I am now accessing the remote server. I believe it was installed as a package via apt-get.

How did you install it?

I think the problem now is that /opt/logstash/bin/logstash is not found, so I am missing the binaries. My /opt/logstash has no bin directory; the only path is /opt/logstash/vendor/bundle/jruby/1.9, so my init.d script can't find the executable. I'm just not sure what needs to be fixed.
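
If it really was installed via apt-get as I believe, a clean reinstall might look like this (a sketch; I would back up /etc/logstash/conf.d first):

sudo apt-get purge logstash
sudo rm -rf /opt/logstash       # remove the leftover tree from the broken install
sudo apt-get update
sudo apt-get install logstash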