Can't get filebeat to talk to logstash 5.0 via TLS/SSL - PrivateKeyConverter.generatePkcs8

The question:

Can someone help me figure out why I can't get filebeat to talk to logstash over TLS/SSL?

## The Error:

Here's the error observed in logstash.log (posted as a GitHub gist because I'd exceeded the 5,000 character limit):

## The Setup:

Servers

  • 2 servers.

$> uname -a
Linux elkserver 3.10.0-327.36.2.el7.x86_64 #1 SMP Mon Oct 10 23:08:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$> cat /etc/*-release
CentOS Linux release 7.2.1511 (Core)

  • SELinux is Permissive (soz).
  • Firewalls are off (mazza soz).
  • One server runs elasticsearch and logstash; one runs filebeat.

Elasticsearch

$> /usr/share/elasticsearch/bin/elasticsearch -version
Version: 2.4.1, Build: c67dc32/2016-09-27T18:57:55Z, JVM: 1.8.0_111

Logstash

$> /usr/share/logstash/bin/logstash -V
logstash 5.0.0-alpha5

Filebeat

$> /usr/share/filebeat/bin/filebeat -version
filebeat version 5.0.0 (amd64), libbeat 5.0.0

Config:

  • Logstash
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/filebeat-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/filebeat-forwarder.key"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
  • Filebeat.yml
output:
  logstash:
    enabled: true
    hosts:
      - "<my ip address>:5044"
    timeout: 15
    tls:
      certificate_authorities:
        - /etc/pki/tls/certs/filebeat-forwarder.crt
filebeat:
  prospectors:
    -
      paths:
        - /var/log/syslog
        - /var/log/auth.log
      document_type: syslog
    -
      paths:
        - /var/log/nginx/access.log
      document_type: nginx-access
  • File: openssl_extras.cnf:

[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no

[req_distinguished_name]
C = TG
ST = Togo
L = Lome
O = Private company
CN = *

[v3_req]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:TRUE
subjectAltName = @alt_names

[alt_names]
DNS.1 = *
DNS.2 = *.*
DNS.3 = *.*.*
DNS.4 = *.*.*.*
DNS.5 = *.*.*.*.*
DNS.6 = *.*.*.*.*.*
DNS.7 = *.*.*.*.*.*.*
IP.1 = <my ip address>

The command used to create the cert:

$> openssl req -subj '/CN=elkserver.system.local/' -config /etc/pki/tls/openssl_extras.cnf \
   -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout /etc/pki/tls/private/filebeat-forwarder.key \
   -out /etc/pki/tls/certs/filebeat-forwarder.crt
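
Since the failure named in the title is inside PrivateKeyConverter.generatePkcs8, one thing worth checking is what format the generated key and cert actually ended up in. A quick sanity-check sketch (standard openssl commands; the final PKCS#8 conversion is only a guess at a workaround, not a confirmed fix):

# PKCS#1 keys start "-----BEGIN RSA PRIVATE KEY-----"; PKCS#8 keys start "-----BEGIN PRIVATE KEY-----"
$> head -n 1 /etc/pki/tls/private/filebeat-forwarder.key
# Check the key parses cleanly
$> openssl rsa -in /etc/pki/tls/private/filebeat-forwarder.key -check -noout
# Check the SANs actually made it into the cert
$> openssl x509 -in /etc/pki/tls/certs/filebeat-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'
# If the key turns out to be PKCS#1, an explicit conversion to PKCS#8 is one thing to try
$> openssl pkcs8 -topk8 -nocrypt \
   -in /etc/pki/tls/private/filebeat-forwarder.key \
   -out /etc/pki/tls/private/filebeat-forwarder-pkcs8.key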

I'm encountering similar errors on logstash 5.0.0-alpha5 (things worked fine on v2.4). I truncated the error to comply with the 5,000 character limit on this forum.

 {
    "timestamp": "2016-11-01T11:33:55.204000+0000",
    "message": "Pipeline aborted due to error",
    "exception": {
        "cause": null,

With the stack-trace:

        "org.logstash.netty.PrivateKeyConverter.generatePkcs8(org/logstash/netty/PrivateKeyConverter.java:43)",
        "org.logstash.netty.PrivateKeyConverter.convert(org/logstash/netty/PrivateKeyConverter.java:39)",
        "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)",
        "RUBY.create_server(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-3.1.0.beta4-java/lib/logstash/inputs/beats.rb:139)",
        "RUBY.register(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-3.1.0.beta4-java/lib/logstash/inputs/beats.rb:132)",
        "RUBY.start_inputs(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:311)",
        "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)",
        "RUBY.start_inputs(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310)",
        "RUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:187)",
        "RUBY.run(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:145)",
        "RUBY.start_pipeline(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:240)",
        "java.lang.Thread.run(java/lang/Thread.java:745)"

Again, the problem appears to be with the Netty method PrivateKeyConverter.generatePkcs8.

@Ryan_Grannell - thanks for posting. Good to know I'm not the only one. Annoyingly, I don't know enough to debug the traceback properly.

Is this something I/we should raise an issue/bug report for?

Can anyone suggest a course of action I can take to get some more info?

@Ryan_Grannell - someone kindly provided an answer on Stack Overflow.

I've yet to try it, but thought I'd share.

@robrant thanks for posting that link; it helped a little. I updated my configuration according to the advice given on Stack Overflow, but unfortunately I'm still getting errors.

Config:

Logstash

beats {
  port                        => *****
  ssl                         => true
  ssl_certificate_authorities => ["/etc/logstash/beats.crt"]
  ssl_certificate             => "/etc/logstash/beats.crt"
  ssl_key                     => "/etc/logstash/beats.key"
  ssl_verify_mode             => "force_peer"
}
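
Note that with ssl_verify_mode => "force_peer", logstash will drop any client that doesn't present a certificate, so the handshake can be exercised from the filebeat box before involving filebeat itself. A sketch, with host/port placeholders:

# A successful mutual-TLS handshake should end with "Verify return code: 0 (ok)"
$> openssl s_client -connect <logstash host>:<port> \
   -cert /etc/filebeat/beats.crt \
   -key /etc/filebeat/beats.key \
   -CAfile /etc/filebeat/beats.crt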

Filebeat

output:
  logstash:
    hosts: ["***"]
    ssl:
      certificate_authorities: [ "/etc/filebeat/beats.crt" ]
      certificate:             "/etc/filebeat/beats.crt"
      key:                     "/etc/filebeat/beats.key"
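
On the filebeat side, the config can be validated without restarting the service; filebeat 1.x and 5.x both accept a -configtest flag (a sketch, assuming the default config path):

$> filebeat -configtest -c /etc/filebeat/filebeat.yml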

And the stack-trace

    "org.logstash.netty.PrivateKeyConverter.generatePkcs8(org/logstash/netty/PrivateKeyConverter.java:43)",
    "org.logstash.netty.PrivateKeyConverter.convert(org/logstash/netty/PrivateKeyConverter.java:39)",
    "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)",
    "RUBY.create_server(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-3.1.0.beta4-java/lib/logstash/inputs/beats.rb:139)",
    "RUBY.register(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-beats-3.1.0.beta4-java/lib/logstash/inputs/beats.rb:132)",
    "RUBY.start_inputs(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:311)",
    "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)",
    "RUBY.start_inputs(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310)",
    "RUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:187)",
    "RUBY.run(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:145)",
    "RUBY.start_pipeline(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:240)",
    "java.lang.Thread.run(java/lang/Thread.java:745)"

I think your suggestion to raise a bug report was sound; I'll raise one now.

Hold fire, Ryan - I've got more to add in an hour when I'm back online.


So, I've overcome that error. I've completely changed my config (which I'll post), and while that alone didn't explicitly fix the Ruby error, I wasn't going to be able to progress beyond it without fixing my config anyway. The error (at least in my case) was caused by this versioning issue:

which links to this PR:

So... in the short term, while this change hasn't made it into a release, here's what I did. @Ryan_Grannell - this might work for you if your filebeat and logstash config is already in good order.

  1. Check your version of the logstash beats input plugin.

    $> sudo updatedb
    $> locate bin/logstash-plugin
    $> cd /usr/share/logstash/ # or wherever yours is installed
    $> bin/logstash-plugin list --verbose | grep beat

  2. Mine confirmed that I was on the beta release rather than 3.1.4, so I installed version 3.1.4 of the plugin.

    $> bin/logstash-plugin install --version 3.1.4 logstash-input-beats

  3. I think I had some weirdness where it continued to error after this install, but it was late, so it might have just been me. Run the bin/logstash-plugin list --verbose command again just to make sure you've now got the right version.
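
For reference, the listing should now report the pinned version — something like this (a sketch; the exact output format varies a little between Logstash releases):

    $> bin/logstash-plugin list --verbose | grep beats
    logstash-input-beats (3.1.4)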

I've now got an EOF error in the filebeat log, so I'll be kicking around discuss.elastic.co for a while yet... :slight_smile:


@robrant Thanks a lot! That worked; both filebeat and logstash started correctly.

I ran into the same EOF error as you; this GitHub issue seems to track the problem.

I imagine my problem is related to

2016/11/08 16:20:45.273578 single.go:77: INFO Error publishing events (retrying): read tcp 10.0.0.4:32864->****************:***************: read: connection reset by peer

in the output log:

2016/11/08 16:20:42.659130 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2016/11/08 16:20:42.659794 logstash.go:106: INFO Max Retries set to: 3
2016/11/08 16:20:42.760011 outputs.go:126: INFO Activated logstash as output plugin.
2016/11/08 16:20:42.760080 publish.go:288: INFO Publisher name: *****************-ops-vm-logstash-jaw-0
2016/11/08 16:20:42.760276 async.go:78: INFO Flush Interval set to: 1s
2016/11/08 16:20:42.760302 async.go:84: INFO Max Bulk Size set to: 2048
2016/11/08 16:20:42.760337 beat.go:168: INFO Init Beat: filebeat; Version: 1.3.1
2016/11/08 16:20:42.760688 beat.go:194: INFO filebeat sucessfully setup. Start running.
2016/11/08 16:20:42.760731 registrar.go:68: INFO Registry file set to: /etc/filebeat/.filebeat
2016/11/08 16:20:42.760846 prospector.go:133: INFO Set ignore_older duration to 0s
2016/11/08 16:20:42.760869 prospector.go:133: INFO Set close_older duration to 1h0m0s
2016/11/08 16:20:42.760924 prospector.go:133: INFO Set scan_frequency duration to 10s
2016/11/08 16:20:42.760969 prospector.go:90: INFO Invalid input type set:
2016/11/08 16:20:42.761016 prospector.go:93: INFO Input type set to: log
2016/11/08 16:20:42.761073 prospector.go:133: INFO Set backoff duration to 1s
2016/11/08 16:20:42.761123 prospector.go:133: INFO Set max_backoff duration to 10s
2016/11/08 16:20:42.761176 prospector.go:113: INFO force_close_file is disabled
2016/11/08 16:20:42.761259 prospector.go:143: INFO Starting prospector of type: log
2016/11/08 16:20:42.761412 log.go:115: INFO Harvester started for file: /var/log/logstash/logstash.log
2016/11/08 16:20:42.761675 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/11/08 16:20:42.762183 log.go:115: INFO Harvester started for file: /var/log/auth.log
2016/11/08 16:20:42.762313 crawler.go:78: INFO All prospectors initialised with 3 states to persist
2016/11/08 16:20:42.762355 registrar.go:87: INFO Starting Registrar
2016/11/08 16:20:42.762420 publish.go:88: INFO Start sending events to output
2016/11/08 16:20:42.763934 log.go:115: INFO Harvester started for file: /var/log/*****************/deployment.log
2016/11/08 16:20:45.273578 single.go:77: INFO Error publishing events (retrying): read tcp 10.0.0.4:32864->****************:***************: read: connection reset by peer
2016/11/08 16:20:45.273609 single.go:154: INFO send fail
2016/11/08 16:20:45.273619 single.go:161: INFO backoff retry: 1s
2016/11/08 16:20:46.382501 single.go:77: INFO Error publishing events (retrying): EOF
2016/11/08 16:20:46.382532 single.go:154: INFO send fail
2016/11/08 16:20:46.382632 single.go:161: INFO backoff retry: 2s
2016/11/08 16:20:48.490938 single.go:77: INFO Error publishing events (retrying): EOF
2016/11/08 16:20:48.490988 single.go:154: INFO send fail
2016/11/08 16:20:48.490999 single.go:161: INFO backoff retry: 4s
2016/11/08 16:20:52.557965 single.go:77: INFO Error publishing events (retrying): EOF
2016/11/08 16:20:52.558007 single.go:154: INFO send fail
2016/11/08 16:20:52.558019 single.go:161: INFO backoff retry: 8s
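
If anyone else hits this EOF / connection-reset loop, the next step is probably to re-run filebeat in the foreground with debug output enabled and watch what happens during the handshake. A sketch (-d "*" turns on all debug selectors):

$> filebeat -e -v -d "*" -c /etc/filebeat/filebeat.yml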