Filebeat - ERR Failed to publish events caused by: EOF

I'm using the Elasticsearch, Logstash, Kibana (ELK) Docker image (elk-docker) in the following environment:

# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core) 
# uname -a
Linux X 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
# rpm -q filebeat
filebeat-5.0.0-1.x86_64
# 

and I'm getting the following error while trying to run filebeat (output to logstash):

2016/11/05 19:17:34.996020 sync.go:85: ERR Failed to publish events caused by: EOF
2016/11/05 19:17:34.996044 single.go:91: INFO Error publishing events (retrying): EOF

filebeat.sh -httpprof localhost:6060

# nmap localhost -p5044

Starting Nmap 6.40 ( http://nmap.org ) at 2016-11-05 15:32 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up (-450s latency).
Other addresses for localhost (not scanned): 127.0.0.1
PORT     STATE SERVICE
5044/tcp open  unknown

Nmap done: 1 IP address (1 host up) scanned in 0.04 seconds
# 

filebeat.yml:

$ grep -vE '^?#' /etc/filebeat/filebeat.yml | grep -E ':|-'
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/all.log
output.logstash:
  hosts: ["localhost:5044"]
$ 

Please advise.

EOF means the underlying TCP connection was closed while sending or waiting for a response. Is there anything in the logstash logs? Have you tried running logstash with debug output? Can you share the beats input configuration from your logstash configuration files?
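For example, debug logging can usually be enabled like this (a sketch assuming Logstash 5.x; the binary and settings paths are assumptions and depend on your install):

# run logstash in the foreground with debug-level logging (paths are assumptions)
/usr/share/logstash/bin/logstash --path.settings /etc/logstash --log.level debug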

Yes, see the following.

Or rather, the actual error is:

[2016-11-05T20:44:42,999][ERROR][org.logstash.beats.BeatsHandler] Exception: not an SSL/TLS record: X

It looks like it has something to do with the connection (SSL/TLS) rather than the input configuration...

Looks like you configured SSL/TLS on logstash, but not on beats.

I tried enabling AND disabling it on the beats side (the default is off), yet I'm still getting the same error from logstash.

logstash: ssl set to "true"

$ cat 02-beats-input.conf 
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}
$ 

filebeat: ssl is on

$ grep -vE '^?#' /etc/filebeat/filebeat.yml | grep -E ':|-'
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/all.log
output.logstash:
  hosts: ["localhost:5044"]
$

Let me know if you find anything. I am unable to send to logstash as well. Sending beats directly to elasticsearch works fine! See my thread: Filebeat - first setup

I'm trying to get it going without any customization and am unable to do so... feel free to adjust your notifications to stay in the loop.

Checking your configs: this is not how TLS/SSL works. With a public/private key infrastructure you need the certificate + private key on the server, plus a certificate authority (or the self-signed certificate) on the client to verify the certificate presented by the server. Do not copy your private key to all your edge machines (it's called private because it's hopefully kept secret).

  1. On the filebeat side use ssl, not tls (the setting was renamed for the 5.0 release)
  2. Only store the public certificate (I guess you're using a self-signed certificate) on the machine filebeat is running on
  3. Configure output.logstash.ssl.certificate_authorities

Personally I'd recommend/prefer certificate chains with intermediate CAs over self-signed certificates.
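As a rough sketch of points 1-3 on the filebeat side (the certificate path is an assumption; point it at wherever you copied the logstash public certificate):

output.logstash:
  hosts: ["localhost:5044"]
  # CA / public certificate used to verify the logstash server (assumed path)
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-beats.crt"]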


@steffens

  1. I'm using filebeat version 5.0.0 (amd64), libbeat 5.0.0.
  2. I tried to use Let's Encrypt (and a self-signed certificate) in filebeat.yml
  3. I tried to use the Chain of Trust as ssl.certificate_authorities and got a different error:

ERR Connecting error publishing events (retrying): x509: certificate signed by unknown authority

I am not using any encryption. I managed to send the beats to logstash. In my logstash config, I output only to a file instead of outputting to elasticsearch. So there seems to be something wrong between logstash and elasticsearch. I will investigate further :slight_smile:
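For reference, a minimal file output for this kind of debugging could look like this (the path is just an example):

output {
  file {
    # write events to a local file instead of elasticsearch while debugging
    path => "/tmp/logstash-debug.log"
  }
}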

My problem was that elasticsearch was unable to receive the logs. I had forgotten to create a data path that the elasticsearch user had write access to. I could see it in the elasticsearch logs.

This behavior was weird to me since filebeat was able to send beats directly to elasticsearch on port 9200 without any problem. I don't know where it saved the data in that case.

By default filebeat stores events in the filebeat-* index and logstash in logstash-*. Without having seen your config, I can't tell where you store the data, though.
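You can check which indices actually received data with something like this (assuming elasticsearch is reachable on localhost:9200):

curl 'localhost:9200/_cat/indices/filebeat-*,logstash-*?v'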

It's a good idea to approach SSL step by step. If unencrypted filebeat->logstash is working now, let's introduce self-signed certificates.

We use this script to generate the self-signed certificate we use for automated integration tests. The /CN=.../ must match your logstash server's domain name so verification succeeds. Having confirmed the self-signed certificate works, we can continue with certificate chains or Let's Encrypt next.
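If you'd rather not use the script, an equivalent openssl one-liner would be something like the following (domain name and paths are assumptions; the CN must match the host you configure in filebeat):

openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -subj "/CN=logstash.example.com" \
  -keyout /etc/pki/tls/private/logstash.key \
  -out /etc/pki/tls/certs/logstash.crt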

I re-installed filebeat (my /etc/filebeat/filebeat.yml was from an older version) and now I have the ssl settings instead of tls; however, I'm still unable to make a connection from filebeat to logstash...

$ openssl s_client -connect localhost:5044
CONNECTED(00000003)
depth=0 CN = *
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = *
verify return:1
---
Certificate chain
 0 s:/CN=*
   i:/CN=*
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIC6zCCAdOgAwIBAgIJANPZwuf+5wTLMA0GCSqGSIb3DQEBCwUAMAwxCjAIBgNV
BAMMASowHhcNMTUxMjI4MTA0NTMyWhcNMjUxMjI1MTA0NTMyWjAMMQowCAYDVQQD
DAEqMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAp+jHFvhyYKiPXc7k
0c33f2QV+1hHNyW/uwcJbp5jG82cuQ41v70Z1+b2veBW4sUlDY3yAIEOPSUD8ASt
9m72CAo4xlwYKDvm/Sa3KJtDk0NrQiz6PPyBUFsY+Bj3xn6Nz1RW5YaP+Q1Hjnks
PEyQu4vLgfTSGYBHLD4gvs8wDWY7aaKf8DfuP7Ov74Qlj2GOxnmiDEF4tirlko0r
qQcvBgujCqA7rNoG+QDmkn3VrxtX8mKF72bxQ7USCyoxD4cWV2mU2HD2Maed3KHj
KAvDAzSyBMjI+qi9IlPN5MR7rVqUV0VlSKXBVPct6NG7x4WRwnoKjTXnr3CRADD0
4uvbQQIDAQABo1AwTjAdBgNVHQ4EFgQUVFurgDwdcgnCYxszc0dWMWhB3DswHwYD
VR0jBBgwFoAUVFurgDwdcgnCYxszc0dWMWhB3DswDAYDVR0TBAUwAwEB/zANBgkq
hkiG9w0BAQsFAAOCAQEAaLSytepMb5LXzOPr9OiuZjTk21a2C84k96f4uqGqKV/s
okTTKD0NdeY/IUIINMq4/ERiqn6YDgPgHIYvQheWqnJ8ir69ODcYCpsMXIPau1ow
T8c108BEHqBMEjkOQ5LrEjyvLa/29qJ5JsSSiULHvS917nVgY6xhcnRZ0AhuJkiI
ARKXwpO5tqJi6BtgzX/3VDSOgVZbvX1uX51Fe9gWwPDgipnYaE/t9TGzJEhKwSah
kNr+7RM+Glsv9rx1KcWcx4xxY3basG3/KwvsGAFPvk5tXbZ780VuNFTTZw7q3p8O
Gk1zQUBOie0naS0afype5qFMPp586SF/2xAeb68gLg==
-----END CERTIFICATE-----
subject=/CN=*
issuer=/CN=*
---
No client certificate CA names sent
Server Temp Key: ECDH, prime256v1, 256 bits
---
SSL handshake has read 1256 bytes and written 373 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: DAE27ADFB25F35A2939F15B632642EDFA20936CD55D181229099C9F885A651B3
    Session-ID-ctx: 
    Master-Key: AE00CB74EF3D21BE0BB21429F73E6CDEED06509698912833F90F683FA7495161DFBC626183E2CC8E3BE25D2BF6C8EB89
    Key-Arg   : None
    Krb5 Principal: None
    PSK identity: None
    PSK identity hint: None
    Start Time: 1478622543
    Timeout   : 300 (sec)
    Verify return code: 18 (self signed certificate)
---
closed
$ 

Again, when I try to use a valid certificate in ssl.certificate_authorities, I'm getting the following error:

ERR Connecting error publishing events (retrying): x509: certificate signed by unknown authority

And with a self-signed certificate, where do I get ca.pem?

So which filebeat version have you installed now?

If you're using a self-signed certificate, the self-signed certificate itself must be added to the certificate authorities.

When making changes, please also post the configuration file changes + the paths of your certificate files. It's somewhat difficult to follow your steps from prose alone.

E.g. the script I posted stores the public certificate at pki/tls/certs/logstash.crt and the private key at pki/tls/private/logstash.key.

Now logstash is configured to use SSL via:

input {
    beats {
      port => 5055
      ssl => true
      ssl_certificate => "/etc/pki/tls/certs/logstash.crt"
      ssl_key => "/etc/pki/tls/private/logstash.key"
  }
}

The certificate file is passed to filebeat (I'm using version 5.0 here):

output.logstash:
  hosts: ["logstash:5055"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash.crt"]

$ filebeat.sh -version
filebeat version 5.0.0 (amd64), libbeat 5.0.0
$ 

I had an older version installed before, and when I updated to version 5 my config file didn't get updated, so I was using the older configuration file. Now I've uninstalled it completely, removed /etc/filebeat as well, and re-installed the package, so going forward I'm on version 5 with the version 5 config as well.

Using the following configuration:

$ grep -vE '^?#' filebeat.yml | grep -E ':|-'
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/all.log
output.logstash:
  hosts: ["localhost:5044"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-beats.crt"]
$ 

I'm seeing some data in Kibana which made it through filebeat -> logstash, however I'm still seeing errors:

2016/11/08 17:29:30.381217 sync.go:85: ERR Failed to publish events caused by: EOF
2016/11/08 17:29:30.381234 single.go:91: INFO Error publishing events (retrying): EOF

and I'm also seeing these INFO messages:

2016/11/08 17:35:40.365637 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.logstash.published_but_not_acked_events=1 publish.events=1 libbeat.logstash.publish.write_bytes=1875 libbeat.publisher.published_events=1 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_and_acked_events=1 libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_bytes=1322 registrar.states.update=1 registrar.writes=1
2016/11/08 17:36:10.365648 logp.go:230: INFO Non-zero metrics in the last 30s: publish.events=2 libbeat.logstash.publish.write_bytes=845 registrar.states.update=2 libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.published_and_acked_events=2 libbeat.publisher.published_events=2 registrar.writes=1 libbeat.logstash.publish.read_bytes=35

It seems like the connection has been closed by logstash, so filebeat had to resend. Is there anything in the logstash logs (have you tried logstash debug mode?) about the connection being closed or idle?

Filebeat has a default connection timeout of 30 seconds, but logstash can close connections if it thinks a connection is idle (older logstash versions also close connections if the logstash pipeline becomes congested).
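If that's what's happening, you could experiment with the timeouts on both sides. A sketch (values are only examples; client_inactivity_timeout requires a recent logstash-input-beats plugin):

filebeat side:

output.logstash:
  hosts: ["localhost:5044"]
  # seconds to wait for ACKs from logstash before retrying (default 30)
  timeout: 60

logstash side:

input {
  beats {
    port => 5044
    # seconds of inactivity before logstash closes an idle connection
    client_inactivity_timeout => 300
  }
}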

I have logstash listening on many ports. Switching filebeat output from port 5042 to 5044 solved this issue for me.

Here is what I was seeing:

2016/11/22 18:23:38.288774 sync.go:85: ERR Failed to publish events caused by: EOF
2016/11/22 18:23:38.288803 single.go:91: INFO Error publishing events (retrying): EOF

Now the usual:
2016/11/22 18:23:59.663828 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=3821 libbeat.publisher.published_events=4093 libbeat.logstash.published_and_acked_events=2045 libbeat.logstash.publish.write_bytes=161522 registrar.states.update=2048 registrar.writes=1 registar.states.current=2 filebeat.harvester.open_files=1 filebeat.harvester.started=1 libbeat.logstash.published_but_not_acked_events=2048 filebeat.harvester.running=1 libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_errors=1 publish.events=2048
