# nmap localhost -p5044
Starting Nmap 6.40 ( http://nmap.org ) at 2016-11-05 15:32 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up (-450s latency).
Other addresses for localhost (not scanned): 127.0.0.1
PORT STATE SERVICE
5044/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.04 seconds
#
EOF means the underlying TCP connection was closed while sending or waiting for a response. Is there anything in the logstash logs? Have you tried running logstash with debug output? Can you share the beats input section of your logstash configuration files?
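For reference, a minimal beats input on the logstash side looks something like this (5044 is just the conventional port; adjust to your setup):

input {
  beats {
    port => 5044
  }
}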
Let me know if you find anything. I am unable to send to logstash as well. Sending beats directly to elasticsearch works fine! See my thread: Filebeat - first setup
Checking your configs: this is not how TLS/SSL works. With a public/private key infrastructure, the server needs the certificate plus the private key, and the client needs some certificate authority (or the self-signed certificate itself) so it can verify the certificate presented by the server. Do not copy your private key to all your edge machines (it's called private because it is hopefully kept secret).
- On the filebeat side use ssl, not tls (the setting was renamed for the 5.0 release)
- Only store the certificate (the public part; I guess you're using a self-signed certificate) on the machine filebeat is running on (see the sketch after this list)
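A minimal sketch of the filebeat 5.x side, assuming a self-signed certificate copied to the edge machine (the host and certificate path are placeholders):

output.logstash:
  hosts: ["logstash.example.com:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/logstash.crt"]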
I am not using any encryption. I managed to send the beats to logstash. In my logstash config I wrote the output to a file instead of to elasticsearch. So there seems to be something wrong between logstash and elasticsearch. I will investigate further.
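For anyone following along, the debugging trick was just a file output in place of the elasticsearch one, roughly like this (the path is an example):

output {
  file {
    path => "/tmp/logstash-debug.log"
  }
}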
My problem was that elasticsearch was unable to receive the logs: I had forgotten to create a data path that the elasticsearch user had write access to. I could see it in the elasticsearch logs.
This behavior was weird to me since filebeat was able to send beats directly to elasticsearch on port 9200 without any problem. I don't know where it saved the data then.
By default filebeat stores events in the filebeat-* index and logstash in logstash-*. Without having seen your config, I can't tell where your data ends up, though.
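An easy way to check which indices actually received data (assuming elasticsearch listens on localhost:9200):

curl 'localhost:9200/_cat/indices?v'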
It's a good idea to get into SSL step by step. If unencrypted filebeat->logstash is working now, let's introduce self-signed certificates.
We use this script to generate the self-signed certificate we use for automated integration tests. The /CN=.../ must match your logstash server's domain name so verification succeeds. Having confirmed the self-signed certificate works, we can continue with certificate chains or Let's Encrypt next.
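If you don't want to use the script, something along these lines should produce an equivalent self-signed certificate (the domain, validity, and output paths are placeholders):

openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout pki/tls/private/logstash.key \
  -out pki/tls/certs/logstash.crt \
  -days 365 \
  -subj "/CN=logstash.example.com"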
I re-installed filebeat (my /etc/filebeat/filebeat.yml was from an older version), and now I have ssl instead of tls, however I'm still unable to make a connection from filebeat to logstash...
$ openssl s_client -connect localhost:5044
CONNECTED(00000003)
depth=0 CN = *
verify error:num=18:self signed certificate
verify return:1
depth=0 CN = *
verify return:1
---
Certificate chain
0 s:/CN=*
i:/CN=*
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIC6zCCAdOgAwIBAgIJANPZwuf+5wTLMA0GCSqGSIb3DQEBCwUAMAwxCjAIBgNV
BAMMASowHhcNMTUxMjI4MTA0NTMyWhcNMjUxMjI1MTA0NTMyWjAMMQowCAYDVQQD
DAEqMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAp+jHFvhyYKiPXc7k
0c33f2QV+1hHNyW/uwcJbp5jG82cuQ41v70Z1+b2veBW4sUlDY3yAIEOPSUD8ASt
9m72CAo4xlwYKDvm/Sa3KJtDk0NrQiz6PPyBUFsY+Bj3xn6Nz1RW5YaP+Q1Hjnks
PEyQu4vLgfTSGYBHLD4gvs8wDWY7aaKf8DfuP7Ov74Qlj2GOxnmiDEF4tirlko0r
qQcvBgujCqA7rNoG+QDmkn3VrxtX8mKF72bxQ7USCyoxD4cWV2mU2HD2Maed3KHj
KAvDAzSyBMjI+qi9IlPN5MR7rVqUV0VlSKXBVPct6NG7x4WRwnoKjTXnr3CRADD0
4uvbQQIDAQABo1AwTjAdBgNVHQ4EFgQUVFurgDwdcgnCYxszc0dWMWhB3DswHwYD
VR0jBBgwFoAUVFurgDwdcgnCYxszc0dWMWhB3DswDAYDVR0TBAUwAwEB/zANBgkq
hkiG9w0BAQsFAAOCAQEAaLSytepMb5LXzOPr9OiuZjTk21a2C84k96f4uqGqKV/s
okTTKD0NdeY/IUIINMq4/ERiqn6YDgPgHIYvQheWqnJ8ir69ODcYCpsMXIPau1ow
T8c108BEHqBMEjkOQ5LrEjyvLa/29qJ5JsSSiULHvS917nVgY6xhcnRZ0AhuJkiI
ARKXwpO5tqJi6BtgzX/3VDSOgVZbvX1uX51Fe9gWwPDgipnYaE/t9TGzJEhKwSah
kNr+7RM+Glsv9rx1KcWcx4xxY3basG3/KwvsGAFPvk5tXbZ780VuNFTTZw7q3p8O
Gk1zQUBOie0naS0afype5qFMPp586SF/2xAeb68gLg==
-----END CERTIFICATE-----
subject=/CN=*
issuer=/CN=*
---
No client certificate CA names sent
Server Temp Key: ECDH, prime256v1, 256 bits
---
SSL handshake has read 1256 bytes and written 373 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: DAE27ADFB25F35A2939F15B632642EDFA20936CD55D181229099C9F885A651B3
Session-ID-ctx:
Master-Key: AE00CB74EF3D21BE0BB21429F73E6CDEED06509698912833F90F683FA7495161DFBC626183E2CC8E3BE25D2BF6C8EB89
Key-Arg : None
Krb5 Principal: None
PSK identity: None
PSK identity hint: None
Start Time: 1478622543
Timeout : 300 (sec)
Verify return code: 18 (self signed certificate)
---
closed
$
Again, when I tried to use a valid certificate in ssl.certificate_authorities, I got the following error:
ERR Connecting error publishing events (retrying): x509: certificate signed by unknown authority
If you're using a self-signed certificate, the self-signed certificate itself must be added to the certificate authorities.
When making changes, please also post the configuration file changes plus the paths of your certificate files. It's somewhat difficult to follow your steps from prose alone.
E.g. the script I posted stores the public certificate at pki/tls/certs/logstash.crt and the private key at pki/tls/private/logstash.key.
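With those paths, the two sides would be wired up roughly like this (the hostname is a placeholder):

Logstash side:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "pki/tls/certs/logstash.crt"
    ssl_key => "pki/tls/private/logstash.key"
  }
}

Filebeat side:

output.logstash:
  hosts: ["logstash.example.com:5044"]
  ssl.certificate_authorities: ["pki/tls/certs/logstash.crt"]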
$ filebeat.sh -version
filebeat version 5.0.0 (amd64), libbeat 5.0.0
$
I had an older version installed before, and when I updated to version 5 my config file didn't get updated, so I was still using the old configuration file. Now I have uninstalled it completely, removed /etc/filebeat as well, and re-installed the package, so going forward I'm on version 5 with the version 5 config as well.
I'm seeing some data in Kibana, which made it through filebeat -> logstash, however I'm still seeing errors:
2016/11/08 17:29:30.381217 sync.go:85: ERR Failed to publish events caused by: EOF
2016/11/08 17:29:30.381234 single.go:91: INFO Error publishing events (retrying): EOF
and I'm also seeing these INFO messages:
2016/11/08 17:35:40.365637 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.logstash.published_but_not_acked_events=1 publish.events=1 libbeat.logstash.publish.write_bytes=1875 libbeat.publisher.published_events=1 libbeat.logstash.publish.write_errors=1 libbeat.logstash.published_and_acked_events=1 libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_bytes=1322 registrar.states.update=1 registrar.writes=1
2016/11/08 17:36:10.365648 logp.go:230: INFO Non-zero metrics in the last 30s: publish.events=2 libbeat.logstash.publish.write_bytes=845 registrar.states.update=2 libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.published_and_acked_events=2 libbeat.publisher.published_events=2 registrar.writes=1 libbeat.logstash.publish.read_bytes=35
Seems like the connection has been closed by logstash, so filebeat had to resend. Anything in the logstash logs (have you tried logstash debug mode?) about closing connections or connections being idle?
Filebeat has a default connection timeout of 30 seconds, but logstash can close connections it considers idle (older logstash versions also close connections when the pipeline becomes congested).
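Both timeouts can be tuned. A sketch, assuming filebeat 5.x and a beats input plugin recent enough to support client_inactivity_timeout (the values are examples):

Filebeat side:

output.logstash:
  hosts: ["localhost:5044"]
  timeout: 30    # seconds to wait for a response from logstash; 30 is the default

Logstash side:

input {
  beats {
    port => 5044
    client_inactivity_timeout => 300    # seconds before an idle connection is closed
  }
}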
I have logstash listening on many ports. Switching the filebeat output from port 5042 to 5044 solved this issue for me.
Here is what I was seeing:
2016/11/22 18:23:38.288774 sync.go:85: ERR Failed to publish events caused by: EOF
2016/11/22 18:23:38.288803 single.go:91: INFO Error publishing events (retrying): EOF
Now the usual:
2016/11/22 18:23:59.663828 logp.go:230: INFO Non-zero metrics in the last 30s: libbeat.logstash.publish.read_bytes=3821 libbeat.publisher.published_events=4093 libbeat.logstash.published_and_acked_events=2045 libbeat.logstash.publish.write_bytes=161522 registrar.states.update=2048 registrar.writes=1 registar.states.current=2 filebeat.harvester.open_files=1 filebeat.harvester.started=1 libbeat.logstash.published_but_not_acked_events=2048 filebeat.harvester.running=1 libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_errors=1 publish.events=2048