Hello,
I'm running into SSL issues when trying to connect to Logstash:
2016/05/16 20:44:56.611376 log.go:113: INFO Harvester started for file: /var/log/messages
2016/05/16 20:44:56.612388 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/05/16 20:44:56.615841 log.go:113: INFO Harvester started for file: /var/log/secure
2016/05/16 20:44:56.615905 crawler.go:78: INFO All prospectors initialised with 2 states to persist
2016/05/16 20:44:56.615916 registrar.go:87: INFO Starting Registrar
2016/05/16 20:44:56.615932 publish.go:88: INFO Start sending events to output
2016/05/16 20:44:56.621501 client.go:100: DBG connect
2016/05/16 20:45:26.626260 transport.go:125: ERR SSL client failed to connect with: read tcp 192.168.2.102:53543->192.168.2.101:12345: i/o timeout
2016/05/16 20:45:26.626284 single.go:126: INFO Connecting error publishing events (retrying): read tcp 192.168.2.102:53543->192.168.2.101:12345: i/o timeout
2016/05/16 20:45:26.626295 single.go:152: INFO send fail
2016/05/16 20:45:26.626301 single.go:159: INFO backoff retry: 1s
2016/05/16 20:45:27.626558 client.go:100: DBG connect
2016/05/16 20:45:57.628309 transport.go:125: ERR SSL client failed to connect with: read tcp 192.168.2.102:53550->192.168.2.101:12345: i/o timeout
2016/05/16 20:45:57.628334 single.go:126: INFO Connecting error publishing events (retrying): read tcp 192.168.2.102:53550->192.168.2.101:12345: i/o timeout
2016/05/16 20:45:57.628339 single.go:152: INFO send fail
2016/05/16 20:45:57.628345 single.go:159: INFO backoff retry: 2s
2016/05/16 20:45:59.629454 client.go:100: DBG connect
2016/05/16 20:46:29.631021 transport.go:125: ERR SSL client failed to connect with: read tcp 192.168.2.102:53556->192.168.2.101:12345: i/o timeout
2016/05/16 20:46:29.631045 single.go:126: INFO Connecting error publishing events (retrying): read tcp 192.168.2.102:53556->192.168.2.101:12345: i/o timeout
2016/05/16 20:46:29.631050 single.go:152: INFO send fail
2016/05/16 20:46:29.631056 single.go:159: INFO backoff retry: 4s
2016/05/16 20:46:33.631147 client.go:100: DBG connect
2016/05/16 20:47:03.633512 transport.go:125: ERR SSL client failed to connect with: read tcp 192.168.2.102:53563->192.168.2.101:12345: i/o timeout
2016/05/16 20:47:03.633536 single.go:126: INFO Connecting error publishing events (retrying): read tcp 192.168.2.102:53563->192.168.2.101:12345: i/o timeout
2016/05/16 20:47:03.633541 single.go:152: INFO send fail
2016/05/16 20:47:03.633548 single.go:159: INFO backoff retry: 8s
2016/05/16 20:47:11.634561 client.go:100: DBG connect
2016/05/16 20:47:41.636067 transport.go:125: ERR SSL client failed to connect with: read tcp 192.168.2.102:53570->192.168.2.101:12345: i/o timeout
2016/05/16 20:47:41.636090 single.go:126: INFO Connecting error publishing events (retrying): read tcp 192.168.2.102:53570->192.168.2.101:12345: i/o timeout
2016/05/16 20:47:41.636095 single.go:152: INFO send fail
2016/05/16 20:47:41.636102 single.go:159: INFO backoff retry: 16s
2016/05/16 20:47:57.637345 client.go:100: DBG connect
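The delays in the log above (1s, 2s, 4s, 8s, 16s) show the publisher backing off exponentially between reconnect attempts. A minimal sketch of that retry policy (an illustration of the pattern, not Filebeat's actual code):

```python
from itertools import islice

def backoff_intervals(initial=1, max_backoff=60):
    """Yield retry delays that double after each failed attempt,
    capped at max_backoff -- the pattern visible in the log above."""
    delay = initial
    while True:
        yield min(delay, max_backoff)
        delay = min(delay * 2, max_backoff)

# First five delays match the log: 1s, 2s, 4s, 8s, 16s
print(list(islice(backoff_intervals(), 5)))  # [1, 2, 4, 8, 16]
```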
Here is my filebeat config:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
        # - /var/log/*.log
      input_type: log
      document_type: syslog
      scan_frequency: 1s
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["logstash.ops.perka.com:12345"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/filebeat/ssl.cert"]
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
Here is my logstash config:
input {
  beats {
    port => 12345
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/filebeat/ssl.cert"
    ssl_key => "/etc/pki/tls/certs/filebeat/ssl.key"
  }
}
output {
  stdout {
  }
}
Any input is greatly appreciated.
Does Filebeat connect fine if you add the insecure: true
option to your Filebeat tls config?
If so, then you probably have a cert issue. You can check the docs on how to test the certs independently of Filebeat. https://www.elastic.co/guide/en/beats/filebeat/current/configuring-tls-logstash.html
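Besides the curl test the docs describe, the TLS endpoint can be probed independently of Filebeat with a few lines of Python. This is a hedged sketch (the host, port, and CA path in the example comment are taken from the configs in this thread; this probe is not part of the official docs):

```python
import socket
import ssl

def tls_probe(host, port, cafile=None, timeout=10):
    """Open a TCP connection and attempt a TLS handshake, verifying the
    server certificate against `cafile`. Raises ssl.SSLError if the
    handshake or certificate validation fails."""
    ctx = ssl.create_default_context(cafile=cafile)
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

# Example matching this thread's setup:
# tls_probe("logstash.ops.perka.com", 12345,
#           cafile="/etc/pki/tls/certs/filebeat/ssl.cert")
```

If the handshake succeeds this returns the negotiated TLS version; a certificate problem surfaces as an ssl.SSLError, while a firewall drop shows up as a timeout before any TLS error.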
Thank you for your speedy reply!
So, when I remove the tls option from my filebeat config, the connecting error is gone, but I get a new error: Error publishing events
2016/05/17 14:14:53.343347 log.go:113: INFO Harvester started for file: /var/log/messages
2016/05/17 14:14:53.343820 spooler.go:77: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/05/17 14:14:53.346318 log.go:113: INFO Harvester started for file: /var/log/secure
2016/05/17 14:14:53.346363 crawler.go:78: INFO All prospectors initialised with 2 states to persist
2016/05/17 14:14:53.346373 registrar.go:87: INFO Starting Registrar
2016/05/17 14:14:53.346385 publish.go:88: INFO Start sending events to output
2016/05/17 14:16:23.369625 single.go:76: INFO Error publishing events (retrying): read tcp 192.168.2.102:54218->192.168.2.101:12345: i/o timeout
2016/05/17 14:16:23.369652 single.go:152: INFO send fail
2016/05/17 14:16:23.369659 single.go:159: INFO backoff retry: 1s
2016/05/17 14:17:54.377703 single.go:76: INFO Error publishing events (retrying): read tcp 192.168.2.102:54235->192.168.2.101:12345: i/o timeout
2016/05/17 14:17:54.377729 single.go:152: INFO send fail
2016/05/17 14:17:54.377740 single.go:159: INFO backoff retry: 2s
2016/05/17 14:19:26.395380 single.go:76: INFO Error publishing events (retrying): read tcp 192.168.2.102:54251->192.168.2.101:12345: i/o timeout
filebeat.yml:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
        # - /var/log/*.log
      input_type: log
      document_type: syslog
      scan_frequency: 1s
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["logstash.ops.perka.com:12345"]
    bulk_max_size: 1024
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
logstash.conf (I've removed the ssl options):
input {
  beats {
    port => 12345
  }
}
output {
  stdout {
  }
}
When I add it, I get the original error:
2016/05/17 14:56:13.436030 transport.go:125: ERR SSL client failed to connect with: read tcp 192.168.2.102:54658->192.168.2.101:12345: i/o timeout
2016/05/17 14:56:13.436053 single.go:126: INFO Connecting error publishing events (retrying): read tcp 192.168.2.102:54658->192.168.2.101:12345: i/o timeout
2016/05/17 14:56:13.436065 single.go:152: INFO send fail
2016/05/17 14:56:13.436071 single.go:159: INFO backoff retry: 1s
filebeat.yml:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
        # - /var/log/*.log
      input_type: log
      document_type: syslog
      scan_frequency: 1s
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["logstash.ops.perka.com:12345"]
    bulk_max_size: 1024
    tls:
      insecure: true
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
Interesting, I was hoping that would allow it to connect which would indicate a certificate issue.
What output do you get if you run the curl test described in the docs from the Filebeat host?
This is the output:
curl -v --cacert /etc/pki/tls/certs/filebeat/ssl.cert logstash.ops.perka.com:12345
GET / HTTP/1.1
User-Agent: curl/7.29.0
Host: logstash.ops.perka.com:12345
Accept: */*
Was that done with SSL enabled on the logstash beats input?
You should run it with the protocol in the URL so that curl knows to negotiate a TLS connection, like: curl -v --cacert /etc/pki/tls/certs/filebeat/ssl.cert https://logstash.ops.perka.com:12345
Output from that command:
curl -v --cacert /etc/pki/tls/certs/filebeat/ssl.cert https://logstash.ops.perka.com:12345
About to connect() to logstash.ops.perka.com port 12345 (#0)
Trying 192.168.2.101...
Connected to logstash.ops.perka.com (192.168.2.101) port 12345 (#0)
Initializing NSS with certpath: sql:/etc/pki/nssdb
CAfile: /etc/pki/tls/certs/filebeat/ssl.cert
CApath: none
NSS error -5961 (PR_CONNECT_RESET_ERROR)
TCP connection reset by peer
Closing connection 0
curl: (35) TCP connection reset by peer
I see... the handshake doesn't take place.
Based on that output and the fact that Filebeat -> Logstash did not work when you completely disabled TLS, it seems like there are some connectivity issues using port 12345.
I would assume that if you try to telnet to port 12345 you get a "connection reset" too.
Check any firewalls you have between the hosts. It could be on the Filebeat host, the network in between, or on the Logstash host.
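The two failure modes seen so far point in the same direction but are worth distinguishing: curl got "connection reset by peer" (something actively tore the connection down), while Filebeat saw "i/o timeout" (packets silently dropped, typical of a firewall DROP rule). A plain-TCP sketch for telling these apart (the hostname in the example comment comes from this thread; this is an illustration, not an official tool):

```python
import socket

def probe(host, port, timeout=5.0):
    """Classify a TCP connection attempt to host:port: 'open' if it
    succeeds, 'timeout' if packets are silently dropped (firewall DROP),
    'refused' if nothing is listening or a REJECT rule answers, and
    'reset' if the peer tears the connection down."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except socket.timeout:
        return "timeout"
    except ConnectionRefusedError:
        return "refused"
    except ConnectionResetError:
        return "reset"

# Example: probe("logstash.ops.perka.com", 12345)
```

Running this from the Filebeat host against the Logstash host, and then from the Logstash host against itself, narrows down whether the block is local, in the network, or on the receiving machine.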
steffens (Steffen Siering)
May 12, 2017, 11:27am
@rohan89 please create your own discussion instead of hijacking a very old one (maybe including docker, kubernetes in the title). Please format logs/configs using the </> button. Having to read unformatted YAML is kind of painful.