Trouble getting Filebeat to ship to Logstash over SSL

Dear all,

I have been stuck on this problem for about two weeks. I installed the X-Pack plugin for Kibana, Elasticsearch, and Logstash. Kibana connects to Elasticsearch with a username/password, and Logstash connects to Elasticsearch with a username/password; both of those work. Filebeat is configured to connect to Logstash over SSL, but that connection fails.

The Logstash service runs normally and logs no errors, but Filebeat reports errors.

Filebeat error log:

2017/04/17 06:15:33.530280 single.go:140: ERR Connecting error publishing events (retrying): dial tcp 218.241.115.114:5043: getsockopt: connection refused   ## this one occurred while Logstash was not yet started

ERR Failed to publish events caused by: read tcp 218.241.115.71:34914->218.241.115.114:5043: read: connection reset by peer

2017/04/17 06:19:29.782544 single.go:91: INFO Error publishing events (retrying): read tcp x.x.x.x:34904->y.y.y.y:5043: read: connection reset by peer
2017/04/17 06:19:56.492100 metrics.go:39: INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=1 libbeat.logstash.publish.read_errors=1 libbeat.logstash.publish.write_bytes=265 libbeat.logstash.published_but_not_acked_events=1028

Below is my Logstash config:

input {
  beats {
    port => "5043"
    ssl => true
    ssl_certificate_authorities => ["/home/ops/server_cert/cacert.pem"]
    ssl_certificate => "/home/ops/server_cert/newcert.pem"
    ssl_key => "/home/ops/server_cert/newkey.pkcs8"
    ssl_verify_mode => "force_peer"
    key_passpharse => "1234"
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:datatime} %{LOGLEVEL:loglevel} - %{IP:client} %{NUMBER:duration} %{WORD:qtype} %{DATA:domain} %{NUMBER:nb1} %{NUMBER:nb2} " }
  }
}
output {
  elasticsearch {
    hosts => ["1.1.1.1:9200","1.1.1.2:9200"]
    index => "logstash-%{[fields][logtype]}-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "abc123"
  }
}

Below is my Filebeat config:

filebeat.prospectors:
- input_type: log
  paths:
    - /home/ops/nginx/logs/server-perf.log
  exclude_lines: ["DEBUG -"]
  fields:
    logtype: "servernginx"

output.logstash:
  hosts: ["1.1.1.2:5043"]
  protocol: https
  supported_protocols: SSLv3
  sls.certificate_authorities: ["/home/ops/client_cert/cacert.pem"]
  sls.certificate: "/home/ops/client_cert/newcert.pem"
  sls.key: "/home/ops/client_cert/newkey.pkcs8"
  key_passpharse: "1234"

logging.level: "debug"

You might have a typo: sls.certificate instead of ssl.certificate, sls.key instead of ssl.key, etc.
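For reference, a corrected version of that Filebeat output block might look something like the sketch below. This is only a guess at the intended config: ssl.key_passphrase is the documented option name (the posted key_passpharse is misspelled), and as far as I can tell the protocol setting does not apply to the Logstash output, since TLS is enabled by the presence of the ssl.* settings. The Logstash beats input side likewise expects ssl_key_passphrase rather than key_passpharse.

```
output.logstash:
  hosts: ["1.1.1.2:5043"]
  ssl.certificate_authorities: ["/home/ops/client_cert/cacert.pem"]
  ssl.certificate: "/home/ops/client_cert/newcert.pem"
  ssl.key: "/home/ops/client_cert/newkey.pkcs8"
  ssl.key_passphrase: "1234"
```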

Thank you @tudor

I then set the hostname of the Logstash server to logstash01 and added the line "1.1.1.2 logstash01" to the hosts file on the Filebeat server,

and I set the certificate common name for the Logstash server to logstash01.

Below is the Filebeat debug log:
2017/04/18 03:21:42.864263 metrics.go:39: INFO Non-zero metrics in the last 30s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 libbeat.publisher.published_events=1040 registrar.writes=1
2017/04/18 03:21:44.151372 single.go:140: ERR Connecting error publishing events (retrying): x509: cannot validate certificate for 1.1.1.2 because it doesn't contain any IP SANs

Also, how can I ingest historical log files that were not indexed on time? Our cluster was down for two weeks, and I want to backfill those historical log files into the ELK cluster.

With SSL/TLS, the server's certificate holds the IP address or domain name the certificate was signed for. The client compares the certificate's IP/domain with the address the Beat uses to connect to Logstash. Having a hostname in the certificate but configuring an IP address in Beats won't work.
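Concretely, since the certificate's common name is logstash01, the Filebeat side should connect by that name rather than by IP. A sketch, assuming the "1.1.1.2 logstash01" hosts-file entry from above is in place so the name resolves:

```
output.logstash:
  # connect by the name the certificate was issued for,
  # not by IP, so hostname verification can succeed
  hosts: ["logstash01:5043"]
```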

Why SSLv3? It is badly outdated. I'd recommend TLS 1.2 (if supported by the Java version used to run Logstash).
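In Filebeat 5.x the protocol list lives under the ssl.* settings rather than as a top-level supported_protocols key, so pinning TLS 1.2 would look roughly like this (a sketch based on the Filebeat reference, not tested against this setup):

```
output.logstash:
  ssl.supported_protocols: [TLSv1.2]
```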

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.