@Ben96
Thank you for your replies.
I'm posting this in hopes it helps someone else down the line.
After working with our support, the solution is to NOT specify the CA in the pipeline input and to NOT use an https:// URL in the Filebeat output. Communication from Beats to Logstash is not HTTPS; it's the Lumberjack-derived Beats protocol running over TLS, and specifying a CA on the beats input only matters when Logstash should verify client certificates, which isn't needed here.
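To make the distinction concrete, here's a minimal Python sketch (using only the standard ssl module, not part of the Filebeat config) of what the client side of this setup does: it builds a TLS context that trusts a CA and optionally presents a client certificate, then performs a bare TLS handshake on the TCP port. There is no HTTP layer involved. The helper names and paths are hypothetical.

```python
import socket
import ssl

def make_beats_client_context(cafile=None, certfile=None, keyfile=None):
    # Rough analogue of Filebeat's output.logstash.ssl settings:
    # trust the given CA (or the system store if None) and, if a
    # cert/key pair is given, present it as a client certificate.
    ctx = ssl.create_default_context(cafile=cafile)
    if certfile and keyfile:
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

def probe_beats_port(host, port, ctx):
    # Perform only a TLS handshake over a raw TCP socket -- no HTTP
    # request or response, which is the point: Beats traffic is framed
    # data over TLS, not HTTPS.
    with socket.create_connection((host, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"
```

Calling `probe_beats_port("host.com", 5044, make_beats_client_context())` against a TLS-enabled beats input should complete the handshake and return the negotiated TLS version, whereas pointing an HTTPS client at that port would fail.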
Here's the working config:
filebeat.yml:
output.logstash:
  hosts: ["host.com:5044"]
  ssl:
    certificate_authorities: ["/etc/pki/tls/CA/rootca.pem"]
    certificate: "/etc/pki/tls/certs/certificate.crt"
    key: "/etc/pki/tls/certs/key.key"
pipeline:
input {
  beats {
    host => "x.x.x.x"
    port => "5044"
    ssl => true
    ssl_certificate => "E:/Certs/certificate.crt"
    ssl_key => "E:/Certs/key.key"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["https://host.com:9200"]
    user => "logstash_internal"
    password => "secret"
    ssl => true
    cacert => "E:\Certs\rootca.pem"
  }
}
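For the server side of the same idea, here's a hedged Python sketch (standard ssl module only, hypothetical helper) of what the beats input above is doing: present a certificate and key, but do NOT verify client certificates. That is exactly why no CA setting is needed in the input.

```python
import ssl

def make_beats_input_context(certfile=None, keyfile=None):
    # Server-side analogue of the beats input: serve TLS with our own
    # certificate, but do not request or verify client certificates --
    # hence no certificate_authorities setting is required here.
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    if certfile and keyfile:
        ctx.load_cert_chain(certfile, keyfile)  # requires real files
    ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

If you later want Logstash to authenticate Beats clients by certificate, that is when a CA (and a verify mode requiring client certs) would come back into the picture on the input side.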
We now have SSL enabled all the way through: from Beats to Logstash to Elasticsearch (and from Kibana to Elasticsearch as well).