Filebeat load balancing failing because of SSL certificate

I set up SSL for the Filebeat-to-Logstash connection following this link.

I am creating an SSL key and certificate on the Logstash server and copying the certificate to the Filebeat client box, then using that certificate to initiate a secure connection from Filebeat to Logstash. That works well. My issue is that when I add a second Logstash server and enable load balancing in Filebeat, it fails because the certificate does not validate the new Logstash instance. So I want to know how to create secure SSL-based connections to two Logstash instances with load balancing in Filebeat.

How can I use multiple certificates to connect to multiple Logstash instances from Filebeat with load balancing?

Do you use a different certificate for each LS server? Can't you share the same one?

Certificates have an IP/domain name attached. By default every LS server needs its own certificate. That is, either use a CA and put the CA's public certificate in Filebeat, or make all LS server certificates available to Beats.
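A minimal sketch of the two options, assuming a Filebeat 1.x `logstash` output (hostnames and paths below are placeholders):

```yaml
output:
  logstash:
    hosts: ["logstash1:5044", "logstash2:5044"]
    loadbalance: true
    tls:
      # Either list every server's certificate...
      certificate_authorities: ["/etc/pki/logstash1.crt", "/etc/pki/logstash2.crt"]
      # ...or point at the one CA certificate that signed all of them:
      # certificate_authorities: ["/etc/pki/ca.crt"]
```

The CA approach scales better: adding a third Logstash server signed by the same CA needs no change on the Beats side.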

For testing (not secure), one can disable the IP/domain-name check when validating certificates (not recommended in production). See the TLS output option insecure.
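For example, a test-only sketch (again assuming a Filebeat 1.x `logstash` output; host and path are placeholders):

```yaml
output:
  logstash:
    hosts: ["logstash1:5044"]
    tls:
      certificate_authorities: ["/etc/pki/logstash1.crt"]
      # Testing only: skips the hostname/IP check during
      # certificate validation. Do not use in production.
      insecure: true
```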

But I need to send logs to multiple Logstash instances from Filebeat in a secure way. I tried listing multiple certificates in certificate_authorities, but it didn't help: Filebeat ships to only one of them even with loadbalance : true.
My configuration file follows:

    filebeat:
      prospectors:
        -
          paths:
            - /path/logs
          fields:
            hostip: "ipaddress"
          document_type: doctype
    output:
      logstash:
        hosts: ["host1:port","host2:port"]
        loadbalance : true
        tls:
          certificate_authorities: ["certificate1.crt","certificate2.crt"]
    logging:
      to_syslog: false
      to_files: true
      files:
        path: /var/log/filebeat
        name: filebeat.log
        rotateeverybytes: 10485760
        keepfiles: 7
      level: debug

Can you explain? Is it a) Filebeat always ships to host1 and never host2, and after a restart ships to host2 but never host1? Or is it more like b) Filebeat sends to host1, pauses, and then sends to host2?

Given that Filebeat can connect to both Logstash instances, plus your config, I'd assume it's case b).

By default Filebeat has to wait for an ACK from Logstash before the spooler can push another batch. The default spool_size is 2048, and bulk_max_size in the logstash output is 2048 too. Setting spool_size = Hosts * Workers(=1) * bulk_max_size ensures the batch generated by the spooler is split into Hosts * Workers sub-batches that get pushed load-balanced. In this mode Filebeat pushes to all Logstash instances in lock-step (the next batch only after all sub-batches have been ACKed), which keeps memory usage somewhat low. Alternatively, one can enable publish_async (no change in spool_size required) to prepare more batches in the spooler to be sent; they are automatically load-balanced.
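Applying that formula to a 2-host setup might look like this sketch (Filebeat 1.x option names; hosts are placeholders):

```yaml
# spool_size = Hosts(2) * Workers(1) * bulk_max_size(2048) = 4096,
# so each spooler batch splits into one sub-batch per host.
filebeat:
  spool_size: 4096
output:
  logstash:
    hosts: ["logstash1:5044", "logstash2:5044"]
    loadbalance: true
    worker: 1
    bulk_max_size: 2048
```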

In the current release there is some risk of publish_async deadlocking if no Logstash endpoint is available for some unspecified amount of time (it happens at random). Related issue. It's already fixed and will be available in the next release (I have no date yet, though).

It is case a) for me. It always pushes to host2 initially, and when I stopped the Logstash instance on host2 and restarted it, Filebeat started pushing to host1. I tried it vice versa too. Basically it can push to both instances, but not together; it's always pushing to only one of them.

Which Filebeat version?

Have you checked via netstat whether Beats has 1 or 2 network connections? Anything in the logs about failed send attempts?

Your config file uses loadbalance : true instead of loadbalance: true. Maybe there is some problem with the YAML parser. Have you also checked that your config contains no tabs (YAML is somewhat sensitive)?

I checked with a YAML parser using loadbalance: true and there is no config error; usually if there is a YAML issue, Filebeat fails with a YAML error, right? I can also see via netstat that Filebeat makes connections to both hosts. However, it always sends to only one box, and there are no errors in the logs.

Thanks for the response steffens, I found the cause.
Filebeat sends logs in batches of 2048, and the file I gave it has fewer log lines than that, so it made one chunk and sent it to one of the hosts. When I fed it more logs, it distributed chunks to both hosts.
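The arithmetic behind that observation can be sketched in a few lines of Python (an illustration only; real Filebeat batching is more involved, and `spool_size`/host count below are the defaults discussed above):

```python
import math

def batches(num_events, spool_size=2048):
    """Number of spooler batches produced for num_events events."""
    return math.ceil(num_events / spool_size)

def hosts_used(num_events, spool_size=2048, num_hosts=2):
    """With loadbalance enabled, each batch goes to one host, so at
    most min(batches, num_hosts) hosts receive any data."""
    return min(batches(num_events, spool_size), num_hosts)

# A file with fewer than 2048 lines yields a single batch -> one host:
print(hosts_used(1500))   # 1
# More events produce multiple batches that spread over both hosts:
print(hosts_used(5000))   # 2  (3 batches over 2 hosts)
```

This matches the behavior in the thread: a small test file never exercises the second host at all.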


This topic was automatically closed after 21 days. New replies are no longer allowed.