HAProxy conf for Logstash

Hello,
I'm contacting you because I can't get HAProxy to load balance traffic across my Logstash nodes properly.
Only 2 of my 4 Logstash instances ever receive any real share of the flow; I would like the traffic to be distributed evenly across all four Logstash nodes.

HAproxy conf

global
  log /dev/log local0

defaults
  timeout connect 3s
  timeout client 5s
  timeout server 5s
  log global
  mode tcp

  option tcplog
#https://www.haproxy.com/documentation/hapee/latest/onepage/#option%20tcplog
  
  balance roundrobin
#https://www.haproxy.com/documentation/hapee/latest/onepage/#balance

frontend logstash
  bind *:5044
  default_backend logstash_backend

backend logstash_backend
  server logstash01 logstash01.###:5044 check
  server logstash02 logstash02.###:5044 check
  server logstash03 logstash03.###:5044 check
  server logstash04 logstash04.###:5044 check

frontend elasticsearch
  bind *:9200
  default_backend elasticsearch_backend

backend elasticsearch_backend
  server elastic02 elastic02.###:9200 check
  server elastic03 elastic03.###:9200 check
  server elastic04 elastic04.###:9200 check
  server elastic05 elastic05.###:9200 check
  server elastic06 elastic06.###:9200 check

frontend elasticsearch2
  bind *:9300
  default_backend elasticsearch_backend2

backend elasticsearch_backend2
  server elastic02 elastic02.###:9300 check
  server elastic03 elastic03.###:9300 check
  server elastic04 elastic04.###:9300 check
  server elastic05 elastic05.###:9300 check
  server elastic06 elastic06.###:9300 check
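
To see how HAProxy is actually spreading incoming connections per backend server, I could also enable the built-in stats page. This is just an untested sketch (the 8404 port and the /stats URI are arbitrary placeholders, not part of my running config):

listen stats
  bind *:8404
  mode http
  stats enable
  stats uri /stats
  stats refresh 10s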

For example, I left it running overnight and ended up with a totally uneven distribution of ingested events:

node_name events ingested
logstash01 171.6m
logstash02 183m
logstash03 4.8m
logstash04 3.9m
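
Each node's own counter can also be spot-checked with the Logstash node stats API (this assumes the default HTTP API port 9600 is enabled on the Logstash hosts):

curl -s 'http://logstash01.###:9600/_node/stats/events?pretty'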

Logstash pipeline

input {
  beats {
    port => 5044
    ssl => false
  }
}
###etc

Filebeat conf

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The HAproxy loadbalancers
  hosts: ["beats.###:5044", "ansible.###:5044"]
  loadbalance: true
  worker: 2
  compression_level: 0
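
If I understand correctly, Beats keeps its connections to Logstash open, so HAProxy's roundrobin only applies when a new TCP connection is opened and long-lived connections stay pinned to whichever backends they first landed on. One idea would be to have Filebeat re-open its connections periodically with the ttl option; this is only a rough sketch (the 60s value is arbitrary, and as far as I know ttl is only honoured when pipelining is disabled):

output.logstash:
  hosts: ["beats.###:5044", "ansible.###:5044"]
  loadbalance: true
  worker: 2
  compression_level: 0
  # assumption: force new connections every 60s so roundrobin can
  # redistribute them; ttl requires pipelining to be turned off
  ttl: 60s
  pipelining: 0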

ELK stack 8.3.0
