Load-balanced access to Fleet Servers

I have set up 2 Fleet Servers successfully and want all additional Elastic Agents to connect to them via a load-balanced IP (HAProxy with the "round robin" option).
Whenever I configure the load-balanced URL instead of the 2 Fleet Server URLs in the Fleet settings in Kibana, the checkin API call from the Elastic Agent fails with the following error.
If I use 2 entries with the 2 Fleet Server URLs, everything is fine.

{"log.level":"error","@timestamp":"2022-03-21T12:15:27.195Z","log.origin":{"file.name":"fleet/fleet_gateway.go","file.line":205},"message":"Could not communicate with fleet-server Checking API will retry, error: fail to checkin to fleet-server: Post \"https://<my loadbalanced IP>:8220/api/fleet/agents/87908f75-c72f-4a73-9287-bea484589399/checkin?\": EOF","ecs.version":"1.6.0"}
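For reference, a minimal HAProxy sketch of what I am trying to achieve. The backend IPs are placeholders for my two Fleet Servers. Since Fleet Server terminates TLS itself, I assume HAProxy has to run in TCP (passthrough) mode rather than HTTP mode, and since the agent checkin is a long-poll request, the server timeout presumably needs to exceed the poll interval (both assumptions on my part):

```haproxy
# Hypothetical sketch: TLS passthrough to two Fleet Servers on port 8220.
# mode tcp avoids terminating TLS in HAProxy; terminating it in http mode
# without re-encryption could explain an EOF like the one in the agent log.
frontend fleet_in
    bind *:8220
    mode tcp
    option tcplog
    default_backend fleet_servers

backend fleet_servers
    mode tcp
    balance roundrobin
    timeout server 10m          # checkin long-poll may exceed HAProxy defaults
    server fleet1 10.0.0.11:8220 check   # placeholder IP
    server fleet2 10.0.0.12:8220 check   # placeholder IP
```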

I found this discussion, but it is not clear which scenario is currently supported (I am using version 7.16.2). I am trying to set up scenario 1.

Can anybody tell me whether it is possible to use just a single URL to reach all Fleet Servers?
It seems that either this is not possible at the moment, or every Elastic Agent always needs to connect to the same Fleet Server?

Thanks in advance!


Hm... the error doesn't look like something is wrong with Fleet Server, but rather with your load-balancing solution. Could you please try configuring Nginx instead of HAProxy (temporarily) and make sure that the load-balanced Fleet Servers communicate well with the agent?

When I call the status API via the load-balanced address on HAProxy, I get a healthy state.
I would assume this is enough to verify the connection, isn't it?

curl https://loadbalancedip:8220/api/status
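Note that `/api/status` is a short request, while the call that fails in the agent log is a long-lived checkin POST, so a healthy status response may not exercise the same path through the proxy. A verbose curl that approximates the failing request could show whether the proxy closes the connection mid-request (the agent ID below is copied from the log above; without an agent API key the request will be rejected, but the connection behavior is still visible; `-k` assumes a self-signed certificate):

```shell
# Short request (reported working): status endpoint via the load balancer
curl -sk https://loadbalancedip:8220/api/status

# Approximate the long checkin request; -v shows whether the proxy
# drops the TLS connection before the server responds
curl -vk -X POST \
  "https://loadbalancedip:8220/api/fleet/agents/87908f75-c72f-4a73-9287-bea484589399/checkin" \
  -H "Content-Type: application/json" \
  --max-time 330
```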


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.