Fleet Policy with a different IP from the server

Hello guys!

My Fleet Server has a private IP, but I need to collect logs from a VM that only has a public IP. Working with pfSense, I created a port forward to the Fleet Server's private IP.

The issue is that Elastic Agent enrollment works fine, but when it is time to collect logs, the agent tries to send them to the private IP, which is unreachable from that VM.

Actions: error validating Fleet client config: validating fleet client config: fail to communicate with Fleet Server API client hosts: all hosts failed: 1 error occurred: * requester 0/1 to host https://10.2.x.x:8220/ errored: Get "https://10.2.x.x:8220/api/status?": context deadline exceeded 

How can I create an agent policy that configures the public IP?
Or do I have to configure the host as a standalone agent?

Looking forward to your replies.

Thanks in advance!


In the host URLs of your Fleet Server host, you will need to add the public endpoint so the agent will also try to check in with Fleet using the public endpoint.

So, you need to edit your Fleet Server host to add it.
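If the Fleet Server host is preconfigured rather than managed in the UI, the same change can be made in kibana.yml. A rough sketch, with placeholder id and IPs (the exact keys can vary by stack version):

```yaml
# kibana.yml — preconfigured Fleet Server host with both endpoints
xpack.fleet.fleetServerHosts:
  - id: default-fleet-server      # placeholder id
    name: Default
    is_default: true
    host_urls:
      - "https://10.2.x.x:8220"   # private IP, reachable inside the network
      - "https://PUBLIC_IP:8220"  # public endpoint, reachable from outside
```

The agent tries each host URL in order until one answers, so listing both lets internal and external agents share one Fleet Server host entry.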

You will probably need to re-enroll your agent for it to get the new settings, as it currently cannot communicate with the Fleet Server to receive the updated policy.
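Before re-enrolling, it can help to confirm which endpoints the agent host can actually reach on port 8220, since "context deadline exceeded" is simply a connection timing out. A minimal sketch (the helper name and placeholder IPs are mine, not part of Fleet):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout.

    A False result after a long wait mirrors the agent's
    "context deadline exceeded" error (the connect timed out or was dropped).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder IPs — substitute your own):
#   is_reachable("10.2.x.x", 8220)   -> can this host reach the private endpoint?
#   is_reachable("PUBLIC_IP", 8220)  -> can it reach the public one?
```

Running this from the agent VM tells you which of the two host URLs the agent will be able to check in through.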

Good morning, Leandro,

Thanks for the reply.

I have now added the public IP to the Fleet Server and re-enrolled the agent.
This is the current error: Actions: error validating Fleet client config: validating fleet client config: fail to communicate with Fleet Server API client hosts: all hosts failed: 2 errors occurred: * requester 0/2 to host https://privateIP:8220/ errored: Get "https://privateIP:8220/api/status?": context deadline exceeded * requester 1/2 to host https://publicIP:8220/ errored: Get "https://publicIP:8220/api/status?": context deadline exceeded.

After a few minutes the state changed from Unhealthy to Healthy, but still no logs are received. I used the same agent policy that I use on several other hosts.

How can I fix this?

Thanks in advance!

Kind regards,

Did you add the public endpoint for your Elasticsearch in the Elasticsearch output configuration as well?

You need a public endpoint for both Fleet Server and Elasticsearch.

This is expected: since it cannot connect to the private IP, it tries the public IP, which worked, given that you said it changed from Unhealthy to Healthy.

Hello Leandro!

Yes, the agent changed to Healthy, but no logs are received on the server.

I'm checking the Fleet settings and noticed that my Elasticsearch output only has the private IP; I couldn't edit it to add the public one the way I did for my Fleet Server.

The red mark is where my public IP is. As you can see, the default output is locked; I tried to add a new one, but how can I tell my agent to use this Elasticsearch output?

Thanks in advance!

Kind regards,

Did you add this output in kibana.yml? I'm not sure why it would be locked unless it was configured in kibana.yml.
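Outputs preconfigured in kibana.yml show as read-only in the Fleet UI, which would explain the lock. Adding the public Elasticsearch endpoint there would look roughly like this sketch (placeholder id and IPs; exact keys can vary by stack version):

```yaml
# kibana.yml — preconfigured Elasticsearch output with both endpoints
xpack.fleet.outputs:
  - id: fleet-default-output      # placeholder id
    name: default
    type: elasticsearch
    is_default: true
    hosts:
      - "https://10.2.x.x:9200"   # private IP, for internal agents
      - "https://PUBLIC_IP:9200"  # public endpoint, for external agents
```

Because this output is marked `is_default: true`, any agent policy that does not override its output will ship data through it, so the external agent picks up the public endpoint without a policy change.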

Can you share your kibana.yml file?