Syslog to Elastic Stack on Kubernetes/OpenShift

We are almost in production and have one final function left: Fleet and Elastic Agent deployment, so we can receive syslog from external sources, for example Cisco FTD, Palo Alto, etc.

I have followed the documentation to implement Elastic Agents managed by Fleet, and it works perfectly, but it only covers monitoring the namespace/Kubernetes cluster itself.

We also want to be able to receive external traffic from network devices, but I can't find an example in the documentation, nor whether somebody else has done this before.

So the ideal flow would be:

External syslog source -> [openshift cluster / elastic-agent] -> [openshift cluster / elasticsearch]

Fleet and the Elastic Agent are deployed in the same namespace and can connect to each other without problems.

Could somebody please help out on this matter? Any examples, links, or pointers to help us move forward and implement this would be appreciated.

Thank you!

Didn't the documentation help?

From the Elastic Agent side you just need to add the integration and configure a TCP or UDP port to receive the logs; you will use the same port in your device configuration.

What you need to make sure is that the container running your Elastic Agent can receive connections from outside on that port, but this is a networking issue, not an Elastic one.

Hi @leandrojmp

No, the documentation didn't actually help. I can't find any examples of external sources sending logs to an Elastic Agent running in a pod/container on OpenShift or Kubernetes.

I believe it's an Elastic issue, since I have seen the same question in multiple places, and this is one of the greatest functions Elastic has, so it would be nice to have at least one example to get your customers up and running.

I have activated the integrations and I'm listening on a specific port right now (UDP 9001).

So the solution to this would be to:

  • Create a Service

  • Expose it with a LoadBalancer, or I believe with a NodePort then? 9001:9001
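For reference, a minimal sketch of such a Service of type NodePort might look like the following. The namespace, name, and selector labels are assumptions; match them to your actual elastic-agent deployment. Note that a 9001:9001 mapping won't work with a plain NodePort, because NodePorts must fall in the cluster's node port range (30000-32767 by default):

```yaml
# Hypothetical Service exposing the elastic-agent pods' UDP syslog port.
apiVersion: v1
kind: Service
metadata:
  name: elastic-agent-syslog
  namespace: elastic            # assumption: adjust to your namespace
spec:
  type: NodePort
  selector:
    app: elastic-agent          # assumption: match your agent pods' labels
  ports:
    - name: syslog-udp
      protocol: UDP
      port: 9001
      targetPort: 9001
      nodePort: 30901           # must be within 30000-32767 by default
```

External devices would then send syslog to any node's IP on port 30901, not 9001.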

Thank you.

I do not work for Elastic, I just volunteer on this forum, and while I agree that the documentation does not always help and could have more examples, I don't think this specific issue is an Elastic issue.

Basically, to receive external data an agent needs to be able to listen on a port, so while configuring the agent you need to set the host so that it listens on all available IPs, and choose any port.
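With a Fleet-managed agent this is set in the integration's settings (listen address and port). Purely for illustration, a rough sketch of the equivalent UDP input in a standalone agent policy could look like this; the id and dataset name are assumptions, not something from this thread:

```yaml
# Sketch of a standalone Elastic Agent policy fragment with a UDP input.
inputs:
  - id: cisco-ftd-syslog        # illustrative id
    type: udp
    data_stream:
      namespace: default
    streams:
      - data_stream:
          dataset: cisco_ftd.log
        host: "0.0.0.0:9001"    # listen on all interfaces, not just localhost
```

The important part is the `0.0.0.0` bind address: binding to localhost would make the port unreachable from outside the pod.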

For the agent it doesn't matter whether you are running it on a VM, a bare-metal server, or a container; you just need to make sure that the external data sources can reach that IP address and port, and this is not an Elastic issue, it is an infrastructure and network issue.

In this case, if you already have the agent listening on the container IP address on port 9001, then there is nothing else to configure on the Elastic side; everything else is a network/infrastructure issue.

I do not use OpenShift, so I have no idea how you would configure this there, but you basically need to forward connections reaching your OpenShift cluster to the container running the Elastic Agent on that specific port.

Hi Leandrojmp,

Thank you for the explanation regarding Elastic Agent.

I took what you said and tried to load balance the elastic-agent pod with MetalLB to expose the pods on ports 9001 & 9003 (the ports Cisco FTD and Palo Alto are listening on).
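A sketch of what such a MetalLB-backed Service could look like, exposing both UDP ports; the namespace, Service name, and selector are assumptions to be adapted to the actual deployment:

```yaml
# Hypothetical LoadBalancer Service; MetalLB assigns the external IP.
apiVersion: v1
kind: Service
metadata:
  name: elastic-agent-syslog
  namespace: elastic            # assumption: adjust to your namespace
spec:
  type: LoadBalancer
  selector:
    app: elastic-agent          # assumption: match your agent pods' labels
  ports:
    - name: ciscoftd-udp
      protocol: UDP
      port: 9001
      targetPort: 9001
    - name: paloalto-udp
      protocol: UDP
      port: 9003
      targetPort: 9003
```

The network devices would then be pointed at the Service's external IP on ports 9001/9003.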

I created a Service, and it seems like the elastic-agent is now exposed at a specific IP address. How can I confirm that the agent is receiving any logs? Will the data view in Kibana automatically get populated with logs?


The logs-* data view will show the data, and you can check whether the data streams were created by looking at Index Management > Data Streams in Kibana.
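You can also check from Kibana Dev Tools (or curl against Elasticsearch) whether the data streams exist and are receiving documents; the exact data stream name below is an assumption based on the Cisco FTD integration's default dataset:

```
GET _data_stream/logs-*

GET logs-cisco_ftd.log-default/_count
```

If the `_count` value grows after a device sends traffic, the agent is receiving and shipping the logs.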


Hi @leandrojmp
I exposed the elastic-agents as type: LoadBalancer and we are getting the logs into Elasticsearch.

However, I believe it's more secure to use an ingress controller, for example NGINX Ingress.

I will try this out and report back on how to receive logs into Elastic Agent on OpenShift/k8s environments.
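One caveat worth noting for anyone following this route: Ingress resources only describe HTTP(S) routing, so raw UDP syslog cannot go through a normal Ingress rule. The ingress-nginx controller instead exposes TCP/UDP services via a ConfigMap of port mappings (enabled with its --udp-services-configmap flag). A sketch, where the namespace and Service name are assumptions:

```yaml
# Hypothetical udp-services ConfigMap for ingress-nginx.
# Format is "external port": "namespace/service:port".
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "9001": "elastic/elastic-agent-syslog:9001"
  "9003": "elastic/elastic-agent-syslog:9003"
```

The controller's own Service then needs to expose ports 9001/9003 as well for the traffic to reach it.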

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.