Hi,
We are almost in production and have one final piece left: Fleet and Elastic Agent deployment so we can receive syslog from external sources, for example Cisco FTD, Palo Alto, etc.
I have followed the documentation to implement Elastic Agents managed by Fleet, and it works perfectly, but it only covers monitoring the namespace/Kubernetes cluster itself.
We also want to receive external traffic from network devices, but I can't find an example in the documentation. Has anybody done this before?
On the Elastic Agent side you just need to add the integration and configure a TCP or UDP port to receive the logs; you will use the same port in your device configuration.
What you need to make sure of is that the container running your Elastic Agent can receive connections from outside on that port, but that is a network issue, not an Elastic one.
No, the documentation didn't help, actually. I can't find any examples of external sources sending logs to an Elastic Agent running in a pod/container on OpenShift or Kubernetes.
I believe it is an Elastic issue, since I have seen the same question in multiple places, and this is one of the greatest features Elastic has, so it would be nice to have at least one example to get your customers up and running.
I have activated the integrations and I'm listening on a specific port right now (UDP 9001).
So the solution to this would be to create a Service of type LoadBalancer (or NodePort, I believe?) mapping 9001:9001?
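Something like this is what I have in mind (untested sketch; the namespace and selector label are placeholders and must match however the elastic-agent pods are actually labeled):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elastic-agent-syslog
  namespace: kube-system        # placeholder: wherever the elastic-agent pods run
spec:
  type: LoadBalancer            # or NodePort if no external load balancer exists
  selector:
    app: elastic-agent          # placeholder: must match the agent pod labels
  ports:
    - name: cisco-ftd
      protocol: UDP             # syslog here is UDP, so the protocol must be set
      port: 9001
      targetPort: 9001
```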
I do not work for Elastic, I just volunteer on this forum, and while I agree that the documentation does not always help and could have more examples, I don't think this specific issue is an Elastic issue.
Basically, to receive external data an agent needs to be able to listen on a port, so while configuring the agent you need to set the host to 0.0.0.0 to make it listen on all available IPs, and choose any port.
For the agent it doesn't matter if you are running it on a VM, a bare-metal server, or a container; you just need to make sure that the external sources of data can reach that IP address and port. That is not an Elastic issue, it is an infrastructure and network issue.
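To illustrate the difference (plain Python, not the agent itself, just an analogy): a UDP socket bound to 127.0.0.1 is only reachable from inside the same host or container, while one bound to 0.0.0.0 accepts traffic on every interface, which is what an input receiving external syslog needs.

```python
import socket

# Bound to loopback only: reachable just from the same host/container.
local_only = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
local_only.bind(("127.0.0.1", 9001))
print(local_only.getsockname())
local_only.close()

# Bound to all interfaces: reachable from outside the host as well,
# which is what the agent's listening input needs for external sources.
all_ifaces = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
all_ifaces.bind(("0.0.0.0", 9001))
print(all_ifaces.getsockname())
all_ifaces.close()
```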
In this case, if you already have the agent listening on the container's IP address on port 9001, then there is nothing else to configure on the Elastic side; everything else is a network/infrastructure issue.
I do not use OpenShift, so I have no idea how you would configure this there, but you basically need to forward connections coming into your OpenShift cluster to the container running the Elastic Agent on that specific port.
Thank you for the explanation regarding Elastic Agent.
I took what you said and tried to load-balance the elastic-agent pods with MetalLB to expose them on ports 9001 and 9003 (the Cisco FTD and Palo Alto inputs listening on 0.0.0.0).
I created a Service, and it looks like the elastic-agent is now exposed at a specific IP address. How can I confirm that the agent is receiving any logs? Will the data view in Kibana automatically get populated?
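In the meantime I'm testing with a quick script that fires a fake syslog line at the exposed port (using 127.0.0.1 here so it is runnable anywhere; in a real test, replace it with the external IP MetalLB assigned to the Service):

```python
import socket

AGENT_HOST = "127.0.0.1"  # placeholder: replace with the MetalLB external IP
AGENT_PORT = 9001

# A minimal RFC 3164-style syslog line; <134> = facility local0, severity info.
message = b"<134>Oct 11 22:14:15 testhost testapp: hello from outside the cluster"

# UDP is fire-and-forget: sendto succeeds even if nothing is listening,
# so this only proves the packet left; Kibana confirms it arrived.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(message, (AGENT_HOST, AGENT_PORT))
sock.close()
print(f"sent {sent} bytes to {AGENT_HOST}:{AGENT_PORT}")
```

If the whole chain works, I'd then expect documents to show up in Discover under the integration's data stream (something like logs-cisco_ftd.log-default, if I understand the naming convention correctly).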