Normally when I do an ELK installation, I ask the firewall administrators to send the logs via port 514/TCP to the server I administer.
On the server, I modify the rsyslog.conf file to open port 514, and at the end I add a line so that the incoming logs are stored in a folder with the format system-xxxxxx/YYYY/MM/DD.
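As a sketch, the rsyslog side of that setup might look something like this (the paths, ruleset condition, and template name here are illustrative placeholders, not the original files):

```conf
# /etc/rsyslog.d/firewalls.conf -- illustrative sketch, not the original config
module(load="imtcp")              # enable TCP syslog reception
input(type="imtcp" port="514")    # listen on 514/TCP

# One directory per sending host, partitioned by year/month/day
template(name="FirewallFile" type="string"
         string="/var/log/firewalls/%hostname%/%$year%/%$month%/%$day%.log")

# Match however your firewalls tag their logs (facility is just an example)
if $syslogfacility-text == "local0" then {
    action(type="omfile" dynaFile="FirewallFile")
    stop
}
```

Because the template uses the `%hostname%` property, each sending firewall naturally gets its own directory tree.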
Then, in a Logstash pipeline configuration file called panos.conf, I specify the path of each of the logs I am receiving. Working this way, I have had no problems.
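A pipeline like the panos.conf described above could look roughly like this (the glob path, hosts, and index name are placeholders):

```conf
# panos.conf -- illustrative sketch
input {
  file {
    path => "/var/log/firewalls/*/*/*/*.log"   # one directory per firewall, per date
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "panos-%{+YYYY.MM.dd}"
  }
}
```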
But in this case the firewall admin insists on sending the logs of all the firewalls through Panorama. The problem is that my server only sees the IP or hostname of Panorama, so I end up with a single .log file that stores the logs from all the sources, and I don't know whether Logstash is able to process that much information.
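One thing worth noting: even when everything arrives from Panorama, the originating firewall is usually recoverable from the PAN-OS message itself, since the forwarded logs are CSV and one of the early fields is the device serial number. A hedged sketch of a Logstash filter that extracts it (the column position should be verified against your PAN-OS version, since field order can change between releases):

```conf
# Illustrative sketch: recover the sending firewall from the PAN-OS CSV payload
filter {
  csv {
    source => "message"
    # In the PAN-OS log format the serial number of the originating
    # firewall is typically the third comma-separated field; check this
    # against your own logs before relying on it.
    columns => ["future_use", "receive_time", "serial_number", "log_type"]
  }
}
```

With the serial number as a field, you could route, tag, or index events per firewall even though they all arrive through one connection.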
What opinions do you have?
What do you recommend?
Are you moving from multiple files to one single file on an existing system, or is this a new system?
Personally, I do not like this approach of receiving with syslog, writing to a file, and then reading from that file. I prefer to send the logs directly from the firewall to Logstash using a UDP or TCP input, as in my experience this uses less I/O and less CPU.
As an example, I have a couple of high-rate network devices where I use the following infrastructure to receive the logs:
Device --> LB --> Logstash (2 nodes) --> Kafka Cluster --> Logstash (N nodes) --> Elasticsearch
But you could use something like this:
Device --> Logstash UDP/TCP --> Elasticsearch
I think that it would depend on the event rate.
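The simpler Device --> Logstash --> Elasticsearch path above could be a single pipeline along these lines (the port, hosts, and index name are placeholders, and whether you need UDP, TCP, or both depends on what the firewall can send):

```conf
# Illustrative sketch of a direct network input pipeline
input {
  udp {
    port => 5514    # placeholder; any free port the firewall can reach
  }
  tcp {
    port => 5514
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "firewall-%{+YYYY.MM.dd}"
  }
}
```

This skips the write-to-disk-then-tail step entirely, which is where the I/O and CPU savings come from.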
I had some issues in the past where Logstash could not read the file fast enough, and then the file rotated and some events were lost or duplicated.
I stopped using Logstash to read files a couple of years ago; to read files I now use Filebeat in some cases, or Vector (from Datadog) in other cases.
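If you do end up reading files, the Filebeat side mentioned above is a small YAML config along these lines (the path and the Logstash endpoint are placeholders):

```yaml
# filebeat.yml -- illustrative sketch
filebeat.inputs:
  - type: log          # 'filestream' in newer Filebeat versions
    paths:
      - /var/log/firewalls/*/*/*/*.log

output.logstash:
  hosts: ["logstash.example.com:5044"]
```

Filebeat keeps its own registry of read offsets, which is what makes it more robust than a plain file input when files rotate.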