Difference between the number of FortiGate firewall logs on Logstash and Elastic Agent managed by Fleet

Hi there, we have an Elasticsearch cluster and have been shipping firewall (FortiGate) logs to Logstash. Everything is going well and we have a huge number of logs, about 3.5M logs every 15 minutes.
Recently we decided to upgrade our cluster version and change our log monitoring strategy to Elastic Agent with a Fleet Server, in order to use new features including integrations.
So we set up another cluster on Docker with compose files:
We have 3 Elasticsearch nodes on 3 separate VMs,
1 VM is used for Kibana and the Fleet Server together,
1 VM for an Elastic Agent acting as a syslog server.

FortiGate logs are shipped over a UDP connection to the syslog server and, through a policy with the Fortinet FortiGate Logs integration, are indexed into Elasticsearch.
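
For context, the UDP listener the integration sets up on the syslog VM looks roughly like the sketch below. This is only an illustration based on Filebeat's udp input options; the port and buffer values are placeholders, not taken from my actual policy.

    # Sketch of the agent's UDP input for the FortiGate integration
    # (option names follow Filebeat's udp input; port and sizes are placeholder assumptions)
    - type: udp
      host: "0.0.0.0:5140"       # listening address/port the firewall sends syslog to (placeholder)
      max_message_size: 50KiB    # larger messages may be cut off
      read_buffer: 100MiB        # UDP socket read buffer; the OS default is used if unset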

The problem is: the number of logs in the first scenario (using Logstash) in 15 minutes is about 3.5M,
but the same logs from the same firewall in the second scenario (Fleet Server) amount to less than half of that, about 1M.
I checked resource usage on the nodes and it is normal.
I can't figure out what happened to the other half of the logs. Where are they being dropped?
I tested another index template to skip the pipeline processors and get the original logs, but the problem persists. Could you please guide me in solving this problem?

Which node do you mean by this? The Elasticsearch nodes or the Elastic Agent VM? You need to check the resources of the Elastic Agent VM.

Did you check the logs for the Elastic Agent?

It is not clear what you did here, can you provide more details? By skipping the integration pipeline, do you get all the logs?

Also, what are the resources for both your Logstash VM and your Elastic Agent VM? On Logstash are you using persistent queues?

I checked the resources on both VMs (Elasticsearch and Elastic Agent).
The Elastic Agent VM has a 10-core CPU / 16 GB of memory.

I haven't checked the logs for the Elastic Agent; I don't know how to check the logs on the Elastic Agent VM directly.

The Fortinet FortiGate Logs integration has a pipeline that includes several processors; I guessed that maybe some logs were dropped during the ingest pipeline and mapping processing.



The resources of my Logstash VM are a 12-core CPU / 32 GB of memory.

Also, on Logstash I just set the queue size in the Logstash pipeline:

input {
  udp {
    port => 5516
    queue_size => 16384
  }
}

It's intriguing to see a significant drop in logs when transitioning to Fleet Server for Elasticsearch. To troubleshoot, you might want to check whether any log processing or filtering is occurring at the Fleet Server level, impacting the log volume. Additionally, ensure your Elasticsearch mappings and indexing configurations are consistent between the two setups for accurate log ingestion.

But is your Logstash pipeline configured to use persistent queues? Please share your logstash.yml and pipelines.yml.
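
For reference, persistent queues are enabled in logstash.yml with something like the sketch below (the sizing values and path are only placeholders, not recommendations):

    # logstash.yml - minimal sketch of enabling the persistent queue
    queue.type: persisted                  # default is "memory"
    queue.max_bytes: 4gb                   # disk space the queue may use per pipeline (placeholder)
    path.queue: /var/lib/logstash/queue    # optional; defaults to a directory under path.data

The udp input's queue_size is only an in-memory buffer of unprocessed packets; a persistent queue can also absorb bursts on disk (up to queue.max_bytes) before backpressure reaches the input.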

But did you get more logs after changing the integration or not? I do not use this integration; I prefer to use Logstash for high-rate logs like firewalls, but it may be the case that the ingest pipeline is dropping some logs because of errors.

Is it a Linux VM? If so, you can go to /opt/Elastic/Agent/data/elastic-agent-SOME_HASH/logs/; there you will have the logs for the Elastic Agent.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.