Wrong event start time with Fortinet module

Hi everyone,

I'm using Filebeat with its Fortinet module. In Kibana, I noticed that the event.start field is completely wrong. It contains values such as "1970-01-19T12:01:51.846Z".

I guess there is some parsing problem. Fortigate itself stores this value as an epoch timestamp, so something like "1598520040".

Can someone tell me how to fix that?

Hi @anon56147639,

Can you confirm which version of Filebeat you are running? The first iteration of our Fortinet module only supported epoch timestamps in milliseconds, but the latest version (7.9) also supports epoch seconds. It seems that Fortinet changed the format at some point on their side.

If you're already running 7.9, let me know and we can investigate further.
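
That would also explain the 1970 dates: an epoch-seconds value like 1598520040 read as milliseconds lands roughly 18.5 days after 1970-01-01, i.e. around 1970-01-19. As a rough illustration (not the module's actual pipeline, and the field name below is just a placeholder), the difference comes down to whether the ingest date processor uses "UNIX" (seconds) or "UNIX_MS" (milliseconds). You can try it with the simulate API:

    # Parse the FortiGate eventtime as epoch *seconds* ("UNIX").
    # With "UNIX_MS" instead, the same value comes out as a 1970 date.
    curl -s -XPOST 'localhost:9200/_ingest/pipeline/_simulate?pretty' \
      -H 'Content-Type: application/json' -d '{
      "pipeline": {
        "processors": [
          {
            "date": {
              "field": "eventtime",
              "target_field": "event.start",
              "formats": ["UNIX"]
            }
          }
        ]
      },
      "docs": [
        { "_source": { "eventtime": "1598520040" } }
      ]
    }'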

Hi @jamie.hynds,
thank you for your response. I was using 7.8.1. I just upgraded to 7.9, but it still shows the wrong event start time.
So I tried to load the ingest pipelines into ES again and ran into the following error message:

    root@graylog:~# filebeat setup --pipelines --modules fortinet
    Exiting: 1 error: Error loading pipeline for fileset fortinet/firewall: couldn't load pipeline: couldn't load json.
    Error: 400 Bad Request: {"error":{"root_cause":[{"type":"parse_exception","reason":"processor [set] doesn't support one or more provided configuration parameters [ignore_empty_value]","processor_type":"set"}],"type":"parse_exception","reason":"processor [set] doesn't support one or more provided configuration parameters [ignore_empty_value]","processor_type":"set"},"status":400}.
    Response body: {"error":{"root_cause":[{"type":"parse_exception","reason":"processor [set] doesn't support one or more provided configuration parameters [ignore_empty_value]","processor_type":"set"}],"type":"parse_exception","reason":"processor [set] doesn't support one or more provided configuration parameters [ignore_empty_value]","processor_type":"set"},"status":400}

I'm using the following setup: Filebeat->Logstash->ES.
Do you have any idea how I could solve this?

The error is likely caused by a version mismatch between Filebeat and Elasticsearch: the ignore_empty_value option of the set processor was only added in Elasticsearch 7.9, so an older cluster rejects the new pipeline. When you upgraded to 7.9, did that include Elasticsearch or just Filebeat? If you only upgraded Filebeat, could you upgrade Elasticsearch to 7.9 as well - that should resolve the issue.
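
A quick way to confirm what each side is running (adjust the host/port and add credentials if your cluster is secured):

    # Filebeat's own version
    filebeat version

    # Elasticsearch version - look at version.number in the response
    curl -s 'http://localhost:9200/?pretty'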

Hi @jamie.hynds,

I have now updated ES as well and loaded the new ingest pipelines successfully, but now I'm facing a dissect_parsing_error in Kibana. Basically, the whole log line ends up in the event.original field and log.flags = dissect_parsing_error.

It looks like this:

    "event": {
      "original": "<189>date=2020-09-11 time=12:05:24 devname=\"Forti1\" devid=\"FG1K5D3I14802977\" logid=\"0000000013\" type=\"traffic\" subtype=\"forward\" level=\"notice\" vd=\"Core\" eventtime=1599818724 srcip=10.30.9.150 srcport=34063 srcintf=\"port18\" srcintfrole=\"undefined\" dstip=10.27.3.102 dstport=636 dstintf=\"port17\" dstintfrole=\"undefined\" poluuid=\"aaf93a00-d578-51e5-8fe3-2ad49b5f1337\" sessionid=1419306619 proto=6 action=\"server-rst\" policyid=32 policytype=\"policy\" service=\"tcp_636\" dstcountry=\"Reserved\" srccountry=\"Reserved\" trandisp=\"noop\" duration=6 sentbyte=1448 rcvdbyte=1048 sentpkt=14 rcvdpkt=12 appcat=\"unscanned\"",

Any ideas?

Hello @anon56147639

Would you be able to tell me the exact error you are getting? Do the events have an "error.message" field? On parsing errors we always log to that field, which would tell us a bit more about what went wrong.
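One quick way to check from the command line, for example (the index pattern below is just a guess at whatever you write to):

    # Show a few events that carry a parse error
    curl -s -XGET 'localhost:9200/filebeat-*/_search?pretty' \
      -H 'Content-Type: application/json' -d '{
      "query": { "exists": { "field": "error.message" } },
      "_source": ["error.message", "event.original"],
      "size": 3
    }'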

If not, are you modifying the event in any way in Logstash before sending it on, and is Logstash configured to send events to the new pipeline?
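For the latter, the Logstash elasticsearch output needs to forward the pipeline name that Filebeat puts into @metadata - roughly like this (hosts and index below are placeholders for your own setup):

    output {
      if [@metadata][pipeline] {
        elasticsearch {
          hosts => ["http://localhost:9200"]
          manage_template => false
          index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
          # Hand the event to the module's ingest pipeline in Elasticsearch
          pipeline => "%{[@metadata][pipeline]}"
        }
      } else {
        elasticsearch {
          hosts => ["http://localhost:9200"]
          manage_template => false
          index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
        }
      }
    }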

Hi @Marius_Iversen,

There was no error.message field, and I didn't modify the data in Logstash.

But I have now gone back to v7.8.1 because I had no parsing issues with that version apart from the wrong event.start I mentioned in the opening post.
I guess I will just leave it like that for now.
