I'm using Filebeat with its Fortinet module. In Kibana, I noticed that the field event.start is completely wrong. It contains values such as "1970-01-19T12:01:51.846Z".
I guess there is some parsing problem. FortiGate itself stores this value as an epoch timestamp, so something like "1598520040".
Can you confirm which version of Filebeat you are running? The first iteration of our Fortinet module only supported epoch timestamps in milliseconds, but the latest version (7.9) now supports plain epoch seconds as well. It seems Fortinet changed the format at some point on their side.
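For reference, the value you're seeing fits that explanation: an epoch-seconds value such as 1598520040 read as milliseconds is only about 18.5 days after 1970-01-01, which lines up with the 1970-01-19 dates in your events. Here's a minimal sketch of the difference using the ingest pipeline _simulate API (this is not the module's actual pipeline, and the eventtime field name is only illustrative):

# Simulate a single date processor against a sample epoch-seconds value.
# "UNIX" treats the value as seconds and yields an August 2020 date;
# swap in "UNIX_MS" and you get the January 1970 date you observed.
curl -s -X POST "localhost:9200/_ingest/pipeline/_simulate" -H 'Content-Type: application/json' -d'
{
  "pipeline": {
    "processors": [
      { "date": { "field": "eventtime", "target_field": "event.start", "formats": ["UNIX"] } }
    ]
  },
  "docs": [
    { "_source": { "eventtime": "1598520040" } }
  ]
}'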
If you're already running 7.9, let me know and we can investigate further.
Hi @jamie.hynds,
thank you for your response. I was using 7.8.1. I just upgraded to 7.9, but it still shows the wrong event.start time.
So I tried to load the ingest pipelines into ES again and ran into the following error:
root@graylog:~# filebeat setup --pipelines --modules fortinet
Exiting: 1 error: Error loading pipeline for fileset fortinet/firewall: couldn't load pipeline: couldn't load json.
Error: 400 Bad Request: {"error":{"root_cause":[{"type":"parse_exception","reason":"processor [set] doesn't support one or more provided configuration parameters [ignore_empty_value]","processor_type":"set"}],"type":"parse_exception","reason":"processor [set] doesn't support one or more provided configuration parameters [ignore_empty_value]","processor_type":"set"},"status":400}.
Response body: {"error":{"root_cause":[{"type":"parse_exception","reason":"processor [set] doesn't support one or more provided configuration parameters [ignore_empty_value]","processor_type":"set"}],"type":"parse_exception","reason":"processor [set] doesn't support one or more provided configuration parameters [ignore_empty_value]","processor_type":"set"},"status":400}
I'm using the following setup: Filebeat -> Logstash -> ES.
Do you have any ideas on how I could solve this?
The error is likely caused by a version mismatch between Filebeat and Elasticsearch: the 7.9 pipeline uses the ignore_empty_value option of the set processor, which your current Elasticsearch evidently doesn't recognise. When you upgraded to 7.9, did that include Elasticsearch or just Filebeat? If it was just Filebeat, could you upgrade Elasticsearch to 7.9 as well - that should resolve the issue.
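A quick way to confirm what each side is actually running (assuming Elasticsearch is reachable on localhost:9200 - adjust the host and any credentials to your setup):

# Version reported by the Elasticsearch cluster
curl -s localhost:9200
# Version of the local Filebeat install, for comparison
filebeat version

Both should report 7.9.x before you load the pipelines again with filebeat setup --pipelines --modules fortinet.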
I have now updated ES as well and loaded the new ingest pipelines successfully, but now I'm facing a dissect_parsing_error in Kibana. Basically the whole log message is now stored in the event.original field and log.flags is set to dissect_parsing_error.
Would you be able to tell me the exact error you are getting? Do the events have an "error.message" field? On parsing errors we always log to that field, which would tell us a bit more about what went wrong.
If not, are you modifying the events in any way in Logstash before sending them on, and is Logstash configured to send to the new pipeline?
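For reference, when Filebeat modules go through Logstash, the elasticsearch output in Logstash has to pass the pipeline name along with each event. A minimal sketch of that output section (hosts and index here are placeholders - adjust them to your environment):

output {
  elasticsearch {
    # Placeholder host - point this at your Elasticsearch cluster
    hosts => ["http://localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    # Routes each event to the ingest pipeline Filebeat selected for it
    pipeline => "%{[@metadata][pipeline]}"
  }
}

Without the pipeline setting, events bypass the module's ingest pipeline entirely and arrive unparsed.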
There was no error.message field, and I didn't modify the data in Logstash.
But I have now gone back to v7.8.1 because I had no parsing issues with that version apart from the wrong event.start I mentioned in the opening post.
I guess I will just leave it like that for now.