Hi, I've recently started experimenting with Elastic Agent integrations and added the Cisco ASA logs integration. The problem I'm facing is that my data comes in with wrong timestamps. Events get added to the index with a single timestamp, for example Dec 19, 2023 @ 13:11:17.00. Then after a while (it seems random) a new timestamp is created and the data gets ingested with that one instead. Since this is firewall logging, I want it to work as a stream with correct per-event timestamps.
Has anyone run into this before and knows how to fix it?
Yes, because the data doesn't matter, and I'm not about to share our firewall data with the public.
As you can see, the data coming into Elasticsearch only arrives at certain times, so to speak. Every event gets assigned the timestamp you can see in the second part of the screenshot, even if it is actually ingested at a later time (it's a data stream). Then at some point the data being ingested gets assigned a new timestamp. That's why you can see all these gaps in the time graph in the first part, with everything being "assigned"/"ingested" under one timestamp. If I run tcpdump on my Elastic Agent, I do see the logs coming in as a stream, and I also see the index size increasing steadily.
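In case it helps to verify: comparing @timestamp with event.ingested shows whether events really arrive continuously but get stamped in batches. A rough query sketch below, assuming the integration's default data stream name logs-cisco_asa.log-default (adjust if yours differs) and that event.ingested is populated by Fleet's final pipeline:

```
// Latest 5 events by actual ingest time, returning both timestamps
GET logs-cisco_asa.log-default/_search
{
  "size": 5,
  "sort": [{ "event.ingested": "desc" }],
  "fields": ["@timestamp", "event.ingested"],
  "_source": false
}
```

If event.ingested keeps advancing while @timestamp stays frozen on one value, the events are flowing but the timestamp parsing is stuck.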
As mentioned, I can't see any issue just from what you shared; redacting the entire message also makes it impossible to tell whether you have parsing errors or not.
What time range are you using in Kibana Discover? You may have a timezone issue, depending on the timezone your firewall is configured with.
What happens when you change the time range to Today, or maybe Last 24 hours?
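If it does turn out to be a timezone or parsing problem, you can override the timestamp handling with a custom pipeline; Fleet integrations run a pipeline named logs-&lt;dataset&gt;@custom after the bundled one. A minimal sketch, assuming the dataset is cisco_asa.log and that the original device timestamp survives in some field; cisco.asa.raw_timestamp below is a placeholder, so check with the _ingest/pipeline/_simulate API which field your documents actually keep:

```
// Hypothetical @custom pipeline re-parsing the device timestamp with an explicit timezone
PUT _ingest/pipeline/logs-cisco_asa.log@custom
{
  "processors": [
    {
      "date": {
        "if": "ctx.cisco?.asa?.raw_timestamp != null",
        "field": "cisco.asa.raw_timestamp",
        "formats": ["MMM dd yyyy HH:mm:ss", "MMM  d yyyy HH:mm:ss"],
        "timezone": "Europe/Amsterdam",
        "target_field": "@timestamp"
      }
    }
  ]
}
```

The timezone value here is just an example; it should match whatever your ASA is actually configured with.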