Filebeat sends 20-day-old logs

Hello,

I had approximately 12 hours of downtime on one server (RHEL 8), and after it came back online, Filebeat started sending logs from the time of the downtime. Now I can see logs in Kibana whose timestamp is the day they were sent to Logstash, but the "original message" of each log shows a date 20 days back.

**Sep 27, 2023** @ 11:52:27.109 timestamp:2023-09-08 02:59:56.614
class:c.etiam.ppx.processors.ExceptionHandlingProcessor original_message:2023-09-08
02:59:56.614 [https-jsse-nio-7443-exec-13] ERROR []
c.etiam.ppx.processors.ExceptionHandlingProcessor - Exception occurred. Reason:
[{"timestamp":"2023-**09-08**T00:59:56.612+00:00","status":500,"error":"Internal Server
Error","path":"/auth-client/heartbeat"}]. Exception message: [HTTP operation failed
invoking https://tppxtrs1a.tm.local:8444/auth-client/heartbeat with statusCode: 500].
Filtered stacktrace: [] agent.type:filebeat host.name:tppxtrs1a host.hostname:tppxtrs1a
host.ip:172.25.36.81, fe80::250:56ff:fe86:edfc, 172.25.37.81, fe80::6140:6fc4:f6db:7c8f
input.type:log fields.env_ppx:tppx fields.document_type:trs level:ERROR msg:Exception
occurred. Reason:
[{"timestamp":"**2023-09-08**T00:59:56.612+00:00","status":500,"error":"Internal Server
Error","path":"/auth-client/heartbeat"}]. Exception message: [HTTP operation failed
invoking https://tppxtrs1a.tm.local:8444/auth-client/heartbeat with statusCode: 500].
Filtered stacktrace: [] tags:beats_input_codec_plain_applied, replaced, trs_parsed,
_dateparsefailure thread:https-jsse-nio-7443-exec-13 log.offset:3,912,044
log.file.path:/var/log/ppx/trs/application-trs.log @version:1 @timestamp**:Sep 27**, 2023 @
11:52:27.109 _id:bVML1ooBjUByS2T9b5JF _type:_doc _index:tppx-2023.39 _score: -

I tried restarting Logstash and Filebeat, but Filebeat continues to send old logs even now, 20 days after the downtime.
I want to stop Filebeat from sending logs from the downtime period, since those logs are of no use now and they are flooding Kibana. This happens on only one server: two servers were down for a few hours, but the problem is present on only one of them, and the Filebeat configuration and version are identical on all of them.
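
To illustrate what I mean by dropping those events, this is a rough sketch of a temporary drop_event processor I was considering; the log path comes from the event above, but the date pattern for the downtime window is just my assumption and I have not tested it:

```yaml
# filebeat.yml (sketch, untested) -- drop events whose raw line starts with a
# date in the assumed downtime window, before they are sent to Logstash.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/ppx/trs/application-trs.log
    processors:
      - drop_event:
          when:
            regexp:
              message: '^2023-09-0[7-9]'   # adjust to the actual downtime dates
```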

Filebeat is 7.17.0, and there have been no issues with the ELK stack or Filebeat other than this one.

I don't know whether I should clear Filebeat's registry (metadata), because I am afraid it would then re-read the log files from the beginning and flood ELK with old logs, or perhaps create duplicate logs.
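
For reference, if clearing the registry turns out to be necessary, this is the kind of configuration I had in mind to limit re-reading; the durations and the tail_files option are assumptions on my part, not something I have verified:

```yaml
# filebeat.yml (sketch, untested) -- settings intended to limit re-ingestion
# after the registry is cleared; values are placeholders.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/ppx/trs/application-trs.log
    ignore_older: 48h      # skip files whose modification time is older than 48h
    clean_inactive: 72h    # must be greater than ignore_older + scan_frequency
    tail_files: true       # files without registry state are read from the end, not the beginning
```

My understanding is that tail_files only applies to files Filebeat has no state for, so lines written while the registry is cleared could be skipped, which is part of why I am hesitant.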

Is there anything else I could try that does not involve permanent changes to the Filebeat configuration?

Thank you.
