Logs still being shipped with Filebeat purged

I attempted to use `exclude_lines:` with Filebeat v5.8.3 and didn't have any success:

```yaml
- input_type: log
  paths:
    - /var/log/auth.log
    - /var/log/syslog
    - /var/log/ufw.log
    - /var/log/mail.log
  exclude_files: [".gz$"]
  exclude_lines: ['.bdb_equality_candidates.']
```

I also tried this processor:

```yaml
processors:
  - drop_event.when.regexp:
      system.auth.message: ['.bdb_equality_candidates.']
```
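
Expanded, that shorthand should be equivalent to the form below. One likely catch (my assumption about why the condition never matched): at the Filebeat stage the raw line lives in the `message` field, and `system.auth.message` only exists after the module's ingest pipeline has parsed the event in Elasticsearch:

```yaml
processors:
  - drop_event:
      when:
        regexp:
          # Match on the raw `message` field; `system.auth.message` is
          # created later by the ingest pipeline, not by Filebeat itself.
          message: 'bdb_equality_candidates'
```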

Log entries:

```
Aug 15 00:28:07 authv2 slapd[14027]: <= bdb_equality_candidates: (carLicense) not indexed
```

I tried this in the filebeat.yml config file and in the syslog module. I also tried a regex (yes, it was correct) in the module's syslog config file, working from examples I saw in the forums, with no luck. I abandoned that and attempted to simply exclude the syslog file; logs were still being shipped. I turned off Filebeat; logs were still being shipped. I purged it from the system; logs are still being shipped. All the logs are up to date and the lines match.
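
For context, disabling the syslog fileset of the system module in 5.x would look roughly like this. It's a sketch of the approach, not necessarily my exact config:

```yaml
filebeat.modules:
  # Keep the auth fileset but turn off syslog collection entirely.
  - module: system
    syslog:
      enabled: false
    auth:
      enabled: true
```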

The registry is purged, Filebeat isn't on the system, and there is no other system producing these log entries, yet they are still showing up.

What is forwarding these log entries? Is there a lock, a queue, or something else pushing these files out? Filebeat has been off this system for about 24 hours, and syslog isn't set to push these logs independently of Filebeat.

What is the output of `which -a filebeat`?

Nothing.

Apparently those entries were queued up on the Logstash server. There were millions of lines still being added to the indices long after Filebeat was off. Filebeat is very effective at grabbing and shipping those logs, but Logstash still had to process them, and that took a long time to complete.
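
If the persistent queue was enabled on that Logstash instance, that would explain the behavior: events already accepted from Beats are buffered on disk and drained at Logstash's own pace, so indexing continues after the shipper is gone. These are real logstash.yml settings, but whether they were set here is my guess:

```yaml
# logstash.yml (hypothetical excerpt)
queue.type: persisted   # buffer accepted events on disk
queue.max_bytes: 4gb    # cap on the on-disk backlog
```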

The exclude_lines setting probably does work; it just wasn't acting on lines already shipped to Logstash, given the backlog of lines still waiting to be processed.

Well, that's embarrassing.
