Logstash syslog fields not removed in index

I created a config file to ingest Cisco syslog output. When I run the config via the command line (/usr/share/logstash/bin/logstash -f cisco.conf -r) everything works as expected: the fields I want show up properly both in stdout{} and in Kibana Discover. The problem arises when I put the conf file in /etc/logstash/conf.d and restart the service. In addition to the fields I want, I am also getting the syslog fields showing up:

log.syslog.facility.name
log.syslog.facility.code
log.syslog.severity.name

"log": {
      "syslog": {
        "severity": {
          "name": "notice",
          "code": 5
        },
        "facility": {
          "name": "user-level",
          "code": 1
        }
      }
    },

I have an explicit mutate { remove_field => [ "log" ] } in the config, which works on the command line but apparently not when running as a daemon. This hasn't happened with the other config I have running for my Palo Alto firewall, so I'm rather confused as to why these fields are popping up in the index when they're seemingly being removed in the Logstash config...
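
For context, the relevant part of the filter looks roughly like this (a trimmed-down sketch; the actual grok pattern, pattern file, and field names are in the gist):

    filter {
      grok {
        # patterns_dir and CISCO_LOG stand in for the referenced pattern file
        patterns_dir => [ "/etc/logstash/patterns" ]
        match => { "message" => "%{CISCO_LOG}" }
      }
      # drop the syslog metadata block entirely
      mutate { remove_field => [ "log" ] }
    }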

Anyone have any ideas why it would work differently between the CLI and daemon?

The gist link shows the config file (sanitized a bit), the referenced pattern file and the json output from the index when running in daemon mode (heavily redacted, but shows the fields)

I can live with the extra garbage if needed, but it's rather irritating not understanding why it's ignoring my remove_field entry when running in daemon mode...

Any help appreciated.

Extra bits in case it matters:
Ubuntu 22.04.1 LTS
elasticsearch/stable,now 8.6.1 amd64 [installed]
kibana/stable,now 8.6.1 amd64 [installed]
logstash/stable,now 1:8.6.1-1 amd64 [installed]

That doesn't make sense; maybe a different if conditional is being executed.
Can you move the mutate to the end of the filter block?

    }
    # place the mutate last, right before the filter block closes
    mutate { remove_field => [ "log" ] }
}

output {

Another option is to use the prune filter with a blacklist:

prune { blacklist_names => [ "log" ] }

Turns out it wasn't a bug or problem (per se) with my configuration file...

After seeing some additional screwy things (like my ciscologunknowntype.log file growing to 4G in a hurry), I learned that with the default pipelines.yml, everything in the conf.d directory gets essentially merged into a single config rather than each conf file being treated separately. So my Palo Alto config and my Beats config ended up merged with the Cisco config (hence all the "unknown" stuff from the Palo firewall feeding into the Cisco file output). Rather than trying to untangle the mess and hand-merge everything, I created separate entries in the pipelines.yml file, and now it works like it should.
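
For anyone hitting the same thing, pipelines.yml now looks roughly like this (the pipeline ids and file names below are just examples, adjust to your own conf files):

    # /etc/logstash/pipelines.yml
    # one pipeline per config file instead of the default single "main"
    # pipeline that loads /etc/logstash/conf.d/*.conf and merges everything
    - pipeline.id: cisco
      path.config: "/etc/logstash/conf.d/cisco.conf"
    - pipeline.id: paloalto
      path.config: "/etc/logstash/conf.d/paloalto.conf"
    - pipeline.id: beats
      path.config: "/etc/logstash/conf.d/beats.conf"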

Still learning all the fun plumbing in elastic and logstash...

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.