How to ingest from application logs and modules?

We have a system from which we would like to collect the regular system logs via the Filebeat system module.
filebeat modules enable system, and done. Great. Nothing is out of place.
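For the record, the whole of that step:

filebeat modules enable system

which, if I understand the modules.d layout correctly, just renames system.yml.disabled to system.yml.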

But we have another application that requires its own input entry in filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/spool/pbs/server_priv/accounting/*
  exclude_files: ['\.gz$']
  fields:
    pbs_accounting: true
  fields_under_root: true

The output is set up for Logstash in order to collect those non-modularised logs.
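For context, the output section looks roughly like this (hostname and port are illustrative):

output.logstash:
  hosts: ["logstash.example.com:5044"]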

But then I read that Filebeat modules work better when reporting directly to Elasticsearch (admittedly in an old post).

Then I read about setting the indices explicitly in Filebeat to take advantage of the Kibana dashboards (again, an old post).

Both of these seem like good things to have - but the documentation specifically states that only a single output may be defined.

So now I'm confused.

My first question: can modules be used alongside a non-modularised application's logs defined in filebeat.yml?
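To make the question concrete, this is roughly the combined setup I'm hoping is valid (paths and hosts are illustrative):

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml

filebeat.inputs:
- type: log
  paths:
    - /var/spool/pbs/server_priv/accounting/*

output.logstash:
  hosts: ["logstash.example.com:5044"]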

If they can't, then presumably I need to write the parser for the system logs in Logstash myself? Something like this? Or are there more complex matches available?

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
  date {
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}

This is where we are swinging around to the real question: I would love access to the fields listed as parsed in the system module, but that simple match above isn't going to get me there.

Is there a way I can get the rich field set of the Filebeat system module while still harvesting a non-standard application's logs and sending everything to Logstash?

Oooooh. Is this how I get the rich system parsing in Logstash: use Logstash pipelines for parsing? Does that mean I need to set

fields:
  fileset.module: "system"

in order to get the parsing right in that filter?
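If I'm reading that correctly, the Logstash filter would then branch on that field, something like this sketch (untested, and the pbs_accounting branch is just my guess at handling the custom input):

filter {
  if [fileset][module] == "system" {
    # system-module style parsing (grok + date) goes here
  } else if [pbs_accounting] {
    # custom parsing for the PBS accounting records goes here
  }
}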

Is there a way to set up Filebeat so that modules go to a different port, letting me use multiple pipeline configs? I.e., send Filebeat-defined modules to port 5045 and other logs to port 5044?
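On the Logstash side, what I imagine is two pipelines in pipelines.yml, each with its own beats input (ids and paths are made up):

- pipeline.id: modules
  path.config: "/etc/logstash/conf.d/modules.conf"   # input { beats { port => 5045 } }
- pipeline.id: apps
  path.config: "/etc/logstash/conf.d/apps.conf"      # input { beats { port => 5044 } }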
