Using modules alongside "normal" log processing

Hi,
I have only been working with ELK for a month or so. I recently upgraded my ELK stack (to 7.4.2), which is configured to use Filebeat to read logs and ship them through Logstash to Elasticsearch on a centralised log server. The Elasticsearch cluster runs on 3 RHEL nodes and processes a total of approx. 2000 logs per day from 80 remote servers.

The logs include all the "standard" RHEL logs (e.g. sudo.log, sendmail.log) as well as other job-specific logs (e.g. haproxy, nginx, tomcat).
At present, I have the filebeat.yml config set to process all logs using a glob over the entire log directory tree (with the date included in the path), which creates a new index each day.
This is working fine and I can search through the data in Kibana without any problems, albeit only with the default list of fields, and herein lies my challenge. What I want to be able to access in Kibana are the fields specific to processes like haproxy (e.g. response:200), which are not part of the default field set (at least I think that's the case).

I have enabled the haproxy module and restarted Filebeat, but I do not see any of the additional haproxy-specific fields when I now run a query. The haproxy documents ingested into the index after the Filebeat restart still contain only the "generic" fields.

My question: is there any specific filebeat.yml config required to have generic log processing work alongside processing performed through modules? For example, do I need to exclude the haproxy.log files from the generic processing, and if so, do I need to include any reference to the haproxy logs and their paths in the filebeat.yml file? I cannot find any reference to operating like this in the documentation. Any assistance would be greatly appreciated.

I can maybe answer part of this. Module source file names/paths have defaults; in this case they are set in /usr/share/filebeat/module/haproxy/log/manifest.yml.
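If your haproxy logs aren't at those default locations, you don't need to edit the manifest; you can point the module at them in modules.d/haproxy.yml. A minimal sketch (the path here is just a placeholder for your own layout):

```yaml
# modules.d/haproxy.yml
- module: haproxy
  log:
    enabled: true
    # overrides the default paths from module/haproxy/log/manifest.yml
    var.paths:
      - /remote/logs/*/haproxy.log
```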

Modules are designed to work with Elasticsearch ingest pipelines, and it's easier to go that route if you can. If you want to go through Logstash, you need to convert the ingest pipelines to Logstash, see this. However, some things don't convert and have to be hand-coded, or they use features that aren't available in Logstash.

I think you would want to exclude the haproxy logs from the generic section; at least per this post, it seems like leaving them in could cause problems.
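Roughly this shape in filebeat.yml, I think -- the globs/paths are placeholders, not tested against your setup:

```yaml
# filebeat.yml (sketch)
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /remote/logs/**/*.log          # your existing "generic" glob
    exclude_files: ['haproxy\.log$']   # stop this input from also picking up the haproxy logs

# load the enabled module configs (e.g. modules.d/haproxy.yml)
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
```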

Good luck :slight_smile:

Thanks @rugenl. Based on your answer I looked into converting the Filebeat haproxy (and nginx) module output for Logstash, but that isn't available. I'm now looking into doing this via grok in Logstash instead. Thanks again.
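For anyone landing here later, this is the kind of filter I'm experimenting with -- just a sketch, the conditional and pattern will likely need adjusting for your own events and haproxy log format:

```
filter {
  # only grok events that came from the haproxy log files
  if [log][file][path] =~ /haproxy/ {
    grok {
      # HAPROXYHTTP is one of the grok patterns that ships with Logstash
      match => { "message" => "%{HAPROXYHTTP}" }
    }
  }
}
```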

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.