I have only been working with ELK for a month or so. I recently upgraded my ELK stack (to 7.4.2), which uses Filebeat to ship logs through Logstash into Elasticsearch on a centralised log server. The Elasticsearch cluster runs on 3 RHEL nodes and processes a total of approximately 2000 logs per day from 80 remote servers.
The logs include all the "standard" RHEL logs (e.g. sudo.log, sendmail.log) as well as application-specific logs (e.g. haproxy, nginx, tomcat).
At present, the filebeat.yml config is set to process all logs using a glob over the entire log directory tree (date included in the path), which creates a new index each day.
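For context, the input section of my filebeat.yml looks roughly like this (the paths and glob depth are simplified for illustration, not copied verbatim from my config):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    # Recursive glob over the log tree; the real config includes
    # the date in the path, omitted here for brevity.
    paths:
      - /var/log/**/*.log

output.logstash:
  hosts: ["logstash.example.local:5044"]   # illustrative hostname
```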
This is working fine, and I can search through the data in Kibana without any problems, albeit only with the default list of fields, and herein lies my challenge. What I want to access in Kibana are the fields specific to processes like HAProxy (e.g. response:200), which are not part of the default field set (at least, I think that's the case).
I have enabled the haproxy module and restarted Filebeat, but I do not see any of the HAProxy-specific fields when I run a query. The HAProxy documents ingested after the Filebeat restart still contain only the "generic" fields.
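For reference, my modules.d/haproxy.yml is essentially the default generated by `filebeat modules enable haproxy`, something like the following (the log path is illustrative, not my exact path):

```yaml
- module: haproxy
  log:
    enabled: true
    var.input: log
    # Illustrative path; my real HAProxy logs live under the dated directory tree.
    var.paths:
      - /var/log/haproxy/haproxy.log
```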
My question: is any specific filebeat.yml configuration required for generic log processing to work alongside processing performed through modules? For example, do I need to exclude the haproxy.log files from the generic input, and if so, do I need to reference the HAProxy logs and their paths anywhere in filebeat.yml? I cannot find any documentation describing this kind of setup. Any assistance would be greatly appreciated.
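In case it helps clarify what I mean, this is the kind of exclusion I had in mind for the generic input, using Filebeat's `exclude_files` regex list (this is just my guess at how it might be done, not something I have verified works alongside the module):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/**/*.log
    # Guess: keep haproxy.log out of the generic input so only the
    # haproxy module picks it up and applies its field parsing.
    exclude_files: ['haproxy\.log$']
```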