Parse Rancher text logs using Filebeat

Hi,

I have several k8s clusters running on Rancher 2.3.1, sending several GB of logs per second and causing disk pressure on the source side.

To relieve the source bottleneck, logs are sent to a syslog server and written to text files. The syslog program variable is configured to be the cluster name, so each cluster's logs are written to a separate file on the syslog side.

I would like to use Filebeat or Logstash on the syslog server to parse the logs and send them to Elasticsearch 7.x. I can't get grok to capture k8s metadata such as pod name, cluster name, and deployment, since that requires extracting a variable number of key/value pairs. I would also like to create a separate index per cluster (i.e. per log file).
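For reference, this is the kind of Filebeat setup I have in mind. The paths and field names are placeholders, and the JSON-decoding step assumes the Rancher/fluentd payload arriving via syslog is JSON; please correct me if that approach is wrong:

```yaml
filebeat.inputs:
  - type: log
    # Placeholder: one file per cluster, as written by the syslog server
    paths:
      - /var/log/rancher/*.log

processors:
  # Derive the cluster name from the file name,
  # e.g. /var/log/rancher/cluster-a.log -> cluster: cluster-a
  - dissect:
      tokenizer: "/var/log/rancher/%{cluster}.log"
      field: "log.file.path"
      target_prefix: ""
  # Assumption: the forwarded payload is JSON (pod name, namespace, etc.);
  # decode it into top-level fields when the message looks like JSON.
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true
      when:
        regexp:
          message: "^\\{"

output.elasticsearch:
  hosts: ["localhost:9200"]
  # One index per cluster; events without a cluster field would be dropped
  index: "rancher-%{[cluster]}-%{+yyyy.MM.dd}"

# Required in 7.x when overriding the default index name
setup.template.name: "rancher"
setup.template.pattern: "rancher-*"
setup.ilm.enabled: false
```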

Any feedback on parsing Rancher logs is appreciated.

Thanks

Hi @tru64gurus,

Did you try using the system syslog fileset? https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-system.html#_syslog_fileset_settings It should help you read logs from syslog.
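For example, something along these lines in `modules.d/system.yml` (the path is just a placeholder for wherever your syslog server writes the files):

```yaml
- module: system
  syslog:
    enabled: true
    # Placeholder: point this at the per-cluster files on the syslog host
    var.paths: ["/var/log/rancher/*.log"]
```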

Once they are in, you can probably make use of the add_kubernetes_metadata processor to enrich them.
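One caveat: add_kubernetes_metadata needs to reach the cluster's Kubernetes API, which may be a limitation when Filebeat runs on an external syslog host. A rough sketch, where the kubeconfig path and the lookup field are placeholders, and which assumes you have already parsed a `<namespace>/<pod-name>` value out of each log line:

```yaml
processors:
  - add_kubernetes_metadata:
      # Placeholder path; the processor must be able to reach the
      # cluster's API server from the syslog host.
      kube_config: /home/user/.kube/config
      default_indexers.enabled: false
      default_matchers.enabled: false
      indexers:
        # Looks up pods by "<namespace>/<pod-name>"
        - pod_name:
      matchers:
        # Hypothetical field holding "<namespace>/<pod-name>",
        # parsed from the log line in an earlier processor
        - fields:
            lookup_fields: ["kubernetes.pod_reference"]
```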

Best regards
