Parse rancher logs with logstash

Hi,

I have several k8s clusters running on Rancher 2.3.1, sending several GB of logs per second and causing disk pressure on the source side.

To relieve the source-side bottleneck, logs are sent to a syslog server and written to text files. The syslog program variable is configured to be the cluster name, so each cluster's logs are written to a separate file on the syslog side.
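For reference, here is roughly how the per-cluster file split looks on the syslog side. This is a sketch assuming rsyslog with a legacy-style template; the path `/var/log/rancher/` is just an example:

```
# /etc/rsyslog.d/rancher.conf
# The sender sets the syslog program name to the cluster name,
# so %programname% routes each cluster to its own file.
$template ClusterFile,"/var/log/rancher/%programname%.log"
*.* ?ClusterFile
```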

I would like to use Logstash on the syslog server to parse the logs and send them to Elasticsearch 7.x. I can't get grok to capture the k8s metadata like pod name, cluster name, deployment, and other relevant fields, which requires extracting a variable number of key/value pairs. I would also like to create a separate index per cluster (log file).
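To show what I mean, here is a sketch of the kind of pipeline I'm after. It assumes the program field holds the cluster name and that the Rancher record may be JSON (with a kv fallback for `key=value` pairs); field names like `cluster_name` and the index pattern are just placeholders:

```
filter {
  # Pull the cluster name (syslog program) and the raw payload out of the line
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_host} %{DATA:cluster_name}: %{GREEDYDATA:raw_log}" }
  }
  # Rancher/fluentd often ships the record as JSON; parse it when it is
  json {
    source => "raw_log"
    target => "k8s"
    skip_on_invalid_json => true
  }
  # Fallback: extract a variable number of key=value pairs
  kv {
    source => "raw_log"
    field_split => " "
    value_split => "="
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # One index per cluster (i.e. per source log file)
    index => "rancher-%{cluster_name}-%{+YYYY.MM.dd}"
  }
}
```

Since grok alone can't handle the variable metadata, I'm hoping the json/kv combination is the right direction.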

Any feedback on parsing Rancher logs is appreciated.

Thanks
