Best practice to organize Grok patterns in production

Hello,

In our system we're progressing with ELK stack usage on a larger scale. We're about to connect a large number of systems, and almost all of them require a custom Grok pattern for log parsing.
I'm wondering what the best practice is to organize all these patterns so that they're easy to maintain.
Right now my idea looks as follows:

  1. In a common location like /etc/logstash/patterns I will create a directory for each system - for example /etc/logstash/patterns/system_xyz

  2. In this folder I will create a file "system_xyz_pattern", which will store the Grok pattern for that system

  3. In logstash config folder I will add a file system_xyz.conf with the configuration inside:

    filter {
      if [system_name] == "system_xyz" {
        grok {
          patterns_dir => ["/etc/logstash/patterns/system_xyz"]
          match => [ "message", "%{system_xyz}" ]
          overwrite => ["message"]
        }

        if "_grokparsefailure" not in [tags] {
          date {
            match => ["timestamp", "yyyy-MM-dd HH:mm:ss"]
          }
        }
      }
    }

Is there any way to simplify this? Step #3 seems redundant - it does nothing but match the pattern name from the Grok file with a field from an event. Is it possible to write some generic code to handle this for all systems?

Can you propose any better way to organize hundreds of Grok patterns?

Well, if the configuration files are sufficiently similar you could of course write a piece of code to generate the configuration files from whatever form is most convenient for you.
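As a sketch of that idea, here is a minimal Python generator that stamps out one `.conf` file per system from a template. The directory layout, template, and system names are illustrative assumptions based on the structure described above, not a prescribed setup:

```python
#!/usr/bin/env python3
"""Sketch: generate one Logstash filter config per system from a template.

Assumes the layout described above: /etc/logstash/patterns/<system>/
holds the pattern file, and the generated configs go into the Logstash
config directory. Paths and system names are illustrative.
"""
import os

TEMPLATE = """filter {{
  if [system_name] == "{name}" {{
    grok {{
      patterns_dir => ["/etc/logstash/patterns/{name}"]
      match => [ "message", "%{{{name}}}" ]
      overwrite => ["message"]
    }}
    if "_grokparsefailure" not in [tags] {{
      date {{
        match => ["timestamp", "yyyy-MM-dd HH:mm:ss"]
      }}
    }}
  }}
}}
"""

def generate(systems, out_dir):
    """Write <out_dir>/<name>.conf for each system name."""
    for name in systems:
        path = os.path.join(out_dir, f"{name}.conf")
        with open(path, "w") as f:
            f.write(TEMPLATE.format(name=name))

if __name__ == "__main__":
    # Hypothetical system list; in practice this could come from a
    # file listing, or from scanning /etc/logstash/patterns itself.
    generate(["system_xyz", "system_abc"], "/etc/logstash/conf.d")
```

The list of systems could even be derived by scanning the patterns directory, so adding a new system only means dropping in a new pattern file and re-running the generator.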

FWIW I've never bothered setting up pattern files. If the stock grok patterns have been insufficient I've just inlined the necessary regexps in the configuration files.
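For example, instead of a pattern file, a filter can combine the stock patterns directly in the match string (the field names here are just illustrative):

    filter {
      grok {
        match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} \[%{LOGLEVEL:level}\] %{GREEDYDATA:msg}" ]
      }
    }

That keeps the pattern next to the filter that uses it, at the cost of longer match strings.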