Hi,
I would like to create multiple ingestion pipelines in Elastic Cloud, each containing a pipeline configuration (inputs, filters, outputs).
These pipelines are being created to support different customers (we’re an MSP), so each pipeline will be slightly different: different syslog and Beats ports will be specified in the respective inputs, and the outputs will differ in terms of index names, each of which will contain a unique customer ID (taken from a CRM system).
So far, I’ve used Python to search for strings matching the pattern of certain placeholders (ports and customer ID), replace them with new values, and then create the pipeline in Elastic Cloud via the API. In addition, I’m also reading and updating logstash.yml with the new pipeline IDs.
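To make the question concrete, here is a minimal sketch of what I mean, using Python's `string.Template` for the placeholder substitution. The pipeline body, placeholder names, ports and customer ID below are all made-up examples, not our real config:

```python
from string import Template
import json

# Logstash pipeline config with placeholders for the per-customer values.
# The DSL body here is illustrative only.
PIPELINE_TEMPLATE = Template("""
input {
  syslog { port => $syslog_port }
  beats  { port => $beats_port }
}
output {
  elasticsearch {
    index => "logs-$customer_id-%{+YYYY.MM.dd}"
  }
}
""")

def render_pipeline(customer_id: str, syslog_port: int, beats_port: int) -> str:
    """Fill in the placeholders for one customer."""
    return PIPELINE_TEMPLATE.substitute(
        customer_id=customer_id,
        syslog_port=syslog_port,
        beats_port=beats_port,
    )

# Example values for a hypothetical customer "acme123".
rendered = render_pipeline("acme123", 5140, 5044)

# The rendered config then goes into the request body that gets PUT to the
# centralized pipeline management endpoint (whichever one you already call),
# e.g. something along the lines of PUT _logstash/pipeline/<pipeline-id>.
request_body = json.dumps({"pipeline": rendered})
```

It works, but the pipeline body itself is still just an opaque string as far as Python is concerned.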
Ideally, the pipeline configuration (inputs, filters, outputs) would be in a format recognised by Python (and other languages), such as JSON or YAML (like other Logstash configuration files). That way it could be easily parsed and the key values updated, rather than searching for and replacing strings, which seems clunky in what appears to be a proprietary configuration format.
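In the meantime, the closest I've got to this is keeping only the per-customer values in a format Python parses natively, and leaving the Logstash DSL as a template. Again a sketch with invented customer data:

```python
import json

# Per-customer values kept in JSON, which Python can parse directly,
# rather than embedded in the Logstash DSL. Values are illustrative.
customers_json = """
[
  {"customer_id": "acme123",   "syslog_port": 5140, "beats_port": 5044},
  {"customer_id": "globex456", "syslog_port": 5141, "beats_port": 5045}
]
"""

customers = json.loads(customers_json)

# Each record drives one pipeline: a pipeline ID for logstash.yml plus
# the values to substitute into the config template.
pipeline_ids = [f"pipeline-{c['customer_id']}" for c in customers]
```

But the pipeline configuration itself still has to be treated as an unstructured string.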
What I would like to know is: is there a better way of achieving this, i.e. deploying custom pipelines in Elastic Cloud that differ based on the customer the pipeline relates to? Am I going about this the wrong way? Can the pipeline configuration be parsed using Python or some other method?
Any advice is much appreciated.
Thanks,
Matt