Hi! First, I want to let you know I am very new to Elastic and the community, so if I am posting in the wrong category, please ignore.
I just enrolled in Introduction to Observability: Logging and am going through the lab environment.
I was not able to understand the role of ingest pipelines explained there. Where will the system be able to find the code we have written? Are there any default pipelines available?
Ingest pipelines allow you to intercept a write request in the Elasticsearch cluster, enriching and transforming your document before it gets indexed.
The snippets you present in your post show, on the one hand, the definition/configuration of a pipeline called setter (keep in mind that the name can be freely chosen). A pipeline is made up of one or more so-called processors; in this case it's two set processors, which ensure that every single document that makes it through this pipeline will have the two fields attrs.env and attrs.zone set to dev and us-west. The first snippet shows how you would like your documents to look (with the two fields you want to set).
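To make that concrete, here is a sketch of what such a pipeline definition could look like in Kibana's Dev Tools console. The pipeline name setter and the field names/values come from your snippets; the description text is just an illustrative placeholder:

```
PUT _ingest/pipeline/setter
{
  "description": "Set static environment metadata on every document",
  "processors": [
    { "set": { "field": "attrs.env",  "value": "dev" } },
    { "set": { "field": "attrs.zone", "value": "us-west" } }
  ]
}
```

Each entry under "processors" runs in order, so you could add further processors (rename, grok, etc.) to the same array later.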
The sole remaining question is how you specify the pipeline a document should get routed through. For this you have several options:
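For example, you can name the pipeline per request with the pipeline query parameter, or set it as the default for an index via the index.default_pipeline setting so every write goes through it automatically. A sketch of both, where my-index is a hypothetical index name:

```
# Option 1: per request, via the pipeline query parameter
POST my-index/_doc?pipeline=setter
{ "message": "hello" }

# Option 2: per index, so all writes route through the pipeline
PUT my-index/_settings
{ "index.default_pipeline": "setter" }
```

With the second option, clients don't need to know the pipeline exists at all, which is usually what you want for log ingestion.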