I was going through the workflow of the ELK stack and am a bit confused about which component is required in which tier.
(I was mostly looking at a public dataset example from the official GitHub examples.)
(a) The example uses Filebeat to send the data directly to Elasticsearch (and NOT via Logstash)
(b) The pipeline/processors/grok definitions are loaded into the Elasticsearch endpoint directly, as per the example (roughly as sketched below, if I read it correctly)
(c) The index template is also loaded into Elasticsearch directly (though it has some errors)
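To make (b) and (c) concrete, my understanding is that the example does something roughly like the following sketch. The endpoint URL, pipeline/template names, and the grok pattern here are placeholders I made up, not the exact ones from the example:

```python
import requests

ES = "http://localhost:9200"  # placeholder Elasticsearch endpoint

# (b) Ingest pipeline with a grok processor, stored inside Elasticsearch
# itself via PUT _ingest/pipeline/<id> -- no Logstash involved.
pipeline = {
    "description": "parse example web logs",
    "processors": [
        {"grok": {"field": "message", "patterns": ["%{COMBINEDAPACHELOG}"]}}
    ],
}
requests.put(f"{ES}/_ingest/pipeline/my-weblog-pipeline", json=pipeline).raise_for_status()

# (c) Index template, also stored inside Elasticsearch via PUT _index_template/<name>.
template = {
    "index_patterns": ["weblogs-*"],
    "template": {"settings": {"number_of_shards": 1}},
}
requests.put(f"{ES}/_index_template/my-weblog-template", json=template).raise_for_status()
```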
So my queries are:
- Can pipelines/processors/grok be loaded into the Elasticsearch tier directly? My expectation from reading the docs was that pipeline elements go into the Logstash tier only.
- Where does the pipeline/index template live? Does it exist as a physical JSON file, or does it get indexed into Elasticsearch?
- Is the example bypassing Logstash because the data is sent by Filebeat? Is there any option to load data directly into Elasticsearch other than Filebeat? (Something like the bulk sketch at the end of this post, maybe?)
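For the last question, here is the kind of thing I mean by "directly load": pushing documents through the `_bulk` API myself and pointing it at the ingest pipeline with the `pipeline` query parameter, with no Filebeat or Logstash in between. The index name, pipeline name, and log lines below are made up for illustration:

```python
import json
import requests

ES = "http://localhost:9200"  # placeholder Elasticsearch endpoint

# Two made-up log lines, sent straight to Elasticsearch via the _bulk API
# and run through the ingest pipeline defined earlier (?pipeline=...).
lines = [
    '127.0.0.1 - - [28/Feb/2024:10:27:10 +0000] "GET / HTTP/1.1" 200 3395 "-" "curl/7.68.0"',
    '127.0.0.1 - - [28/Feb/2024:10:27:11 +0000] "GET /about HTTP/1.1" 404 153 "-" "curl/7.68.0"',
]

bulk_body = ""
for line in lines:
    bulk_body += json.dumps({"index": {}}) + "\n"      # bulk action line
    bulk_body += json.dumps({"message": line}) + "\n"  # document line

resp = requests.post(
    f"{ES}/weblogs-2024.02/_bulk",
    params={"pipeline": "my-weblog-pipeline"},
    data=bulk_body,
    headers={"Content-Type": "application/x-ndjson"},
)
resp.raise_for_status()
print(resp.json()["errors"])  # False if every document was indexed
```

Would that be a reasonable way to skip Filebeat entirely, or is Filebeat/Logstash expected for anything beyond toy loads?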