Looking through SOF-ELK's config files, I'm curious how the Logstash pipeline works in this implementation. SOF-ELK ships a few dozen pipeline config files, but they are split into stages: none of them contains a full input -> filter -> output configuration. For example, this is the entire contents of one pipeline file:
```
input {
  tcp {
    port => 6052
    type => "windows"
    tags => [ "json" ]
    codec => json {
      charset => "CP1252"
    }
  }
}
```
The other config files are split the same way, with the filter and output configs keying off `type` to identify which events to process. This leaves me with a few questions:
- SOF-ELK is built on ELK 2 (2.4, I believe). Did 2.4 treat pipeline configs differently than 6.x?
- Does Logstash ingest all the config files present (when none are explicitly defined in pipelines.yml) and then run every ingested event through all of them?
- Is this a modified version of Logstash, and has stock Logstash never behaved this way?
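For reference, my reading of the split layout is that the companion stage files look roughly like this. This is a sketch only: the conditional on `type` matches the pattern described above, but the specific plugins and settings are assumptions, not copied from SOF-ELK:

```
# Hypothetical companion filter file (plugin choice is illustrative):
filter {
  if [type] == "windows" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
  }
}

# Hypothetical companion output file (hosts value is illustrative):
output {
  if [type] == "windows" {
    elasticsearch {
      hosts => [ "localhost:9200" ]
    }
  }
}
```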
I'm looking at importing these configs into 6.0, but I don't know whether I need to consolidate them into full pipeline configs or whether they can be left split apart as in SOF-ELK's implementation.
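If the split layout can be kept, my understanding is that a 6.x `pipelines.yml` entry pointing a single pipeline at the whole config directory would look something like this (the `pipeline.id` and path are assumptions for illustration):

```
# pipelines.yml sketch; path is hypothetical
- pipeline.id: sof-elk
  path.config: "/etc/logstash/conf.d/*.conf"
```

My expectation is that Logstash would concatenate all files matching the glob into one pipeline, so every event would flow through every filter and output unless gated by conditionals like the `type` checks above, which is why I am asking whether consolidation is required.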