Logstash Pipeline Behavior

Looking through SOF-ELK's config files, I'm curious how the pipeline works in this implementation. SOF-ELK contains a few dozen pipeline config files, but they're split into stages; none of them contains a full input -> filter -> output configuration. For example, this is the entire contents of one config file:

input {
  tcp {
    port => 6052
    type => "windows"
    tags => [ "json" ]
    codec => json {
      charset => "CP1252"
    }
  }
}

The other config files are built the same way, with filter and output sections that match on type to decide which events get processed (I've put a rough sketch of what one of those stages looks like after the list below). This leaves me with a couple of questions:

  • SOF-ELK is built on ELK 2 (2.4, I believe). Did 2.4 treat pipeline configs differently than 6.x does?
  • Does Logstash load all of the config files present (if nothing is explicitly defined in pipelines.yml), with every ingested event then passing through all of them?
  • Or is SOF-ELK running a modified version of Logstash, and stock Logstash has never behaved this way?
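
To illustrate what I mean by matching on type, the other files look roughly like this (paraphrased from memory rather than copied from SOF-ELK, so treat the filter contents as a placeholder):

filter {
  if [type] == "windows" {
    # placeholder for SOF-ELK's actual windows parsing
    mutate {
      add_tag => [ "filtered_windows" ]
    }
  }
}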

I'm looking at importing these configs into 6.0, but I don't know whether I need to consolidate them into full pipeline configs or whether they can be left split apart as in SOF-ELK's implementation.

Yes. For each pipeline, Logstash gathers all of the files that match the regexp and concatenates them. That's true whether you are using pipelines.yml or just an old-school single pipeline called main. It reads events from every input, sends them through all of the filters, and writes each event to every output unless you have conditional logic that makes it do otherwise.
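
As a rough sketch (the file names and index names here are made up, not taken from SOF-ELK), two files like these in the same directory are concatenated and run as one config, and only the conditional keeps non-windows events out of the first output:

# 10-input-windows.conf
input {
  tcp {
    port => 6052
    type => "windows"
  }
}

# 90-outputs.conf
output {
  if [type] == "windows" {
    elasticsearch {
      index => "windows-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      index => "other-%{+YYYY.MM.dd}"
    }
  }
}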

What/Where is the regexp that you are talking about?

Sorry, I mean if you use something like

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
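
And if you later want truly separate pipelines rather than one concatenated config, each entry in pipelines.yml can point at its own glob. For example (directory layout invented for illustration):

- pipeline.id: windows
  path.config: "/etc/logstash/conf.d/windows/*.conf"
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog/*.conf"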
