Central pipeline management - beats strategy

Hi Everybody,
we'd like to have a strategy for Logstash and Beats, and we're using centralized pipeline management. Rather, we'd like to use it :slight_smile: The idea is to configure nothing on Logstash itself, only in Kibana (via the GUI). The question is for an experienced person who can advise which way we can go.
We're running ELK 7.9.2 and have two Logstash instances. Here is our central pipeline definition:

input {
  beats {
    port => 5044
    ssl => true
    ssl_key => 'xx'
    ssl_certificate => 'xx'
  }
}
filter {
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["https://es01","https://es02","https://es03"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
      user => "xxx"
      password => "xxx"
      ssl => true
      ssl_certificate_verification => true
      cacert => "/usr/share/logstash/config/certificates/ca/ca.crt"
    }
  } else {
    elasticsearch {
      hosts => ["https://es01","https://es02","https://es03"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      user => "xxx"
      password => "xxx"
      ssl => true
      ssl_certificate_verification => true
      cacert => "/usr/share/logstash/config/certificates/ca/ca.crt"
    }
  }
}
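
For reference, the `[@metadata][pipeline]` conditional above relies on the Beat itself: when a Filebeat module is enabled, Filebeat sets `[@metadata][pipeline]` on each event, and the Logstash output simply forwards it to the matching Elasticsearch ingest pipeline. On the Beats side, nothing more is needed than pointing at the right Logstash port. A minimal sketch (hostnames and CA path are placeholders, not our real values):

```yaml
# filebeat.yml (sketch)
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml

output.logstash:
  # Enabled modules (e.g. the system module) set [@metadata][pipeline]
  # on each event; the Logstash output block above passes it through
  # to the corresponding Elasticsearch ingest pipeline.
  hosts: ["logstash01:5044", "logstash02:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
```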

In logstash.yml we have the definition of the above pipeline (on both Logstash instances):

xpack.management.pipeline.id: ["logst_beats_pipe"]
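
With the strategy below, this setting would simply list every pre-created pipeline ID. A sketch of the relevant logstash.yml section (the Elasticsearch hosts, username, and poll interval shown here are assumptions, not our actual values):

```yaml
# logstash.yml (sketch)
xpack.management.enabled: true
xpack.management.elasticsearch.hosts: ["https://es01", "https://es02", "https://es03"]
xpack.management.elasticsearch.username: "logstash_admin_user"
xpack.management.elasticsearch.password: "xxx"
# All centrally managed pipelines this Logstash node should run:
xpack.management.pipeline.id: ["logst_beats_pipe", "logst_beats_pipe2", "logst_beats_pipe3"]
# How often to poll Elasticsearch for pipeline changes made in Kibana:
xpack.management.logstash.poll_interval: 5s
```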

It's working right now, but we have only one pipeline, so it's less "elastic". Here is our strategy:

  1. Prepare more central pipelines for Beats:
    a) port 5044 (as above) -> logst_beats_pipe (logstash.yml)
    b) port 5045 -> logst_beats_pipe2 (in logstash.yml) - for example with some grok filter
    c) port 5046 -> logst_beats_pipe3, and so on (in logstash.yml) - for example with some other filter.
    In Beats (Filebeat/Metricbeat) we choose a particular pipeline depending on what we want to filter/collect, etc.
    This approach allows us to prepare more pipelines in advance (let's say inactive ones), so we don't have to reconfigure Logstash later on the production site. The only configuration is made in Kibana and in the Beats config (pipeline ID); everything is done through the Kibana GUI. Is this correct? Or do you advise doing it in a completely different way?
  2. We'd like to use ingest pipelines - is there a chance to feed the input of such a pipeline with the output of a specific central pipeline (in Kibana, of course)?
  3. Any other suggestions will be helpful.
    The main reason is to avoid touching the Logstash config later, while still keeping more options open.
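
To illustrate point 1b, a second central pipeline (created in the Kibana GUI as logst_beats_pipe2) could look like the sketch below. The grok pattern is only an example; the port, certificate paths, and credentials mirror the first pipeline:

```
input {
  beats {
    port => 5045
    ssl => true
    ssl_key => 'xx'
    ssl_certificate => 'xx'
  }
}
filter {
  # Example filter only: parse a simple "LEVEL message" log line.
  grok {
    match => { "message" => "%{LOGLEVEL:log.level} %{GREEDYDATA:log.message}" }
  }
}
output {
  elasticsearch {
    hosts => ["https://es01","https://es02","https://es03"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    user => "xxx"
    password => "xxx"
    ssl => true
    ssl_certificate_verification => true
    cacert => "/usr/share/logstash/config/certificates/ca/ca.crt"
  }
}
```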

Many thanks in advance

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.