Multiple Filebeat prospector paths and Logstash pipelines in one Logstash config file for ES/Kibana

Hi,
hope you're all doing fine.
I'm quite new to the Elastic Stack and still learning.
I'm playing with the plugins and datasets to become more familiar with these fantastic tools.
This is my 2nd post :slight_smile:

I want to do the following, but I'm not sure about the right way to do it:

Logstash pipelines.yml:

- pipeline.id: test
  pipeline.workers: 1
  pipeline.batch.size: 1
  path.config: "/tmp/bin/test.conf"
- pipeline.id: anothertest
  pipeline.workers: 1
  pipeline.batch.size: 1
  path.config: "/tmp/bin/anothertest.conf"

filebeat.prospectors:

- input_type: log
  paths:
    - C:\tmp\test.csv
    - C:\tmp\anothertest.csv

The Logstash config files work on these different CSV logfiles, which have different structures.
The only basic setting is:

input {
  beats {
    port => "5044"
  }
}

(here some filters for adding and mutating fields)

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "test"   # or "anothertest", depending on the file
  }
}

How can I manage both CSVs as individual indices at the same time?
I thought of something like this (which is quite pragmatic thinking):

    multi.config:

input {
  beats {
    port => "5044"
    pipeline.id: test
    pipeline.id: anothertest
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => pipeline.id ["test"]
    index => pipeline.id ["anothertest"]
  }
}

Thanks in advance
Regards
Thorben

I do this with Logstash metadata

This is part of my Logstash output config:

index => "%{[@metadata][log_prefix]}-%{[@metadata][index]}-%{+YYYY.MM.dd}"
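In context, a minimal sketch of the surrounding output block (assuming the same local Elasticsearch as in your post; only the index line is taken verbatim from my config):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][log_prefix]}-%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
}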

I only use one pipeline, so I can't really comment on that...

You can have one Filebeat input per file and add fields there that you then use to generate @metadata fields in Logstash. E.g.

# Adding @metadata needed for index sharding to Filebeat logs
mutate {
  copy => {
    "[fields][log_prefix]" => "[@metadata][log_prefix]"
    "[fields][log_idx]" => "[@metadata][index]"
  }
}

One thing to remember is that the @metadata fields are only available within Logstash...
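If you want to verify them while testing, a stdout output with the rubydebug codec can print them (just a debugging sketch, not part of my production config):

output {
  # metadata => true makes rubydebug print the otherwise hidden @metadata fields
  stdout { codec => rubydebug { metadata => true } }
}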

Hi,
OK, I'm not familiar with this so far, but I could try.
Can you tell me where to place the second part?
Is it also part of the Logstash config file, or do I have to place it
inside the filebeat.yml? And if so, where?

Thanks and Brgds
Thorben

I manage Filebeat through Puppet. Using the Puppet module from Elastic I have the following config

/etc/filebeat/filebeat.yml
All configs except for inputs (what used to be called prospectors), which look like:

...
filebeat:
  registry_file: "/var/lib/filebeat/registry"
  config.prospectors:
    enabled: true
    path: "/etc/filebeat/conf.d/*.yml"
  shutdown_timeout: '0'
  modules: []
...

/etc/filebeat/conf.d/foo.yml
One file per input (prospector); each file starts with a -.

So /etc/filebeat/conf.d/foo.yml would look like:

- type: log
  paths:
    - "/var/log/apache2/*"
  fields:
    log_prefix: dc
    log_idx: apache2
  fields_under_root: false
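With fields_under_root: false these values arrive in Logstash nested under [fields], which is exactly what the mutate/copy filter above reads, so the output template resolves to an index name like dc-apache2-2018.06.30 (the date part is just an illustration).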

Hi,
yeah, finally I got it and it works so far!

In filebeat.yml, add as per your advice:

fields:
  names: myname

and grab them in logstash.conf with:

if [fields][names] == "myname" {
  ... do something
}

Afterwards I pass them through the same way to create indices based on the several files (fields):

if [fields][names] == "myname" {
  elasticsearch {
    ...
  }
}
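Put together, a minimal sketch of the full logstash.conf this describes (the second field value "anothername" is just a hypothetical placeholder; the index names are taken from my first post):

input {
  beats {
    port => "5044"
  }
}

filter {
  if [fields][names] == "myname" {
    # ... filters for test.csv
  }
  # hypothetical second value, set in the Filebeat input for the second CSV
  if [fields][names] == "anothername" {
    # ... filters for anothertest.csv
  }
}

output {
  if [fields][names] == "myname" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "test"
    }
  } else if [fields][names] == "anothername" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "anothertest"
    }
  }
}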

Thanks for the hint.
Regards
