Its purpose is to be used with Elasticsearch, but Beats can publish events to Logstash/Redis/Kafka as well. Normally one uses Logstash to finally index events into Elasticsearch. The @metadata
field is published to all outputs except Elasticsearch. It contains all the information needed to index events into Elasticsearch just as the Beat itself would have done. Nothing prevents you from making use of this information in Logstash.
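For illustration, the @metadata a Beat attaches looks roughly like this (shown in Logstash's rubydebug style; the version value is hypothetical):

```
"@metadata" => {
  "beat"     => "filebeat",
  "version"  => "6.5.0",    # hypothetical version
  "pipeline" => "somelog"   # present only if the input configures an Ingest Node pipeline
}
```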
Does the pipeline config go under the `- type` & `paths` structure, or is it a global config (i.e. where you have `filebeat.config`)?
It's a per-input setting. Use it after `- type: ...`
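A minimal sketch of such an input (the path and pipeline name are hypothetical; in Filebeat 6.x the section is called `filebeat.inputs`):

```
filebeat.inputs:
- type: log
  paths:
    - /var/log/somelog/*.log   # hypothetical path
  pipeline: somelog            # Ingest Node pipeline for events from this input
```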
- On ports: can I use, for example, the same default Logstash port 5044 for all the Filebeats to send to Elasticsearch?
Filebeat supports one output only. You must use one port.
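On the Logstash side that means a single beats input that all Filebeat instances point at, for example:

```
input {
  beats {
    port => 5044   # all Filebeat instances send to this one port
  }
}
```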
If I understand you correctly, I would then have one input file on the Logstash side with something like:
Yes and no. I don't think you can use conditionals in Logstash inputs. But you can do this in filters (e.g. create one file with filters per pipeline):
```
filter {
  if [@metadata][pipeline] == "somelog" {
    # remove [@metadata][pipeline], so processed events are not forwarded
    # to an Ingest Node pipeline again
    mutate { remove_field => ["[@metadata][pipeline]"] }
    # IMPLEMENT ME: actual filters for "somelog"
  }
}
```
And in the output one can do:
```
output {
  if [@metadata][pipeline] {
    elasticsearch {
      ...
      "index" => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+yyyy.MM.dd}"
      "pipeline" => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      ...
      "index" => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+yyyy.MM.dd}"
    }
  }
}
```
By filtering and removing the pipeline field in Logstash, while still forwarding unprocessed events to Elasticsearch Ingest Node, you can make use of both processing in Logstash and in Ingest Node. Use cases:
- Reuse filters written against Ingest Node (Logstash acts as a proxy for some data sources, yet can filter/process others).
- Migrate Ingest Node users to Logstash (one pipeline at a time).
- Migrate from Logstash to Ingest Node (one pipeline at a time).
As an alternative to conditionals you can use multiple pipelines in Logstash, as shown here (forwarding between pipelines is still in beta): Pipeline-to-pipeline communication | Logstash Reference [8.11] | Elastic
In Logstash, each pipeline has its own inputs, filters, and outputs, and therefore its own set of workers. This allows you to dedicate more concurrent workers to some log sources, but it also increases the management overhead on your side.
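A sketch of the pipeline-to-pipeline approach in `pipelines.yml`, using the distributor pattern (the pipeline ids and the "somelog" name are hypothetical):

```
# pipelines.yml (sketch; pipeline ids are hypothetical)
- pipeline.id: beats-distributor
  config.string: |
    input { beats { port => 5044 } }
    output {
      if [@metadata][pipeline] == "somelog" {
        pipeline { send_to => ["somelog-processing"] }
      } else {
        pipeline { send_to => ["default-processing"] }
      }
    }
- pipeline.id: somelog-processing
  config.string: |
    input { pipeline { address => "somelog-processing" } }
    # IMPLEMENT ME: filters and elasticsearch output for "somelog"
```

Each pipeline then runs with its own worker threads, at the cost of maintaining the extra wiring between them.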