Filebeat fields mapping to Logstash


(Dan Poltawski) #1

I'm currently evaluating the Elastic Stack (6.1.2) and trying to get going with an initial basic config. I see that Filebeat comes with field mappings defined for the core modules (system, apache, etc.), and as far as I can tell, if I configure Filebeat to send logs to Elasticsearch directly, those records will be inserted with the useful predefined mappings.

However, I'd like the power of being able to manipulate other logs with Logstash, and it seems that if I put Logstash in the middle I don't get the same benefit. I can't see a way of achieving the same result without manually adding all the grok patterns myself. Is there a better way than defining them all by hand?

My logstash config is currently exactly as mentioned in the docs:

input {
  beats {
    port => 5044
  }
}

# The filter part of this file is commented out to indicate that it is
# optional.
# filter {
#
# }

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}" 
    document_type => "%{[@metadata][type]}" 
  }
}

(Dan Poltawski) #2

I've just discovered the ingest/pipeline.json files, which I suppose I could manually transcribe into the Logstash config. I'm just wondering if I'm missing another trick.


(Steffen Siering) #3

The pipeline.json files are for use with the Elasticsearch Ingest Node, not Logstash. The pipeline name is passed to Logstash via [@metadata][pipeline], so you can either configure the Elasticsearch output to use the pipeline as given by Filebeat, or parse in Logstash and ignore the pipeline.
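For the first option, a minimal sketch of the Logstash output section, using the pipeline option of the elasticsearch output plugin (which supports sprintf references) to hand each event to the ingest pipeline named by Filebeat:

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    # Use the ingest pipeline that Filebeat selected for this event
    pipeline => "%{[@metadata][pipeline]}"
  }
}

With this in place, Logstash forwards the event and Elasticsearch Ingest Node does the parsing, so no grok patterns are needed in Logstash itself.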

There are some Logstash docs explaining how to use Filebeat modules with Logstash. Also check out the sample pipeline configurations. Instead of matching on fileset.module, one can start a pipeline condition with:

if [@metadata][pipeline] == "<pipeline-name>" {
  mutate {
    remove_field => ["[@metadata][pipeline]"]
  }

  ...
}

This way you can fall back to the ingest pipeline configurations from Filebeat, but have some of the pipelines converted to and executed in Logstash.

For the template, you can ask Filebeat to export a template.json file, which can then be loaded via Logstash or curl to manage the actual template.
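A sketch of the curl route, assuming Elasticsearch on localhost:9200 and a template name matching the index pattern (the file and template names here are illustrative):

# Export the index template from Filebeat
filebeat export template > filebeat.template.json

# Load it into Elasticsearch by hand
curl -XPUT -H 'Content-Type: application/json' \
  http://localhost:9200/_template/filebeat-6.1.2 \
  -d@filebeat.template.json

Since manage_template is set to false in the Logstash output above, loading the template this way keeps the Filebeat-defined mappings even though the events arrive via Logstash.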


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.