[Filebeat] Huge mapping (nearly 5k fields) when ingesting logs

Hello,

I am currently experimenting with Filebeat and noticed that, when Filebeat creates the standard indices, they come with a huge number of field mappings.

To me it looks like the index template specifies every possible field for every possible Filebeat module.
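For reference, this is roughly how I counted the fields, with a quick script against my local cluster (the connection details and credentials are just my setup, taken from the config below):

    import os
    from elasticsearch import Elasticsearch

    # Same connection details as in my filebeat.yml (credentials from the environment)
    es = Elasticsearch(
        "http://elasticsearch:9200",
        http_auth=(os.environ["ELASTIC_USERNAME"], os.environ["ELASTIC_PASSWORD"]),
    )

    def count_leaf_fields(properties):
        """Recursively count the concrete (leaf) fields in a mapping."""
        total = 0
        for field in properties.values():
            if "properties" in field:      # object field: recurse into it
                total += count_leaf_fields(field["properties"])
            else:                          # leaf field (keyword, long, ip, ...)
                total += 1
        return total

    for index, body in es.indices.get_mapping(index="filebeat-*").items():
        print(index, count_leaf_fields(body["mappings"].get("properties", {})))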

Is this a configuration issue on my end?
My filebeat.yml looks like this:

    filebeat.inputs:
    - type: log
      paths:
        - /usr/share/data/access.log
      # send events through my custom ingest pipeline in Elasticsearch
      pipeline: "access_combined_wcookie_parsing_pipeline"
      processors:
        # tag events from this input so they can be told apart later
        - add_tags:
            tags: ["access_combined_wcookie"]

    output.elasticsearch:
      hosts: ["elasticsearch:9200"]
      username: "${ELASTIC_USERNAME}"
      password: "${ELASTIC_PASSWORD}"

Any ideas why this happens? Any fixes?

Best regards,
Mo

I suppose it is meant to work this way. All the fields need to be set up in advance (that is how the solution is designed). If you would prefer to have only the relevant fields, please take a look at Fleet/Ingest Management in Kibana. It is the agent-based environment, where only the parts you really need are installed automatically (e.g. the subset of fields used by the enabled integrations).
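You can check this yourself by pulling the index template Filebeat installs. A quick sketch (the template name follows the default filebeat-&lt;version&gt; pattern, so adjust it to your version):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://elasticsearch:9200")

    # Default template name follows the Beat version, e.g. "filebeat-7.9.1";
    # check `GET _cat/templates/filebeat*` if yours differs.
    template = es.indices.get_template(name="filebeat-7.9.1")
    props = template["filebeat-7.9.1"]["mappings"]["properties"]

    # The top-level groups roughly correspond to the modules (apache, mysql, aws, ...)
    print(sorted(props.keys()))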

Hey, thanks for the answer.
Having thought about it for a while, it makes sense.
The fields also seem to be grouped by type (mysql, apache, etc.), which makes it clearer now.

As you said, Filebeat has to create every possible type mapping because all the logs flow into the same index.

Though, imagine apache, mysql, and aws logs are all stored in the same filebeat-7.9.1-2020-10-16 index, and I create an index pattern in Kibana, e.g. filebeat*. Is there a way to filter the search for apache logs only?
It seems like a lot to search through all possible Filebeat logs and then filter based on the type.

Am I missing something?
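The closest I have found so far is filtering on the tag my own input adds (see the add_tags processor in my config above); a sketch of what I mean, though I am not sure this is the intended approach for module data:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://elasticsearch:9200")

    # Filter on the tag set by my add_tags processor; presumably the modules
    # set similar identifying fields, but I am not sure that is the intended
    # way to narrow a search to one log type.
    resp = es.search(
        index="filebeat-*",
        body={"query": {"term": {"tags": "access_combined_wcookie"}}},
    )
    print(resp["hits"]["total"])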
