Why does Filebeat create so many fields in the Elasticsearch index?

Hi, all. I'm a newbie to ELK. We use Filebeat to collect nginx logs and output them to Elasticsearch with the default template, but when I check the index in Kibana I see it contains 1148 fields. Why does Filebeat create so many fields?

Here is part of my Filebeat config:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /data/log/nginx/access.log
  close_inactive: 10m
  json.keys_under_root: true
  json.overwrite_keys: true
  fields_under_root: true
  fields:
    app_id: api-ngx

output.elasticsearch:
  hosts: ["10.0.1.6:9200", "10.0.1.47:9200", "10.0.1.48:9200"]
  indices:
    - index: "api-ngx-%{+yyyy.MM.dd}"
      when:
        contains:
          app_id: api-ngx

It's due to the default index template, which defines mappings for every field Filebeat and its modules can produce, not just the fields present in your nginx logs.
I would suggest parsing each log type with Logstash and removing all the fields you don't want.
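If you'd rather handle this in Filebeat itself, one option is to load a trimmed, hand-written template instead of the default one, and optionally drop event fields you never use. A minimal sketch for filebeat.yml, assuming a custom template file; the path, template name, and field names below are placeholders, not taken from this thread:

# Load a hand-written index template instead of Filebeat's default one.
setup.template.enabled: true
setup.template.overwrite: true
setup.template.json.enabled: true
setup.template.json.path: "/etc/filebeat/api-ngx-template.json"   # placeholder path
setup.template.json.name: "api-ngx"                               # placeholder name

# Optionally trim event fields before they are shipped.
# The field names here are examples; list the ones you don't need.
processors:
  - drop_fields:
      fields: ["input.type", "log.offset"]
      ignore_missing: true   # supported on recent Filebeat versions

Note that the field count Kibana reports comes from the index mapping (i.e. the template), so trimming the template is what actually reduces that number; drop_fields only shrinks the individual documents.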

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.