Filebeat + CSV - too many fields indexed

Hi,
I've created this flow:
Filebeat (*.csv) -> Elasticsearch (cloud) -> Kibana

Ingest pipeline:
    PUT _ingest/pipeline/stat_csv_parser
    {
      "description" : "CSV Parser",
      "processors" : [
        {
          "grok": {
            "field": "message",
            "patterns": ["%{NUMBER:eventId},%{DATA:contentName},%{NUMBER:time},%{NUMBER:duration}"],
            "ignore_missing": true
          }
        },
        {
          "date":{
            "field":"time",
            "formats": ["UNIX_MS"]
          }
        }
      ],
      "on_failure" : [
        {
          "set" : {
            "field" : "error",
            "value" : " - Error processing message - "
          }
        }
      ]
    }
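
Before wiring the pipeline into Filebeat, it can be checked against one of the sample CSV lines with the `_simulate` API - a quick sketch:

```json
POST _ingest/pipeline/stat_csv_parser/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "1,movie2,1560411512478,120"
      }
    }
  ]
}
```

The response shows the parsed `eventId`, `contentName`, `time`, and `duration` fields (or the `error` field from the `on_failure` block if the grok pattern doesn't match). Note that `1560411512478` is an epoch timestamp in milliseconds, so the date processor needs the `UNIX_MS` format rather than `UNIX`.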

The CSV data is in this form:

1,movie2,1560411512478,120
1,movie1,1560411513234,10

My problem is that in addition to these 4 fields, I get approximately 33 fields in my index.
I didn't touch fields.yml.
Do I have to define my own fields.yml to get a clean list of index fields?

This is my filebeat.yml:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/stat/*.csv

setup.template:
  enabled: true
  name: "myIndex"
  pattern: "myIndex-*"
  overwrite: true

cloud.id: "xxxxxxxxxxxxxxxxx"
cloud.auth: "elastic:xxxxxxxxxxxxxxxxx"
output.elasticsearch:
  enabled: true
  index: "myIndex-%{[agent.version]}-%{+yyyy.MM.dd}"
  pipeline: "stat_csv_parser"
  indices:
    - index: "myIndex"
      mappings:
        default: "myIndex"
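
(For what it's worth, most of the extra fields likely come from Filebeat's own metadata - `host.*`, `agent.*`, `log.*`, `input.*`, `ecs.*`. If they aren't needed, a `drop_fields` processor in filebeat.yml can remove them before indexing - a sketch, not tested against this setup:)

```yaml
processors:
  - drop_fields:
      fields: ["host", "agent", "log", "input", "ecs"]
      ignore_missing: true
```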

What would be the best approach to ingest CSV data like the sample above?
Thank you in advance!

What are the other 33 fields you are seeing in your index?

Shaunak

This is the first screen of the Kibana index:

[screenshot: Kibana index pattern fields list]

And one more:

[screenshot: remaining index fields]

Ok, I found a workaround.
I added `"dynamic": false` to the index creation.
But perhaps there is a better way that still works with automated index creation?
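
A sketch of what that workaround looks like as an explicit index creation (field types assumed from the grok pattern above):

```json
PUT myindex
{
  "mappings": {
    "dynamic": false,
    "properties": {
      "eventId":     { "type": "long" },
      "contentName": { "type": "keyword" },
      "time":        { "type": "date" },
      "duration":    { "type": "long" }
    }
  }
}
```

With `"dynamic": false`, Elasticsearch still stores any extra fields in `_source` but does not index or map them, so only the four declared fields appear in the field list. One caveat: Elasticsearch index names must be lowercase, so `myIndex` as written in the config would be rejected at index creation time.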