Mapping for Elasticsearch [easy]

I've got an easy question.

I've got this file:

# epoch, metric1, metric2, metric3

How can I tell Filebeat to send it to Elasticsearch and use the correct mapping?
My config file currently looks like this:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/file.csv

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  reload.period: 10s

setup.template.fields: "/etc/filebeat/map.yml"
setup.template.settings:
  index.number_of_shards: 1
  <snip>

cloud.auth: <snip>

processors:
  - drop_fields:
      fields: ["type", "beat.version", "_type", "_score", "_id", "@version", "offset", "host", "container", "input", "agent", "log"]
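For context, a fields file like map.yml uses Filebeat's fields.yml format. A minimal sketch for this CSV might look like the following; the field names come from the CSV header, but the types here are assumptions:

```yaml
- key: csv
  title: "CSV metrics"
  description: Fields parsed from the CSV metrics file.
  fields:
    - name: epoch
      type: long      # assumption: epoch seconds as an integer
    - name: metric1
      type: float     # assumption: numeric metric values
    - name: metric2
      type: float
    - name: metric3
      type: float
```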

Data is sent to Elasticsearch, and an index is created with the name filebeat-, which is undesired.

Can anyone help me out?

Hey @Kevin_Csuka,

Sorry, I think I don't fully understand the question; let me add some comments to see if they help :slight_smile:

Do you mean that an index is created with the exact name "filebeat-", or with a name that starts with filebeat-? If it is a name that starts with filebeat-, this is the expected and recommended behaviour. What name were you expecting?
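If you do want a different index name, it can be overridden in the Elasticsearch output; note that when you change the index you also have to set the template name and pattern. A sketch, using a hypothetical name csvmetrics:

```yaml
output.elasticsearch:
  # csvmetrics is a made-up example name, pick your own
  index: "csvmetrics-%{[agent.version]}"

setup.template.name: "csvmetrics"
setup.template.pattern: "csvmetrics-*"
```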

To what mapping do you refer? If you mean the mapping of the fields in the CSV file, you can use the decode_csv_fields processor, which parses each CSV line and puts the values into a field as an array. Once they are in an array, you can use the extract_array processor to map each position to a specific field.
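For your file, the two processors could be combined roughly like this. A sketch: it assumes each CSV line arrives in the message field, and decoded.csv is a hypothetical target field name:

```yaml
processors:
  # Parse the raw CSV line in "message" into an array stored at "decoded.csv"
  - decode_csv_fields:
      fields:
        message: decoded.csv
      separator: ","
      ignore_missing: true
  # Map each array position to a named field, matching the header
  # "# epoch, metric1, metric2, metric3"
  - extract_array:
      field: decoded.csv
      mappings:
        epoch: 0
        metric1: 1
        metric2: 2
        metric3: 3
```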

You can find a good example of the use of decode_csv_fields with extract_array in the panw module.

