Mapping for Elasticsearch [easy]

I've got an easy question.

I've got this file:

# epoch, metric1, metric2, metric3
1576425930,0.0718,0.0127,1

How can I tell Filebeat to send it to Elasticsearch and use the correct mapping?
My config file currently looks like this:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/file.csv
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  reload.period: 10s
setup.template.name: "test"
setup.template.fields: "/etc/filebeat/map.yml"
setup.template.settings:
  index.number_of_shards: 1
cloud.id: <snip>
cloud.auth: <snip>
processors:
 - drop_fields:
     fields: ["type", "beat.name", "beat.version", "_type", "_score", "_id", "@version", "offset", "host", "container", "input", "host", "agent", "log", "_score"]

Data is sent to Elasticsearch, and an index is created with the name filebeat-, which is undesired.

Can anyone help me out?

Hey @Kevin_Csuka,

Sorry, I don't think I fully understand the question, so let me add some comments to see if they help 🙂

Do you mean that an index is created with the exact name "filebeat-", or with a name that starts with filebeat-? If it is a name that starts with filebeat- this is the expected and recommended behaviour. What name were you expecting?
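
If you do want a different index name, you need to change both the index name in the output and the template settings so they match. A minimal sketch, assuming "test" as the name (the exact name and pattern are up to you):

# In 7.x, ILM overrides a custom index name, so disable it if you set your own
setup.ilm.enabled: false
output.elasticsearch:
  # Interpolating the agent version and date in the name is a common convention
  index: "test-%{[agent.version]}-%{+yyyy.MM.dd}"
setup.template.name: "test"
setup.template.pattern: "test-*"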

Which mapping do you mean? If you mean the mapping of the fields in the CSV file, you can use the decode_csv_fields processor, which parses each CSV line and puts the values into a field as an array. Once they are in an array, you can use the extract_array processor to map each position in the array to a specific field.

You can find a good example of the use of decode_csv_fields with extract_array in the panw module: https://github.com/elastic/beats/blob/v7.5.0/x-pack/filebeat/module/panw/panos/config/input.yml#L23
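
For your file, a minimal sketch of these two processors might look like this (the intermediate field name "csv" is an assumption; the target field names are taken from the header comment in your file):

processors:
  - decode_csv_fields:
      fields:
        # Parse the raw line in "message" into an array stored under "csv"
        message: csv
      separator: ","
  - extract_array:
      field: csv
      mappings:
        # Map array positions to the columns from your header comment
        epoch: 0
        metric1: 1
        metric2: 2
        metric3: 3
  - drop_fields:
      # The intermediate array is no longer needed once extracted
      fields: ["csv"]

You may also want to add exclude_lines: ['^#'] to your log input so the header comment line itself is not ingested.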

