Filebeat+CSV

Hi,
I'm quite frustrated about how to handle this. The documentation is confusing: it describes several ways to do things in detail, but gives no sufficient overview.
Alright, so here are my questions. I would highly appreciate some guidance:

I want to set up a pipeline that reads a CSV file, indexes changes live, and visualizes them in Kibana. So far I have only managed to get this working in a static way: once the CSV is read, that's it, nothing is updated afterwards. I need the pipeline to constantly check the CSV for changes (every 5 seconds or so; as far as I can tell, Filebeat's default scan_frequency is 10 s, which would be close enough).
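
For context, here is a minimal sketch of the filebeat.yml I have been experimenting with; the CSV path and Elasticsearch host are placeholders:

```yaml
filebeat.inputs:
  - type: log                  # treat the CSV as a plain log file, read line by line
    paths:
      - /path/to/data.csv      # placeholder path to my CSV
    scan_frequency: 5s         # how often Filebeat checks the file for new lines

output.elasticsearch:
  hosts: ["localhost:9200"]    # placeholder host
```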

  1. I experimented with both Logstash and Filebeat. For some reason, including Logstash makes things trickier. I want to keep my pipeline as simple as possible, which is why I want to stick to Filebeat only. Is this possible?

  2. How do I map fields through Filebeat? The setup.template.fields parameter is not having any effect.
    I also want a custom index template name (not the default filebeat-version-date one), which is what I set in the config file. Currently, every time I fire up Filebeat I manually delete the old index template and clean the data and logs folders. With the setup.template.pattern and setup.template.name parameters set, nothing happens. Only when I disable them does the index template appear, under the default name, in Kibana's Elasticsearch management section. (See the config sketch right after this list.)
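
From the docs I gather that when index lifecycle management (ILM) is enabled, Filebeat ignores setup.template.name and setup.template.pattern, so perhaps ILM has to be disabled for a custom template name to take effect? Here is a sketch of what I mean; the template name, index name, fields path, and host are placeholders:

```yaml
setup.ilm.enabled: false            # without this, the name/pattern below are apparently ignored
setup.template.name: "mycsv"        # placeholder template name
setup.template.pattern: "mycsv-*"   # indices this template should apply to
setup.template.overwrite: true      # replace an existing template of the same name
setup.template.fields: "custom-fields.yml"  # placeholder path to custom field definitions

output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "mycsv-%{+yyyy.MM.dd}"     # custom index name; must match setup.template.pattern
```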

I have checked every single forum, post, and tutorial I could find. None of them are consistent; it's all confusing, with no clear path. Just as an example, some sources say filebeat.prospectors while others say filebeat.inputs (as far as I can tell, prospectors was renamed to inputs in Filebeat 6.3, which would explain that particular discrepancy).

How can I realize the above pipeline EXACTLY, using only Filebeat? If Logstash really has to be involved, then again: how exactly do I do that? The steps mentioned in the tutorials are not fully working for me.

The CSV I have doesn't contain any log info, but handling it as a log file seems to be the best approach (it needs to be ingested as a log-type input, as mentioned in a forum reply).

All my settings, config, and YAML files look good to me. Why is getting a live feed from a CSV such a tricky thing to achieve?

Most importantly, I have a previously created index template and mappings, created through the drag-and-drop CSV import option in Kibana.
How can I make a new CSV file get collected and continuously monitored by Filebeat, indexed by Elasticsearch, and mapped using the previously defined mapping? I have tried it all; it's probably something small that needs to be set somewhere.
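
My assumption, based on the docs, is that I should stop Filebeat from loading its own template and simply write into indices whose names match the index pattern of the template Kibana already created; I don't know whether this is the intended approach. A sketch, with placeholder names:

```yaml
setup.ilm.enabled: false
setup.template.enabled: false     # don't load Filebeat's own template;
                                  # rely on the mapping Kibana already created
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "mycsv-%{+yyyy.MM.dd}"   # placeholder; must match the existing template's pattern
```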

Could you please provide consistent, step-by-step guidance on how to achieve this? Please include any operations that need to be done manually, like deleting previous templates, if that is still necessary even when setup.template.overwrite is set to true.

Can someone please reply?!

For parsing CSVs, do I need Filebeat + Logstash, or is Filebeat alone enough?
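
From what I can tell, Filebeat alone only ships each line in a single message field, but it does seem to have a decode_csv_fields processor, plus extract_array for naming the columns, so maybe the parsing can stay in Filebeat without Logstash? A sketch of what I mean, with made-up column names:

```yaml
processors:
  - decode_csv_fields:
      fields:
        message: csv_row       # parse the raw line into an array of column values
      separator: ","
      trim_leading_space: true
  - extract_array:
      field: csv_row
      mappings:                # hypothetical column names, for illustration only
        timestamp: 0
        sensor: 1
        value: 2
  - drop_fields:
      fields: ["csv_row"]      # drop the intermediate array field
```

If that is the wrong approach and an Elasticsearch ingest pipeline is the intended way, a pointer to the right processor would already help.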
