Hi,
I'm quite frustrated about how to handle this. The documentation is confusing: it covers several ways to do things in detail, but gives no sufficient overview.
Alright, so here are my questions. I would highly appreciate some guidance:
I want to set up a pipeline that reads a CSV file, indexes changes live, and visualizes them in Kibana. So far I have only managed to do this statically: once the CSV is read, that's it, nothing is ever updated. I need the pipeline to constantly check the CSV for changes (every 5 seconds or so, which as far as I can tell is close to Filebeat's default scan interval).
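For reference, here is a minimal sketch of the input I am using (the path is a placeholder, and I am assuming current Filebeat 7.x syntax):

```yaml
filebeat.inputs:
  - type: log                      # treat the CSV as a plain log file
    enabled: true
    paths:
      - /path/to/data.csv          # placeholder path to my CSV
    scan_frequency: 5s             # how often Filebeat scans for new/changed files
    exclude_lines: ['^col1,col2']  # hypothetical: skip the CSV header row
```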
-
I have experimented with both Logstash and Filebeat. For some reason, including Logstash makes things more complicated. I want to keep my pipeline as simple as possible, which is why I want to stick to Filebeat only. Is this possible?
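My understanding is that Filebeat can ship directly to Elasticsearch, so Logstash could be dropped entirely; this is the output block I am assuming (hosts are placeholders):

```yaml
# Ship events straight to Elasticsearch, no Logstash in between
output.elasticsearch:
  hosts: ["localhost:9200"]   # placeholder Elasticsearch host

setup.kibana:
  host: "localhost:5601"      # placeholder; used by `filebeat setup` for Kibana assets
```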
-
How do I map fields through Filebeat? The setup.template.fields parameter has no effect for me.
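This is what I am trying, based on how I read the docs (the file name is a placeholder):

```yaml
setup.template.fields: "${path.config}/my-fields.yml"  # placeholder path to a custom fields file
setup.template.overwrite: true                         # push the template again on startup
```

And my-fields.yml would look roughly like this, if I understand the fields.yml format correctly (key, field names, and types are placeholders):

```yaml
- key: mydata
  title: mydata
  description: Fields parsed from my CSV
  fields:
    - name: sensor_id
      type: keyword
    - name: temperature
      type: float
```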
I want a custom index template name (not filebeat-<version>-<date>), which is what I set in the settings file. At the moment, every time I start Filebeat I first manually delete the old index template and clean out the data and logs folders. With the setup.template.pattern and setup.template.name parameters set, nothing happens; only when I disable them does the index template appear, under the default name, in Kibana's Elasticsearch management section.
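For what it's worth, here is the combination I have been trying (names are placeholders); I have also read that in 7.x ILM silently overrides custom index and template names unless it is disabled, which might be what is biting me:

```yaml
setup.ilm.enabled: false            # reportedly ILM overrides custom index names in 7.x
setup.template.name: "mydata"       # placeholder template name
setup.template.pattern: "mydata-*"  # must cover the index name below
setup.template.overwrite: true      # replace the old template instead of deleting it by hand

output.elasticsearch:
  hosts: ["localhost:9200"]         # placeholder host
  index: "mydata-%{+yyyy.MM.dd}"    # custom index name matching the pattern
```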
I have checked every forum post and tutorial I could find. None of them are consistent; it's all confusing, with no clear path. Just as an example, some sources say "filebeat.prospectors" while others say "filebeat.inputs" (apparently the former is just the pre-6.3 name for the latter).
How can I realize the above pipeline EXACTLY, using only Filebeat? And if Logstash really must be involved, again, how exactly do I do that? The steps described in the tutorials do not fully work for me.
The CSV I have doesn't contain any log data, but treating it as a log file is apparently the best approach (a forum reply mentioned it needs to be set as the log input type, as in the input sketch above).
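If I understand the docs correctly, recent Filebeat versions ship decode_csv_fields and extract_array processors that could parse the columns without Logstash; here is a sketch of what I think that would look like (column names and positions are placeholders):

```yaml
processors:
  # Parse the raw CSV line in "message" into an array of strings
  - decode_csv_fields:
      fields:
        message: csv_columns
      separator: ","
  # Map array positions to named fields
  - extract_array:
      field: csv_columns
      mappings:
        sensor_id: 0
        temperature: 1
  # Drop the intermediate array
  - drop_fields:
      fields: ["csv_columns"]
```

The extracted values would still be strings, I assume, so the actual types would have to come from the index template mapping.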
All my settings, config, and yml files are in place as far as I can tell. Why is getting a live feed from a CSV such a tricky thing to achieve?