I found the Logstash csv filter plugin interesting, but I'm wondering whether I need to prepare a separate config file when the column headers differ between the files I'd like to see in Kibana. Is there any auto-intelligence to detect the header fields and pass them to `columns`?
I have a dozen report files with different column names and types, so I'm looking for something that works with one single config file.
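For context, here is a minimal sketch of the static approach the csv filter supports, where the column names are spelled out in the config rather than detected. The paths and field names are hypothetical placeholders:

```conf
input {
  file {
    # Hypothetical path pattern; one pattern per report type
    path => "/var/reports/sales_*.csv"
    start_position => "beginning"
  }
}

filter {
  csv {
    separator => ","
    # Column names must be listed explicitly for this report type;
    # the header line itself is parsed like any other row unless dropped
    columns => ["date", "region", "amount"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```

With a dozen report formats this means a dozen `csv` blocks selected by conditionals on the file path, which is exactly the duplication the question is trying to avoid.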
Logstash does not have a notion of the start and end of a stream, as it was not meant for batch processing.
This sort of functionality would require the input plugin (in this case the file input) to inform the csv filter or codec that a new file has started and that its first line defines N fields with these names.
Also, if Logstash is stopped (or crashes) and restarts, we would either have to persist the header information for each file, or seek back to the start of the file to re-read the header and then seek back to where we left off.
None of this is impossible, but it would require deeper integration between Logstash components, which is unlikely to happen.