I'm a beginner in the Elastic Stack world and learning how to architect log collection for visualization.
One of the files I'm working with is a CSV file. I want to feed it into Elasticsearch to visualize in Kibana. I assume I use Filebeat for this.
Question #1 - What is Logstash, and would I need it in this scenario?
Question #2 - Is there a template config file that will get me started collecting files from a directory and feeding them to Elasticsearch?
In the not-too-distant future there will be an easier way to get the ingest pipeline config and Filebeat config than typing them out by hand.
Starting in version 7.7 (not released yet, but not too far off), after importing a CSV file using the File Data Visualizer in Kibana, there will be an option to display a sample Filebeat config appropriate for CSV files with the same structure as the one you uploaded. That's the change made in https://github.com/elastic/kibana/pull/58152, and there's a screenshot in that PR. The File Data Visualizer will also leave behind the ingest pipeline appropriate for the CSV columns it saw, so you'll have nearly everything you need. (Just a few details like hostnames and passwords will need to be filled in manually.)
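For reference, the pipeline it leaves behind is built on the csv ingest processor (available since 7.6), so there is nothing extra to install. A minimal sketch of what such a pipeline might look like, created from Kibana Dev Tools — the pipeline name and column names here are made up for illustration, yours would match your file:

```
PUT _ingest/pipeline/my-csv-pipeline
{
  "description": "Parse each CSV row in the message field into named fields (names are hypothetical)",
  "processors": [
    {
      "csv": {
        "field": "message",
        "target_fields": ["timestamp", "host", "status", "bytes"],
        "ignore_missing": true
      }
    },
    {
      "remove": {
        "field": "message"
      }
    }
  ]
}
```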
As I said, you cannot do this today, but 7.7 is the next release, so it won't be long before it's possible.
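In the meantime, if you want to wire it up by hand, a Filebeat config for this use case might look roughly like the sketch below. The paths, header pattern, host, and pipeline name are placeholders you'd adjust to your setup:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # Windows path; forward slashes work in Filebeat configs
      - "C:/CSV/*.csv"
    # Skip the header row if the file has one (pattern is hypothetical)
    exclude_lines: ['^timestamp,']

output.elasticsearch:
  hosts: ["localhost:9200"]
  # Route each event through the ingest pipeline that parses the CSV columns;
  # the pipeline runs inside Elasticsearch, after Filebeat ships the raw line
  pipeline: "my-csv-pipeline"
```

With this, Filebeat tails the files and sends each line as a `message` field, and the ingest pipeline splits that line into columns on the Elasticsearch side.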
Thank you @dadoonet, this is helpful. Excuse my ignorance, but for my education, can you confirm the flow?
The CSV file is on my PC in a C:/CSV folder.
The file is picked up by Filebeat and sent to Elasticsearch.
The ingestion part is where I have a few grey areas. Is there anything I have to install to use the csv processor? Does this take place before or after the Filebeat import?
One more question to help connect the dots (I haven't ingested any files yet; this will be my first time using a Windows installation of the Elastic Stack):
Would the pipeline be in a different configuration file and run beforehand?
I've been reading and watching YouTube videos; there seem to be many ways to do one half of the configuration, but no step-by-step guide for my end-to-end use case.