You can use an http_poller input to fetch the file from S3 (or a file input if the file lives on a local file system), a csv filter to split the columns into separate fields, and an elasticsearch output to ship the resulting documents to Elasticsearch.
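A minimal pipeline along these lines might look like the sketch below. The S3 URL, polling schedule, column names, and index name are all placeholders you would replace with your own values (and the URL assumes the object is readable over plain HTTPS):

```
input {
  http_poller {
    # Hypothetical, publicly readable S3 URL for the CSV file
    urls => {
      my_csv => "https://my-bucket.s3.amazonaws.com/data.csv"
    }
    schedule => { every => "1h" }
    codec => "line"            # emit one event per CSV row
  }
}

filter {
  csv {
    separator => ","
    columns => ["id", "name", "value"]   # hypothetical column names
    skip_header => true
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "my-csv-data"
  }
}
```

If the file is local instead, swap the http_poller input for a file input pointing at the file's path; the filter and output stay the same.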