Currently, I have a Logstash pipeline that is forwarding logs to Elasticsearch. I am also planning to send the log files to S3 in batches using the aws s3 cli. What are some drawbacks to this approach? Is it possible to keep track of which logs have already been uploaded to S3?
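For context, here is a minimal sketch of the kind of batch upload I have in mind (the log directory and bucket name are just placeholders). As I understand it, `aws s3 sync` only copies files that are new or have changed since the last run, which is part of why I'm asking whether that is enough to track what has already been uploaded:

```bash
#!/usr/bin/env bash
# Hypothetical batch upload, e.g. run from cron; paths and bucket are placeholders.
LOG_DIR=/var/log/myapp
BUCKET=s3://my-log-archive-bucket/logs

# `sync` compares size and last-modified time against what is already in the
# bucket, so repeated runs should only upload new or modified log files.
aws s3 sync "$LOG_DIR" "$BUCKET" --exclude "*" --include "*.log"
```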
I might have bulldozed through the question. I'm looking to implement an AWS S3 storage option for my ES cluster. Any insights into the pros/cons of using S3 as a storage option for an ELK stack? Does it depend on data load?
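To make that concrete, this is roughly what I mean by an "S3 storage option", assuming the snapshot route via the repository-s3 plugin (the repository and bucket names are placeholders); I'm not sure if that's the direction people usually take:

```bash
# Hypothetical: register an S3 snapshot repository on the cluster
# (requires the repository-s3 plugin and AWS credentials configured for it).
curl -X PUT "localhost:9200/_snapshot/my_s3_repo" \
  -H 'Content-Type: application/json' \
  -d '{
        "type": "s3",
        "settings": { "bucket": "my-log-archive-bucket" }
      }'
```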