Hi, I was looking to set up a log shipping system from our servers to S3 for storage. However, looking at the current (5.0/master) version of Filebeat, it seems there is no output mechanism to load the logs into S3 directly.
If I were to set up log shipping to S3, would I have to set up an intermediate Logstash server using the Logstash s3 {} output plugin? I was hoping to avoid that, but I want to make sure that is the correct way to do it.
Your options are to push from Beats to Logstash and have Logstash write to S3, or to use Logstash itself to tail the log files.
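For the first option, a minimal Logstash pipeline sketch could look like the following. The port, region, bucket, and prefix are assumptions to adapt to your own setup:

```
input {
  beats {
    port => 5044                 # Filebeat's logstash output points here
  }
}

output {
  s3 {
    region => "us-east-1"        # assumed region and bucket name
    bucket => "my-log-archive"
    prefix => "filebeat/"
    time_file => 5               # flush a new S3 object roughly every 5 minutes
    codec => "json_lines"
  }
}
```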
As we have put quite some effort into the supported outputs and are very cautious about adding new ones, I think the chances of getting a PR that adds an S3 output merged are quite low.
Alternatively, you can still implement S3 support yourself and compile a customized Filebeat. See this post for an example. You can import and instantiate any Beat in your own main function.
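As a rough sketch, a custom main based on the 5.x Filebeat source tree might look like this. The exact import paths and the beat.Run signature can differ between releases, and the S3 output package shown in the comment is hypothetical:

```go
package main

import (
	"os"

	"github.com/elastic/beats/filebeat/beater"
	"github.com/elastic/beats/libbeat/beat"
	// A blank import of your own output package could register an "s3"
	// output from its init() function, following the same pattern the
	// built-in libbeat outputs use.
	// _ "github.com/yourorg/filebeat-s3-output" // hypothetical package
)

func main() {
	// Run Filebeat under your own binary name; this mirrors the
	// upstream filebeat main() in the 5.x source.
	if err := beat.Run("filebeat-s3", "", beater.New); err != nil {
		os.Exit(1)
	}
}
```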
I am not a big fan of using Beats, except maybe Packetbeat.
You don't need a Logstash server just to receive events from Filebeat; get rid of Filebeat entirely.
Logstash has all the features you need, so you can use Logstash on its own to forward your log files directly to S3.
While Logstash does use more memory, memory is cheap these days, and you can probably tune it down to handle your volume.
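A minimal sketch of that Logstash-only approach, assuming a hypothetical log path and bucket name you would replace with your own:

```
input {
  file {
    path => "/var/log/myapp/*.log"   # hypothetical path; point at your own logs
    start_position => "beginning"
  }
}

output {
  s3 {
    region => "us-east-1"
    bucket => "my-log-archive"       # assumed bucket name
    prefix => "app-logs/"
    size_file => 10485760            # rotate the temp file at ~10 MB
    time_file => 5                   # or every 5 minutes, whichever comes first
  }
}
```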