We are storing our audit logs in a GCS bucket. We would like to ingest them into Elasticsearch on demand (not on a regular schedule) using Filebeat. I have checked the S3 option, which lets us use S3-compatible storage as an input via providers.
I'm using the following configuration, but it is not writing any data, even though the configuration passes Filebeat's config test.
I suspect my input configuration is wrong in some way. Please check the following and help me understand what's wrong.
Thanks @legoguy1000
I tried the above configuration; however, I couldn't understand why it still checks for bucket_arn or queue_url even though I provided non_aws_bucket_name.
Error:
WARN [aws-s3] awss3/config.go:54 neither queue_url nor bucket_arn were provided, input aws-s3 will stop
INFO [crawler] beater/crawler.go:141 Starting input (ID: 17738867761700079737)
INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 1
INFO [input.aws-s3] compat/compat.go:111 Input aws-s3 starting {"id": "F62D1E3EA5C30879"}
INFO [input.aws-s3] compat/compat.go:124 Input 'aws-s3' stopped {"id": "F62D1E3EA5C30879"}
Should we make any changes to the input type? Perhaps changing aws-s3 to gcp-gcs (I'm not sure).
My apologies, the ability to poll non-AWS buckets was only added in 8.0.0 and wasn't backported to 7.x. You'll have to wait until 8.0 is released to be able to do what I explained. But to provide a bit more clarification: you can't just change the input names. There is a specific list of inputs that can be used; see Configure inputs | Filebeat Reference [7.16] | Elastic.
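For reference, once you are on 8.0, a non-AWS bucket configuration would look something like the sketch below. The bucket name and credential variables are placeholders, and pointing the input at GCS via its S3-compatible XML endpoint with HMAC interoperability keys is an assumption on my part, not a tested setup:

```yaml
filebeat.inputs:
  - type: aws-s3
    # Poll an S3-compatible (non-AWS) bucket; requires Filebeat >= 8.0.
    non_aws_bucket_name: my-audit-logs            # placeholder bucket name
    bucket_list_interval: 300s
    number_of_workers: 5
    # Assumption: GCS's S3-compatible XML API endpoint, used with HMAC keys.
    endpoint: https://storage.googleapis.com
    access_key_id: '${S3_ACCESS_KEY_ID}'          # GCS HMAC interoperability key
    secret_access_key: '${S3_SECRET_ACCESS_KEY}'

output.elasticsearch:
  hosts: ["localhost:9200"]
```

With non_aws_bucket_name set (instead of queue_url or bucket_arn), the input polls the bucket directly on each bucket_list_interval rather than reacting to SQS notifications, which is why the 7.x warning about queue_url/bucket_arn goes away on 8.0.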