In order to send my Elasticsearch indices to AWS S3, I find myself having to install the cloud-aws plugin on my server.
For that, I have to run this command from the directory /usr/share/elasticsearch: sudo bin/plugin install elasticsearch/elasticsearch-cloud-aws/2.7.1
But I have two problems with this command. The first is that the /bin/plugin file does not exist for me; I only have a plugins folder and a /bin/elasticsearch-plugin file, so I guess the correct file is /bin/elasticsearch-plugin.
So I tried to run this command: sudo bin/elasticsearch-plugin install elasticsearch-cloud-aws/2.7.1
I put 2.7.1 because it seems to me that this is the latest version of the plugin, and I recently updated Elasticsearch, so the versions should be compatible.
The result I get with this command is:
sudo bin/elasticsearch-plugin install elasticsearch-cloud-aws/2.7.1
-> Installing elasticsearch-cloud-aws/2.7.1
-> Failed installing elasticsearch-cloud-aws/2.7.1
-> Rolling back elasticsearch-cloud-aws/2.7.1
-> Rolled back elasticsearch-cloud-aws/2.7.1
A tool for managing installed elasticsearch plugins
Non-option arguments:
command
Option Description
------ -----------
-E <KeyValuePair> Configure a setting
-h, --help Show help
-s, --silent Show minimal output
-v, --verbose Show verbose output
ERROR: Unknown plugin elasticsearch-cloud-aws/2.7.1
Do you have any ideas that could help me solve this problem? Thank you in advance.
I currently use Graylog, so I have Graylog and Elasticsearch installed on the same server. The goal would be to keep 1 year of logs.
The server has 1 TB of disk space but it is already almost saturated, because it receives on average 30 GB per day.
The goal would be to keep a month of logs locally and then send the older ones to S3 storage.
My configuration on the AWS side should be complete: I created a bucket with my graylog folder in it, and a user named "svc-graylog" who has full rights on this bucket.
Do the s3.client.default.access_key and s3.client.default.secret_key settings need to match the user I created in AWS S3 to manage the siem bucket?
Or maybe I should create an access point on S3? I just found out about this.
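About the install error first: as far as I know, elasticsearch-cloud-aws only existed for the old 1.x releases. On anything recent enough to ship bin/elasticsearch-plugin (5.x and later), cloud-aws was split into two plugins, discovery-ec2 and repository-s3, and it is the latter that provides S3 snapshot repositories. So the install command should be:

sudo bin/elasticsearch-plugin install repository-s3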
If you are using access and secret keys, yes you need to enter them in the keystore.
That way, the S3 client used by the plugin will know how to access your S3 buckets.
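For example, from /usr/share/elasticsearch (elasticsearch-keystore prompts for each value):

sudo bin/elasticsearch-keystore add s3.client.default.access_key
sudo bin/elasticsearch-keystore add s3.client.default.secret_key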
You might have to define the region as well in elasticsearch.yml.
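Something like this line in elasticsearch.yml should do (eu-west-1 is only an example, use your bucket's region; on some versions the client endpoint setting, s3.client.default.endpoint, is used instead):

s3.client.default.region: eu-west-1

Once the plugin, keys and region are in place, you register the snapshot repository. The repository name graylog-s3 below is a placeholder, and I am guessing the bucket and folder from your description:

curl -X PUT "localhost:9200/_snapshot/graylog-s3" -H 'Content-Type: application/json' -d '
{
  "type": "s3",
  "settings": {
    "bucket": "siem",
    "base_path": "graylog"
  }
}'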
Okay, thank you. Sorry, I'm asking a lot of questions, but I can't see in the docs how to set the age cutoff, for example if I only want to send indices that are more than a month old.
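For now, the only way I can see is to snapshot a given index by hand once it is old enough, something like this (graylog_2 stands in for a month-old index and graylog-s3 for the repository, both placeholders), and maybe automate the selection with a cron script or Elasticsearch Curator:

curl -X PUT "localhost:9200/_snapshot/graylog-s3/snapshot_graylog_2?wait_for_completion=true" -H 'Content-Type: application/json' -d '
{
  "indices": "graylog_2",
  "include_global_state": false
}'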