Snapshot Lifecycle Management (SLM) is a way to schedule and define snapshot policies using the Elasticsearch API or the Kibana UI. It also lets you check existing snapshots and restore them directly from Kibana.
SLM requires a basic license. The Snapshot/Restore functionality itself does not require any license and has been available since Elasticsearch 1.4.
SLM was introduced in the 7.4.0 release and further improved in 7.5.0 (with the introduction of a retention policy).
This article explains how to set up an S3 repository on Elasticsearch targeting a single-node Minio instance, and how to define a snapshot policy with SLM.
As a prerequisite, you must be running Elasticsearch and Kibana 7.4 or 7.5 with at least a basic license.
Minio will be running on Docker. It is not mandatory to use Docker but this guide covers only this kind of setup.
As a reminder, please consider that the S3 Repository plugin is only tested with AWS S3.
Installing the S3 Repository plugin
In order to target an S3 repository, we need to install the S3 Repository plugin on all the nodes of our Elasticsearch cluster.
Each Elasticsearch node must be restarted after the installation of the plugin.
The `elasticsearch-plugin` binary is located in the installation path of Elasticsearch, which depends on the installation method you've chosen.
- Run the following command to install the plugin:

  ```
  sudo bin/elasticsearch-plugin install repository-s3
  ```

- Restart the Elasticsearch instance.
- Repeat for all the nodes.
- Verify the plugin is installed on all the nodes using the API `GET _cat/plugins?v`.
Notes:
- If you're in an air-gapped environment, please check the offline installation in the documentation.
- If you're using Docker, you can create a derived image using:

  ```
  FROM docker.elastic.co/elasticsearch/elasticsearch:7.5.0
  RUN bin/elasticsearch-plugin install -s -b repository-s3
  ```
- Remember you'll need to upgrade the plugin to keep it aligned with your Elasticsearch version as detailed in the rolling upgrades documentation.
Installing a single Minio node on Docker
For the scope of this tutorial, we'll be running a single Minio node using Docker.
This is not a production-ready setup; please follow the official Minio documentation for a safe and reliable installation.
- To start our local Minio instance, you can use:

  ```
  docker run -p 9000:9000 -e "MINIO_ACCESS_KEY=testkey" \
    -e "MINIO_SECRET_KEY=testsecret" -v /mnt/minio/data:/data \
    minio/minio server /data
  ```
- Verify it is working correctly by accessing `http://<ip of the minio instance>:9000` in your browser.
- Create a bucket by clicking the `(+)` icon at the bottom right of the web UI and selecting `Create bucket`. Type `testbucket` and press `enter`.
- A new bucket named `testbucket` will appear in the Minio web UI.
Configure the S3 Client and Repository on Elasticsearch
The S3 Repository plugin is designed to work with the AWS S3 service and its setup is documented here.
To make it work with our local Minio instance, we need to define a custom S3 client, which we will call `myminio`.
- Add the following lines to `elasticsearch.yml` on every node of the cluster:

  ```
  s3.client.myminio.endpoint: http://<ip of the minio instance>:9000
  # the following line is required since 7.5.0, see https://www.elastic.co/guide/en/elasticsearch/reference/7.x/breaking-changes-7.4.html#_the_s3_repository_plugin_uses_the_dns_style_access_pattern_by_default
  s3.client.myminio.path_style_access: true
  ```
- Add the credentials to the secure keystore. Execute these commands as the same user that runs Elasticsearch (`elasticsearch` by default):

  ```
  bin/elasticsearch-keystore add s3.client.myminio.access_key
  # testkey
  bin/elasticsearch-keystore add s3.client.myminio.secret_key
  # testsecret
  ```
- Restart every node of the cluster. The restart is necessary because of the `elasticsearch.yml` changes; the keystore settings we're using, on the other hand, are reloadable.
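As a side note: if you later rotate only the keystore credentials (without touching `elasticsearch.yml`), the S3 client secure settings can be reloaded without a restart:

```
POST _nodes/reload_secure_settings
```

This API re-reads the keystore on every node; it does not apply `elasticsearch.yml` changes, which still require a restart.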
Notes:
- Minio supports DNS-style access, but it requires you to set up an FQDN associated with the IP where Minio is running. This is out of scope for this tutorial.
Create a repository using the SLM UI in Kibana
Execute these steps as the `elastic` user (which has the `superuser` role) or check the notes below.
- Access Kibana under the `Management / Snapshot and Restore / Repositories` tab
- Select `Register a repository`
- As `Repository name`, provide an arbitrary name (e.g. `minio`)
- As `Repository type`, select `AWS S3`
- Click `Next`
- Input the `Bucket` name, which in our case is `testbucket`
- Input the `Client` name, which in our case is `myminio`
- Click `Register`
- Click `Verify repository` to ensure the S3 repository is ready to be used
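If you prefer the API over the Kibana UI, the same repository can be registered and verified with the snapshot APIs. A minimal sketch, assuming the repository name `minio` and the `myminio` client defined earlier:

```
PUT _snapshot/minio
{
  "type": "s3",
  "settings": {
    "bucket": "testbucket",
    "client": "myminio"
  }
}

POST _snapshot/minio/_verify
```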
Create a Snapshot policy
Execute these steps as the `elastic` user (which has the `superuser` role) or check the notes below.
- Access Kibana under the `Management / Snapshot and Restore / Policies` tab
- Select `Create a policy`
- Input the `Policy name`, in our case `test-snapshot`
- The `Snapshot name` will be the name of the generated snapshot. It supports date math; for this test we can use `<daily-snapshot-{now/d}>` (it will include the date in its name)
- As schedule, you can set a cron pattern or choose, for example, every `hour`
- Click `Next`
- On the `Snapshot settings` tab, keep everything as default or customize the index patterns to be included in the snapshot
- For this test, deselect `All indices, including system indices`
- Select `Use index patterns`
- Input `.kibana*`: we will snapshot all the dashboards and visualizations
- Click `Next`
- On the `Snapshot retention` tab, set an `Expiration` of `2 hours` and set `Snapshots to retain` to `1` (`Minimum count`) and `2` (`Maximum count`). This means: snapshots expire after 2 hours, but at least 1 successful snapshot is always kept (even if it has expired) and at most 2 are kept (even if they haven't expired yet)
- Click `Next`
- Check the recap and click `Create policy`
- Wait until the `Next snapshot` time, or trigger the policy manually using the `Run now` icon in the `Policies` tab (a symbol on the right side of the new policy you've created)
- A snapshot will appear in the `Management / Snapshot and Restore / Snapshots` tab, where you'll be able to check all the details
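The whole policy above can also be created via the SLM API. A sketch, assuming the `minio` repository from the previous section (the cron expression below fires at the start of every hour; adjust it to your needs):

```
PUT _slm/policy/test-snapshot
{
  "schedule": "0 0 * * * ?",
  "name": "<daily-snapshot-{now/d}>",
  "repository": "minio",
  "config": {
    "indices": [".kibana*"]
  },
  "retention": {
    "expire_after": "2h",
    "min_count": 1,
    "max_count": 2
  }
}

POST _slm/policy/test-snapshot/_execute
```

The `_execute` call is the API equivalent of the `Run now` icon in the UI.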
Notes:
- Please be aware that the `Retention` policy is enforced once per day by default (see the `Policies` tab) for all the policies. You can trigger it manually via the SLM UI or change its scheduling. You can also trigger the snapshot retention manually by issuing the request `POST _slm/_execute_retention` (documentation)
- Snapshots within the same repository are incremental
- Do not manually edit or delete the files saved in S3
- All the operations done via the SLM UI can be done via the SLM APIs (documentation). You can check the policy we've created using `GET _slm/policy/test-snapshot`
- Please follow the instructions in the documentation on how to set up a dedicated user with the rights to perform snapshots and manage SLM policies, repositories and snapshots.
- When using Minio with TLS and certificates that are not publicly trusted, you'll need to add the public CA certificate to the JVM truststore used by Elasticsearch. This is out of scope for this tutorial.
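To inspect the generated snapshots outside Kibana, you can list everything stored in the repository via the API:

```
GET _snapshot/minio/_all
```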
Recap
This tutorial should get you up and running with SLM.
Check out the other supported repositories in our documentation:
- `fs` (shared file system), which requires every Elasticsearch node to mount the same shared disk (SMB, NFS)
- `gcs`, for Google Cloud Storage repositories
- `hdfs`, for Hadoop environments
- `azure`, for Azure storage repositories