Elasticsearch HDFS Snapshot

Hi,

We are using Elasticsearch 6.8 with the Basic license.

I am taking a snapshot of all indices (including some monitoring indices) and writing the snapshot to HDFS.
Is there a way to automate this snapshot so it runs once every 24 hours, with a 14-day retention policy? That is, after the 14th day, the oldest snapshot should be cleaned up.

Any help would be greatly appreciated.

Here is my script (it creates a snapshot only once, whenever I run it manually):
PUT _snapshot/test_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://nn1:8020/",
    "path": "/user/elasticsearch/repositories/test_hdfs_repository",
    "conf.ha.zookeeper.quorum": "host1:2181,host2:2181,host3:2181",
    "conf.hadoop.http.authentication.simple.anonymous.allowed": "true",
    "conf.hadoop.security.authorization": "false",
    "conf.dfs.domain.socket.path": "/var/lib/hadoop-hdfs/dn_socket",
    "conf.dfs.ha.namenodes.esdhcluster": "nn1,nn2",
    "conf.dfs.namenode.rpc-address.cluster.nn1": "nn1:8020",
    "conf.dfs.namenode.rpc-address.cluster.nn2": "nn2:8020",
    "conf.dfs.nameservices": "npcluster",
    "conf.dfs.client.failover.proxy.provider.cluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider",
    "compress": "false",
    "chunk_size": "10mb"
  }
}

PUT _snapshot/test_hdfs_repository/snapshot_full1004?wait_for_completion=false
{
  "include_global_state": true
}

I used to automate this process with the same commands above on 6.8, but I don't remember the steps now.
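One common way to do this on 6.8 (which predates SLM) is a cron job that calls the snapshot API with a date-stamped name and deletes the snapshot from 14 days earlier. A minimal sketch, assuming the cluster answers on `localhost:9200` without auth and GNU `date` is available (adjust host, port, and security settings for your environment):

```shell
#!/bin/sh
# Hypothetical daily-snapshot script for Elasticsearch 6.8.
# Assumes the cluster is reachable on localhost:9200 and GNU date.
ES="http://localhost:9200"
REPO="test_hdfs_repository"

# Date-stamped snapshot name, e.g. snapshot_20210704
SNAP="snapshot_$(date +%Y%m%d)"
# The snapshot taken 14 days ago, due for deletion today
OLD="snapshot_$(date -d '14 days ago' +%Y%m%d)"

# Create today's snapshot (fire-and-forget, as in the manual command above)
curl -s -XPUT "${ES}/_snapshot/${REPO}/${SNAP}?wait_for_completion=false" \
  -H 'Content-Type: application/json' \
  -d '{"include_global_state": true}'

# Enforce 14-day retention by deleting the expired snapshot
curl -s -XDELETE "${ES}/_snapshot/${REPO}/${OLD}"

echo "requested ${SNAP}, pruned ${OLD}"
```

Scheduled from cron with something like `0 2 * * * /path/to/es_snapshot.sh`, this gives you the daily run and the rolling 14-day window in one place.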

Thank You.

If you upgrade to 7.13, you can benefit from this: Tutorial: Automate backups with SLM | Elasticsearch Guide [7.13] | Elastic
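For reference, an SLM policy on 7.x matching the requirements in this thread (daily run, 14-day retention) might look like the following sketch; the policy name, schedule, and min/max counts are illustrative, not from the thread:

```
PUT _slm/policy/daily-hdfs-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<daily-snap-{now/d}>",
  "repository": "test_hdfs_repository",
  "config": {
    "include_global_state": true
  },
  "retention": {
    "expire_after": "14d",
    "min_count": 1,
    "max_count": 30
  }
}
```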

Hi Dadoonet,

Thanks for your response.

Yes, I am aware of this feature in 7.8, but currently I am on 6.8. I'm not sure how to automate this process there.

I think that Curator might help then.
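For anyone landing here later: Curator can cover both halves of this on 6.8. A sketch of an action file, assuming Curator is installed and its `curator.yml` points at this cluster; the file name and the `snapshot_` prefix are assumptions, not from the thread:

```yaml
# actions.yml -- run daily from cron: curator --config curator.yml actions.yml
actions:
  1:
    action: snapshot
    description: Take a date-stamped snapshot of all indices
    options:
      repository: test_hdfs_repository
      name: snapshot_%Y%m%d
      include_global_state: True
      wait_for_completion: True
    filters:
      - filtertype: pattern
        kind: regex
        value: '.*'
  2:
    action: delete_snapshots
    description: Keep 14 days of snapshots
    options:
      repository: test_hdfs_repository
    filters:
      - filtertype: pattern
        kind: prefix
        value: snapshot_
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 14
```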