At the moment, I don't have the capacity to take a snapshot of the ingested data or the indexes as such. I'm sure that 100GB isn't enough to back up ingested data.
But thinking about disaster recovery, I would like to have a backup of the configuration, such as all .conf files, policies, index templates, Kibana configurations, and Logstash .yml files, so that in the event of a major problem, I can restore the configuration and be ready to receive logs again.
I would appreciate your suggestions: is it feasible to use “Snapshot and Restore” for this, does that feature cover what I need, or does it really not apply?
Another option, I suppose, is to make a manual copy or use a script to save these configurations and files that I need every so often.
@juancamiloll Snapshot and Restore in Elasticsearch is designed to back up cluster state (index templates, ILM policies, ingest pipelines, component templates) and index data (documents in all shards).
It does not store:

- `elasticsearch.yml`
- `jvm.options`
- `log4j2.properties`
- Security configuration (users, roles, certificates)
- Kibana saved objects
- Logstash configs, etc.
So while it can cover the logical configuration inside the cluster, it does not handle system- or app-level configuration files.
Based on your problem statement, the possible solutions could be:
**Use “Snapshot & Restore” for cluster state (logic-level backup)**
Even if you can’t afford to store all data, you can still snapshot the cluster state only.
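As a sketch, a cluster-state-only snapshot could look like this in Dev Tools (the repository name `config_backup`, the `fs` repository type, and the location path are assumptions; adjust to your storage — `"indices": "-*"` excludes all indices while `include_global_state` keeps templates, ILM policies, and ingest pipelines):

```
PUT _snapshot/config_backup
{
  "type": "fs",
  "settings": { "location": "/mnt/backups/es-config" }
}

PUT _snapshot/config_backup/state-only-1
{
  "indices": "-*",
  "include_global_state": true
}
```

Restoring from such a snapshot brings back the cluster-level logic even though no document data was stored.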
Store the other files in a Git repo so that you also get version control to track changes and history.
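A minimal, cron-able sketch of that manual-copy approach (all paths are assumptions — for a self-contained demo the script stages sample config files in a temp dir; in a real setup you would point it at `/etc/elasticsearch`, `/etc/logstash`, and `/etc/kibana` and commit the result to Git):

```shell
#!/bin/sh
# Demo staging: create sample config dirs in a temp location.
# In production, replace $BASE/etc/... with your real config paths.
BASE="$(mktemp -d)"
mkdir -p "$BASE/etc/elasticsearch" "$BASE/etc/logstash" "$BASE/etc/kibana"
echo "cluster.name: demo" > "$BASE/etc/elasticsearch/elasticsearch.yml"
echo "path.config: /etc/logstash/conf.d" > "$BASE/etc/logstash/logstash.yml"
echo "server.port: 5601" > "$BASE/etc/kibana/kibana.yml"

# Copy each config dir that exists into a dated backup folder.
DEST="$BASE/backup/$(date +%Y-%m-%d)"
mkdir -p "$DEST"
for dir in "$BASE/etc/elasticsearch" "$BASE/etc/logstash" "$BASE/etc/kibana"; do
  [ -d "$dir" ] && cp -r "$dir" "$DEST/"
done
ls "$DEST"
# In a real setup you would now:
#   cd "$DEST" && git add -A && git commit -m "config backup $(date +%F)"
```

Run it from cron (e.g. daily) and the Git history gives you both the restore point and the change log.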
**Kibana objects (dashboards, visualizations, index patterns)**
Kibana objects are not included in snapshots, but you can export them easily via the saved objects export API (note this is a Kibana endpoint, not Elasticsearch — adjust the host to yours, and the `kbn-xsrf` header is required):

```
curl -X POST "http://localhost:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"type": ["dashboard", "visualization", "index-pattern"], "includeReferencesDeep": true}' \
  > kibana-objects.ndjson
```
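For the restore side, the exported `.ndjson` file can be re-imported through the saved objects import API (host and filename are assumptions):

```
curl -X POST "http://localhost:5601/api/saved_objects/_import" \
  -H "kbn-xsrf: true" \
  --form file=@kibana-objects.ndjson
```

Keeping that `.ndjson` in the same Git repo as your `.conf`/`.yml` files gives you one place to restore everything from.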