I don't have any paid features of Elasticsearch.
The older Elasticsearch cluster is version 7.17 and the new one is 8.12.
Snapshot then restore.
Just to be more precise, I haven't upgraded from 7.17 to 8.12; these are two different clusters.
I want to move logs to a new cluster which is version 8.12, while the older one is 7.17.
Do I have to go to Snapshot and Restore in Stack Management of the older cluster, and then what?
So
- upgrade to the latest 7.17
- go to the upgrade assistant and check that everything is ok
- if you are running on cloud.elastic.co, just click the upgrade button
If not, follow the guide Upgrade Elasticsearch | Elasticsearch Guide [8.14] | Elastic
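If you prefer an API check over the Kibana UI, the deprecation info API reports the same kind of issues the Upgrade Assistant looks at; a minimal call against the 7.17 cluster would be:
GET /_migration/deprecations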
Without upgrading the 7.17 cluster to 8.12, can't I move logs from this cluster to the 8.12 cluster?
Yes you can, with snapshot & restore, which was my first answer.
- Snapshot in 7.x cluster
- Restore in 8.x cluster
What are the steps for taking a snapshot from the 7.17 version of Elasticsearch and restoring the logs in version 8.12?
Have a look at Snapshot and restore | Elasticsearch Guide [8.14] | Elastic
Basically:
- Create a repository on the 7.X cluster
- Create a Snapshot
- Share the FS folder on the new node or copy the files there over the network
- Create a repository on the 8.X cluster
- Restore the snapshot
If you are blocked at one of the steps, please share what the issue is or what you don't understand from the documentation. We'll be happy to help.
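If it helps, here is a rough sketch of the API calls behind those steps, run from Kibana Dev Tools. The repository name my_backup, the snapshot name snapshot_1, the location and the logstash-* index pattern are only examples, so adapt them to your setup:
# On the 7.17 cluster; the location must be listed under path.repo in elasticsearch.yml on every master and data node
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_backup"
  }
}
# Take the snapshot
PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true
{
  "indices": "logstash-*"
}
# On the 8.12 cluster, once the same folder is reachable there, register the repository again
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_backup"
  }
}
# Restore the indices from the snapshot
POST _snapshot/my_backup/snapshot_1/_restore
{
  "indices": "logstash-*"
}
On the 8.12 side you can also add "readonly": true to the repository settings, since you will only restore from it.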
I made changes in all the yml files of the Elasticsearch nodes, be it coordinating, data, or master:
path.repo: /mount/backups/my_backup
But it shows many failed shards in Snapshot and Restore in Kibana.
There are many indices shown in the my_backup directory on one of the data nodes,
but the UUIDs don't match any of the indices I wanted to take a snapshot of, as queried in Kibana.
The UUIDs present there are different.
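In case it's useful, this is what I run from Dev Tools to inspect the repository (assuming it is registered as my_backup):
# list every snapshot in the repository and the indices it contains
GET _snapshot/my_backup/_all
# ask every node to confirm it can access the repository location
POST _snapshot/my_backup/_verify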
Hello @dadoonet,
how are you?
I faced the same task, but I moved data from an ELK stack version 7.17.22 to an ECK stack version 8.14.3 (later I will upgrade to 8.16) managed by elastic-operator 2.14.
I had several problems:
1-
When I set up a new APM (eck-apm-server), I found that the new ECK stack manages the APM indices in a different way (the document structure seems to have changed), so we cannot reindex (remotely) from the legacy APM indices to the new ones (in eck-elasticsearch).
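For context, this is the kind of remote reindex I mean; the host, credentials and index names below are placeholders:
# the remote host must also be allowed via reindex.remote.whitelist on the destination cluster
POST _reindex
{
  "source": {
    "remote": {
      "host": "https://legacy-cluster:9200",
      "username": "elastic",
      "password": "<password>"
    },
    "index": "apm-7.17.22-transaction-*"
  },
  "dest": {
    "index": "apm-legacy-transaction"
  }
}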
2-
The ILM policy behavior, which seems weird to me.
For the logstash indices (they are fed by fluentd), I set this ILM policy:
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_age": "7d",
            "max_primary_shard_size": "5gb",
            "max_docs": 15000000
          },
          "set_priority": {
            "priority": 100
          }
        }
      },
      "warm": {
        "min_age": "10d",
        "actions": {
          "set_priority": {
            "priority": 50
          },
          "shrink": {
            "number_of_shards": 1
          },
          "forcemerge": {
            "max_num_segments": 1
          }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": {
          "set_priority": {
            "priority": 0
          },
          "allocate": {
            "number_of_replicas": 0
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
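The policy name matches what _ilm/explain reports below; to re-read the stored policy I run:
GET _ilm/policy/custome-log-lifecycle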
But when running
GET logstash-new-default-2024.12.04-000003/_ilm/explain
I got this output:
{
  "indices": {
    "logstash-new-default-2024.12.04-000003": {
      "index": "logstash-new-default-2024.12.04-000003",
      "managed": true,
      "policy": "custome-log-lifecycle",
      "index_creation_date_millis": 1733350225293,
      "time_since_index_creation": "4.78d",
      "lifecycle_date_millis": 1733350225293,
      "age": "4.78d",
      "phase": "hot",
      "phase_time_millis": 1733350225561,
      "action": "rollover",
      "action_time_millis": 1733350225761,
      "step": "check-rollover-ready",
      "step_time_millis": 1733350225761,
      "phase_execution": {
        "policy": "custome-log-lifecycle",
        "phase_definition": {
          "min_age": "0ms",
          "actions": {
            "rollover": {
              "max_age": "7d",
              "min_docs": 1,
              "max_primary_shard_docs": 200000000,
              "max_size": "15gb"
            }
          }
        },
        "version": 1,
        "modified_date_in_millis": 1732800499186
      }
    }
  }
}
As you can see, the rollover conditions are not what I set in my ILM policy.
I analyzed this more deeply and found that there is a default rollover configuration in the cluster settings:
"cluster.lifecycle.default.rollover": "max_age=auto,max_primary_shard_size=50gb,min_docs=1,max_primary_shard_docs=200000000"
According to the documentation (Data stream lifecycle | Elasticsearch Guide [8.16] | Elastic), this setting is meant for data streams that don't have specific lifecycle settings, or for indices that aren't managed by ILM.
However, in my case:
- These are regular indices (not data streams)
- They're explicitly managed by ILM with a defined policy
- They have a proper index template configuration:
{
  "name": "logstash-new",
  "index_template": {
    "index_patterns": ["logstash-new-*"],
    "template": {
      "settings": {
        "index": {
          "lifecycle": {
            "name": "custome-log-lifecycle",
            "rollover_alias": "logstash-new"
          }
        }
      }
    }
  }
}
Despite this, the _ilm/explain shows that the cluster default settings are being applied instead of my ILM policy settings. The indices aren't rolling over at my configured thresholds (15M docs, 5GB) but are using the much higher cluster defaults (200M docs, 15GB).
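A quick way to double-check the alias wiring that rollover depends on (the alias name comes from rollover_alias in the template above) is:
# exactly one logstash-new-* index should report "is_write_index": true for this alias
GET _alias/logstash-new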
Is this the expected behavior? Should cluster.lifecycle.default.rollover settings override explicit ILM policy settings for ILM-managed indices? Or is this possibly a bug in the ECK operator or Elasticsearch 8.14.3?