Upgrade from 6.8.3 to 7.X.X results in "failed to apply updated cluster state"

Hello.

We had a 6.8.3 single-node cluster with a lot of data (yes, I know that offers no fault tolerance, but it's a known limitation on our side) and I tried to upgrade it to 7.4.2.

We followed the step-by-step guide at https://www.elastic.co/guide/en/elasticsearch/reference/current//rolling-upgrades.html. When we started the node again it was elected as master (since I set discovery.type: single-node), but it then fails to apply the cluster state.
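For reference, the steps we followed were roughly the ones below (the exact package command depends on the distribution; ours uses apt):

  # disable shard allocation before stopping the node
  curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '
  { "persistent": { "cluster.routing.allocation.enable": "primaries" } }'

  # synced flush to speed up shard recovery after the restart
  curl -X POST "localhost:9200/_flush/synced"

  # stop the node, upgrade the package, start it again
  systemctl stop elasticsearch
  apt-get install elasticsearch=7.4.2
  systemctl start elasticsearch

  # re-enable shard allocation once the node is back up
  curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '
  { "persistent": { "cluster.routing.allocation.enable": null } }'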

Since then I have tried every option I could think of, including downgrading to 7.3.0 and 7.2.0, but the log messages for the failure are always the same and the cluster does not recover the indices.

The log messages from one start of the node are in the following link:

The config is the following (commented-out lines removed):
cluster.name: redroom-redacted
node.name: radb
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
network.host: 127.0.0.1

discovery.type: single-node

indices.query.bool.max_clause_count: 8192

xpack.monitoring.enabled: false

The node has 16G for -Xmx and -Xms.

I am really trying to avoid losing data.

Thank you for any help in advance.

[2019-11-08T11:11:50,087][INFO ][o.e.c.s.ClusterSettings  ] [radb] updating [cluster.metadata.unsafe-bootstrap] from [] to [true]

This indicates you have used the elasticsearch-node unsafe-bootstrap tool, and confirmed that you accepted its dire consequences:

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort.
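For context, that setting is only ever written by running the tool by hand against the stopped node, along these lines:

  # must be run while Elasticsearch is stopped, as the user that owns the data path
  bin/elasticsearch-node unsafe-bootstrap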

You have ended up with a cluster state that seems to be internally inconsistent. I don't think there's a way to fix that.

Since you followed the instructions for an upgrade you should have a snapshot from which you can restore your cluster:

  1. Back up your data by taking a snapshot!

I recommend restoring your cluster from that.
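If you have a snapshot repository registered, the restore would look something like this (the repository and snapshot names below are placeholders):

  # list the snapshots in the registered repository
  curl -X GET "localhost:9200/_snapshot/my_backup/_all"

  # restore all indices from the chosen snapshot
  curl -X POST "localhost:9200/_snapshot/my_backup/pre-upgrade-snapshot/_restore" -H 'Content-Type: application/json' -d '
  { "indices": "*", "include_global_state": false }'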

Hello.

Yes, I did use that tool, but the cluster was already displaying the same behaviour before that, with exactly the same log messages.

Those log messages appeared as soon as I restarted the node on 7.4.2, right after I upgraded the elasticsearch package from 6.8.3 to 7.4.2. I only used the tool later, when I was already out of ideas and had already downgraded to 7.3 and 7.2 with no change to my situation.

I cannot restore from a snapshot. There is not enough disk space on that machine to fully back up the data.

If I have to restore from the backups I have, I will lose about 1 month of data.

Is there no indication in the log messages of what might be wrong? I can see the node being elected master, and it sees all the indices because it lists them in a gigantic log message. If I enable TRACE logging on MetadataService (I think that's the name), I can see it writing the cluster state to all indices. I don't know why it fails.

Regards,

Ricardo Ferreira

The message indicates that this suricata-1.0.1-2019-07-23 index is in some strange half-deleted state. I have been looking at it for a while and haven't been able to work out exactly what that state is. Could you set logger.org.elasticsearch.cluster.coordination: TRACE and logger.org.elasticsearch.cluster.service: TRACE and restart your node, then share the resulting logs covering the first couple of minutes after it restarts?
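Concretely, that means adding these two lines to elasticsearch.yml before restarting:

  logger.org.elasticsearch.cluster.coordination: TRACE
  logger.org.elasticsearch.cluster.service: TRACE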

Ok I have managed to construct a test case that reproduces some behaviour that looks like the one that you are seeing. Is it possible that the suricata-1.0.1-2019-07-23 index was deleted from the 6.8.3 node shortly before you stopped the node to upgrade it?

Hello.

Sorry for the late reply and thanks for the time you're spending looking at this.

Yes, that is possible, but it should not have been very recent. It's an index from July, so it should have been deleted at 2019-10-23, since our script keeps that type of index for 2 months. The script runs every day.
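The script is essentially a daily cron job doing something along these lines (simplified here; the real script computes the pattern and cutoff per index type):

  # delete suricata daily indices older than two months (illustrative sketch)
  CUTOFF=$(date -d "2 months ago" +%Y-%m-%d)
  for IDX in $(curl -s "localhost:9200/_cat/indices/suricata-1.0.1-*?h=index"); do
    IDX_DATE=${IDX##*-1.0.1-}
    if [[ "$IDX_DATE" < "$CUTOFF" ]]; then
      curl -X DELETE "localhost:9200/$IDX"
    fi
  done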

The logs with the TRACE logging set are in the following link:

Thanks again.

Thanks, that's helpful. You seem to have suricata-1.0.1-* indices for every day going back to 2019-07-23, so it looks like your script to delete them after two months has not been working. Has it been recording any errors and, if so, can you share them?

Do you have the Elasticsearch logs from before the restart? Specifically, I am looking for a single message of the following form emitted shortly after the node starts up when it was running version 6.8.3:

[2019-11-09T15:57:39,027][WARN ][o.e.c.s.ClusterApplierService] [node-0] failed to notify ClusterStateApplier
java.lang.IllegalStateException: Cannot delete index [[i/2Kcjw8u7Taeq413koifgTg]], it is still part of the cluster state.
        at org.elasticsearch.indices.IndicesService.verifyIndexIsDeleted(IndicesService.java:879) ~[elasticsearch-6.8.3.jar:6.8.3]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.deleteIndices(IndicesClusterStateService.java:338) ~[elasticsearch-6.8.3.jar:6.8.3]
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyClusterState(IndicesClusterStateService.java:258) ~[elasticsearch-6.8.3.jar:6.8.3]
        at org.elasticsearch.cluster.service.ClusterApplierService.lambda$callClusterStateAppliers$6(ClusterApplierService.java:484) [elasticsearch-6.8.3.jar:6.8.3]
        at java.lang.Iterable.forEach(Iterable.java:75) [?:?]
        at org.elasticsearch.cluster.service.ClusterApplierService.callClusterStateAppliers(ClusterApplierService.java:481) [elasticsearch-6.8.3.jar:6.8.3]
        at org.elasticsearch.cluster.service.ClusterApplierService.applyChanges(ClusterApplierService.java:468) [elasticsearch-6.8.3.jar:6.8.3]
        at org.elasticsearch.cluster.service.ClusterApplierService.runTask(ClusterApplierService.java:419) [elasticsearch-6.8.3.jar:6.8.3]
        at org.elasticsearch.cluster.service.ClusterApplierService$UpdateTask.run(ClusterApplierService.java:163) [elasticsearch-6.8.3.jar:6.8.3]
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-6.8.3.jar:6.8.3]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-6.8.3.jar:6.8.3]
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-6.8.3.jar:6.8.3]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:835) [?:?]

Hello.

Unfortunately I no longer have logs from before the restart; the error messages Elasticsearch emits are so verbose that the older logs have been rotated away.

I can't tell whether the script successfully deleted the indices or not. I only have logs of it running on November 4, because the earlier logs have also been rotated.

Ok, that's unfortunate, as it means we can't be certain what the fundamental problem is, and therefore how we might be able to recover, but here's my guess. It looks like in 6.x there is a race between completely deleting an index and shutting down the node, which can leave the index in a strange, half-deleted state. The race was fixed in 7.0, and 7.x assumes it can never be in this state any more, but unfortunately that assumption does not hold if you upgrade while already in it.

I opened #48918 to handle the bad state better during the upgrade, which will allow you to recover the data from this cluster. While you're waiting for that to be merged and released you might like to restore the data from your most recent backup into a new cluster.

For future reference I really would not recommend trying elasticsearch-node unsafe-bootstrap unless you are certain it will help. It really can lead to data loss, and you weren't in the kind of situation it's supposed to be used in. I hope it hasn't done too much damage.

I would also recommend adjusting your snapshot strategy to better match your RPO. Elasticsearch snapshots are incremental so do not normally need very much extra space, and are designed to keep data offsite (e.g. on S3 or on a network filesystem) for when local space is limited.
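For example, with the repository-s3 plugin installed and credentials in the keystore, registering an offsite repository and taking a snapshot looks roughly like this (the bucket and names are placeholders):

  # register an S3 snapshot repository
  curl -X PUT "localhost:9200/_snapshot/offsite_backup" -H 'Content-Type: application/json' -d '
  { "type": "s3", "settings": { "bucket": "my-es-snapshots" } }'

  # take a snapshot; later snapshots only upload segments that have changed
  curl -X PUT "localhost:9200/_snapshot/offsite_backup/snap-2019-11-09?wait_for_completion=true"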

Hello.

Ok, I will start the node from the backup and keep the old data stored.
When there's a new release with that code included, I will attempt to start a side cluster with the corrupted data and, if it comes up, transfer the data from one cluster to the other.
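For the transfer I am planning to use reindex-from-remote from the new cluster, something like the following (hosts and index names are placeholders, and the side cluster's host has to be added to reindex.remote.whitelist on the new cluster):

  # pull one index at a time from the side cluster into the new cluster
  curl -X POST "localhost:9200/_reindex" -H 'Content-Type: application/json' -d '
  {
    "source": {
      "remote": { "host": "http://sidecluster:9200" },
      "index": "suricata-1.0.1-2019-10-01"
    },
    "dest": { "index": "suricata-1.0.1-2019-10-01" }
  }'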

Thank you for the help.


Hello again.

I noticed that the patch was in the 7.5 branch, so I cloned the Elasticsearch repo, compiled it from source, and tried to start the node.
Unfortunately it still prints exactly the same log messages (as far as I can tell).
The log messages are available in the following link:

I checked the patch, and as I understand it, if it were taking effect it should be printing "[{}] found metadata for deleted index [{}]" messages to the log, but it isn't.

Regards,
Ricardo Ferreira

I do not recommend using a -SNAPSHOT build on your production data. It might leave your data in a worse state, or it might be impossible to subsequently upgrade to a proper release build (or even another -SNAPSHOT build).

The changes in #48918 only affect upgrades from 6.x so will have no effect on your cluster since it looks like it has already been upgraded. When 7.5.0 is released I'll be in touch to help you fool Elasticsearch into thinking it's doing an upgrade again.

Hello.

Ok, I was under the impression it would work in my situation.

I'll wait for 7.5 and your reply then.

Thank you.
