Problem with upgrading to 9.0.0, index compatibility

Hi!

I'm trying to upgrade from 8.18 to 9.0 but I get a lot of these:

[2025-04-29T14:05:28,627][ERROR][o.e.b.Elasticsearch      ] [STHLM-KLARA-03] fatal exception while booting Elasticsearch
java.lang.IllegalStateException: The index [.kibana_task_manager_7.16.2_001/hz6nUaOdQLSrCSyNV8BdJQ] created in version [7.16.2] with current compatibility version [7.16.2] must be marked as read-only using the setting [index.blocks.write] set to [true] before upgrading to 9.0.0.
	at org.elasticsearch.cluster.metadata.IndexMetadataVerifier.isReadOnlySupportedVersion(IndexMetadataVerifier.java:180) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.cluster.metadata.IndexMetadataVerifier.checkSupportedVersion(IndexMetadataVerifier.java:126) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.cluster.metadata.IndexMetadataVerifier.verifyIndexMetadata(IndexMetadataVerifier.java:98) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.gateway.GatewayMetaState.upgradeMetadata(GatewayMetaState.java:298) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.gateway.GatewayMetaState.upgradeMetadataForNode(GatewayMetaState.java:285) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.gateway.GatewayMetaState.createOnDiskPersistedState(GatewayMetaState.java:193) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.gateway.GatewayMetaState.createPersistedState(GatewayMetaState.java:147) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:105) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.node.Node.start(Node.java:315) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.start(Elasticsearch.java:647) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:444) ~[elasticsearch-9.0.0.jar:?]
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:101) ~[elasticsearch-9.0.0.jar:?]
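
For reference, the error message itself names the remedy: the old index must be blocked for writes before the upgrade. On a node still running 8.18, that block can be set through the index settings API (index name taken from the error above; a sketch of the documented setting, not something verified against this particular cluster):

```
PUT /.kibana_task_manager_7.16.2_001/_settings
{
  "index.blocks.write": true
}
```

The Upgrade Assistant's "mark as read-only" option applies an equivalent write block.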

Running this:

GET _cat/indices/.kibana_task_manager_7.16.2_001
DELETE /.kibana_task_manager_7.16.2_001

Gets me:

"type": "index_not_found_exception",
"reason": "no such index [.kibana_task_manager_7.16.2_001]",

So it seems that the index does not exist, but the service still won't start.
How can I get Elasticsearch to just skip these?
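
One diagnostic worth trying: hidden indices (like the `.kibana_*` system indices) are excluded from wildcard patterns by default, so when hunting for leftovers it can help to list everything explicitly. This is a general check, not a confirmed explanation of the error:

```
GET _cat/indices/.kibana*?v&expand_wildcards=all
```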

These are old indices that can be ignored.

Thanks!

/Kristoffer

Did you run the upgrade assistant before upgrading?

The Upgrade Assistant also helps resolve issues with older indices created before version 8.0.0, providing options to reindex older indices or mark them as read-only.
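
If the Kibana UI seems to miss something, the same information can be pulled from the deprecation info API that the assistant is built on (shown here as a generic check, not tailored to this cluster):

```
GET /_migration/deprecations
```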


Hi!

I looked at it but only found some issues saying [xpack.monitoring.collection.enabled] will be removed. Nothing about incompatible indices. Should that be shown there?

If I create a snapshot of an old index, can that be restored after upgrade?

/Kristoffer

I guess it should, according to the documentation :wink:

> If I create a snapshot of an old index, can that be restored after upgrade?

I think you can run an update by query in place, which will rewrite all documents using a recent version of Lucene.
Then you can back up your index and restore it into your new cluster.
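
A sketch of that sequence (index and repository names are placeholders, and the snapshot repository `my_repo` is assumed to already be registered; `conflicts=proceed` just keeps the rewrite going past version conflicts):

```
POST /my-old-index/_update_by_query?conflicts=proceed

PUT /_snapshot/my_repo/pre_upgrade_snapshot?wait_for_completion=true
{
  "indices": "my-old-index"
}
```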

A reindex from remote would work as well if you have both clusters running at the same time.
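
A reindex-from-remote sketch (host and index names are placeholders; the old cluster's address must also be whitelisted in the new cluster's `reindex.remote.whitelist` setting):

```
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://old-cluster:9200"
    },
    "index": "my-old-index"
  },
  "dest": {
    "index": "my-old-index"
  }
}
```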

The Upgrade Assistant is not showing this. I have 8.16 and it does not show anything, even though I have an old .kibana index in the cluster.

You need to upgrade to 8.18 first and run the upgrade assistant from there. Upgrading from 8.16 directly to 9.0 is not supported.


Hi,

I have upgraded to 8.18.1 and Assistant does not show any problems.

But the upgrade to 9.0.0 fails because two legacy indices were created with older versions.
Once one node has been shut down and upgraded, setting those indices to read-only has no effect, and neither does deleting them. The upgraded node refuses to start, giving the error above (even though the indices have been deleted).

Any ideas how to proceed?

OK, I think I have found the solution.

If you shut down a node (for upgrade), then update the cluster (delete an index or set an index to read-only), and finally upgrade the shut-down node to 9.0, that node does not fetch fresh state from the master on restart but tries to bootstrap from whatever state it had locally. This probably causes the errors to remain even though the indices have been deleted from the cluster.

I fixed this by downgrading the node back to 8.18.1 and restarting it. Once recovery is complete, you can start the rolling upgrade again without any problem.