Elasticsearch put all indices to read-only mode before upgrade

Hello,

Is it possible to put all indices, including all system indices, into read-only mode?
I was looking into this but could only find how to set read-only mode on a single, dedicated index.

I'm doing a test upgrade, and every time I try to set indices to read-only mode I run into many hidden ones,
for example .task (probably from Kibana) and .ds

I need a setting in Elasticsearch, without anything else, that puts all indices, including .*, into read-only mode so I can perform the migration.
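The only thing I found so far is the per-index form of the setting, something like this (local test cluster assumed, security disabled, index name is just an example):

```shell
# Set the write block on one specific index via the update settings API
curl -X PUT "http://localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.write": true}'
```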

I don't think this is possible; those system indices need to be writable for them to be upgradable.

Why do you want to mark them as read-only? There is no need to mark any index as read-only during upgrades.

During the upgrade I received errors like:
```

{"@timestamp":"2025-05-27T16:53:49.722Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"elastic-tst-0","elasticsearch.cluster.name":"tst-cluster","error.type":"java.lang.IllegalStateException","error.message":"The index [.reporting-2021.10.03/7qFTO5dhTDm4fdsinJ1kQw] created in version [7.8.0] with current compatibility version [7.8.0] must be marked as read-only using the setting [index.blocks.write] set to [true] before upgrading to 9.0.1.","error.stack_trace":"java.lang.IllegalStateException: The index [.reporting-2021.10.03/7qFTO5dhTDm4fdsinJ1kQw] created in version [7.8.0] with current compatibility version [7.8.0] must be marked as read-only using the setting [index.blocks.write] set to [true] before upgrading to 9.0.1.\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.cluster.metadata.IndexMetadataVerifier.isReadOnlySupportedVersion(IndexMetadataVerifier.java:180)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.cluster.metadata.IndexMetadataVerifier.checkSupportedVersion(IndexMetadataVerifier.java:126)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.cluster.metadata.IndexMetadataVerifier.verifyIndexMetadata(IndexMetadataVerifier.java:98)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.upgradeMetadata(GatewayMetaState.java:298)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.upgradeMetadataForNode(GatewayMetaState.java:285)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.createOnDiskPersistedState(GatewayMetaState.java:193)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.createPersistedState(GatewayMetaState.java:147)\n\tat 
org.elasticsearch.server@9.0.1/org.elasticsearch.gateway.GatewayMetaState.start(GatewayMetaState.java:105)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.node.Node.start(Node.java:315)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.bootstrap.Elasticsearch.start(Elasticsearch.java:648)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:445)\n\tat org.elasticsearch.server@9.0.1/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:102)\n"}

```
which means that the index has compatibility issues.
I have many of them; they can be deleted, so I thought I could use some setting in Elasticsearch to mark them all as read-only.
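What I had in mind was some kind of wildcard update, something like the sketch below (untested, local cluster assumed; `expand_wildcards=all` is my attempt to also catch hidden indices, and I don't know whether system indices would accept it):

```shell
# Attempt to set the write block on every index, hidden ones included
curl -X PUT "http://localhost:9200/_all/_settings?expand_wildcards=all" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.write": true}'
```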

I learned from the Elasticsearch documentation that Kibana has an Upgrade Assistant which will show me the indices with compatibility issues.

By the way, I have a question: if I mark these indices as read-only, can I mark them writable again after the upgrade? Or should I reindex them?
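To be concrete, what I am hoping is that after the upgrade I could simply remove the block again, something like this (hypothetical index name, local cluster assumed; whether this is allowed for 7.x-created indices is exactly my question):

```shell
# Remove the write block by resetting the setting to its default
curl -X PUT "http://localhost:9200/my-old-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.write": null}'
```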

I am also facing the same issue, even though I set the index to read-only as shown below:

```
{
  "elasticindex-res570-1" : {
    "settings" : {
      "index" : {
        "mapping" : {
          "total_fields" : {
            "limit" : "2000"
          }
        },
        "number_of_shards" : "1",
        "blocks" : {
          "write" : "true"
        },
        "provided_name" : "elasticindex-res570-1",
        "creation_date" : "1771342542010",
        "analysis" : {
          "analyzer" : {
            "fscrawler_path" : {
              "tokenizer" : "fscrawler_path"
            }
          },
          "tokenizer" : {
            "fscrawler_path" : {
              "type" : "path_hierarchy"
            }
          }
        },
        "number_of_replicas" : "1",
        "uuid" : "i1UFCb1yQziQmTeH7UBrCQ",
        "version" : {
          "created" : "7080199"
        }
      }
    }
  }
}
```

But this fails when I upgrade from 7.8.1 to 9.3.0. I am not migrating directly; I am following the path 7.8.1 → 7.17.0 → 8.19.0 → 9.3.0. It works up to 8.19.0, but after the upgrade to 9.3.0 the service starts giving the error `java.lang.IllegalStateException: The index [elasticindex-res570-1/i1UFCb1yQziQmTeH7UBrCQ] created in version [7.8.1] with current compatibility version [7.8.1] must be marked as read-only using the setting [index.blocks.write] set to [true] before upgrading to 9.3.0.` I am not sure how else I can set it to read-only and solve the issue.
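In case it matters, the only other way I know to set a write block is the add index block API; I have not confirmed whether it makes any difference for the 9.x startup check, but it would look like this (local cluster assumed):

```shell
# Add a write block via the add index block API while still on 8.19.x
curl -X PUT "http://localhost:9200/elasticindex-res570-1/_block/write"
```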

I suggest that you open a new topic, as the original topic is pretty old.

Also, since you are changing major versions, are you using the Upgrade Assistant?

You also need to be on the latest patch version for 8.19, so you need to be on 8.19.11 and run the Upgrade Assistant before upgrading.

Did you follow those steps?


I am not using the Upgrade Assistant, and I am following the path 7.17.0 → 8.19.0 → 9.3.0. I am using a PowerShell script which pauses for 300 s before each Elasticsearch upgrade to let the nodes stabilize. So are you suggesting I should move to 8.19.11 rather than 8.19.0, so the path will be 7.17 → 8.19.11 → 9.3.0?

The recommendation is to always be on the latest patch of the previous release before upgrading to a new major, and also to check the Upgrade Assistant in Kibana to see if there is anything that could break during the upgrade and leave your cluster with issues.

So the recommendation would be to upgrade your cluster to 7.17.29, which is the latest patch version for the 7.17 branch, check the Upgrade Assistant in Kibana, and if everything is all right, upgrade the cluster to 8.19.11, check the Upgrade Assistant again, and then upgrade to 9.3.0.

From what you shared it is not clear what you have or have not upgraded yet. Which version is your cluster on now?


What license (if any) do you have?

On the assumption that you are using the basic (free) license, you have indices created in 7.x with data you want to keep, and your intended final destination is 9.x, this means you will need to reindex the created-in-7.x indices at some point.

It's best to do this while running 8.x, e.g. 8.19.11. That is likely what the Upgrade Assistant would have told you, had you checked it.

Indeed, this is the most important question now.

It would also be useful to see the output of a GET on `_cluster/stats`, specifically the `indices.versions` section; if you can, just pipe the output through `jq -Src '.indices.versions[]'` and paste the result.
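For example, something like this (host and auth are placeholders; adjust for your cluster):

```shell
# Fetch cluster stats and print one compact line per index-version bucket
curl -s "http://localhost:9200/_cluster/stats" | jq -Src '.indices.versions[]'
```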

My (test) 9.x cluster shows

```
{"index_count":31,"primary_shard_count":31,"total_primary_bytes":716694773,"version":"8.11.0-8.11.4"}
{"index_count":30,"primary_shard_count":33,"total_primary_bytes":345643664,"version":"8.17.0-8.17.10"}
{"index_count":6,"primary_shard_count":6,"total_primary_bytes":13972,"version":"8.18.0-8.18.8"}
{"index_count":23,"primary_shard_count":23,"total_primary_bytes":14916487,"version":"9.0.0-9.0.8"}
{"index_count":9,"primary_shard_count":9,"total_primary_bytes":58633,"version":"9.1.0-9.1.8"}
{"index_count":33,"primary_shard_count":33,"total_primary_bytes":980824279,"version":"9.2.0-9.2.1"}
{"index_count":21,"primary_shard_count":21,"total_primary_bytes":2624932131,"version":"9.3.0"}
```

to give you an idea of what to expect. Note the version field is always 8.something or 9.something in my case.

Did you create any pre-upgrade snapshots?