Failed indices in snapshot since upgrade to 8.5.3

Hello,

Since we upgraded Elasticsearch/Logstash/Kibana to version 8.5.3 yesterday, we have a lot of failed indices in our snapshots (559 indices are OK, but 168 failed).

Here is the error for each of them: INTERNAL_SERVER_ERROR: UnsupportedOperationException[Old formats can't be used for writing]

Could you please help me solve this problem?

Thanks!

Welcome to our community! :smiley:

Please share the full requests you are making as well as the full responses from Elasticsearch. You might also want to check the Elasticsearch logs and post them.
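To pull the per-shard failure reasons out of a snapshot, you can query the snapshot API (GET _snapshot/&lt;repository&gt;/&lt;snapshot&gt;) and look at the failures array. A minimal sketch of parsing that response, assuming an already-fetched JSON body (the snapshot name and failure values below are illustrative, not from your cluster):

```python
def snapshot_failures(snapshot_info):
    """Extract (index, shard_id, reason) tuples from a parsed
    GET _snapshot/<repository>/<snapshot> response body."""
    failures = []
    for snap in snapshot_info.get("snapshots", []):
        for f in snap.get("failures", []):
            failures.append((f.get("index"), f.get("shard_id"), f.get("reason")))
    return failures

# Illustrative response fragment shaped like the snapshot API output:
example = {
    "snapshots": [{
        "snapshot": "daily-snap",
        "state": "PARTIAL",
        "failures": [{
            "index": "logstash-adds-000009",
            "shard_id": 0,
            "reason": "UnsupportedOperationException[Old formats can't be used for writing]",
        }],
    }]
}

for index, shard, reason in snapshot_failures(example):
    print(f"{index}[{shard}]: {reason}")
```

Posting that per-shard output here would make it much easier to see which indices are affected.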

Hello,

Thank you!

I just ran a new snapshot from the GUI, and I can see a lot of failed shards:

I also checked /var/log/elasticsearch/elasticsearch.log, but I see nothing relevant...
Maybe I checked the wrong log file?

What was the earlier stack version? Maybe your indices required reindexing and it hasn't been performed.
Can you capture your ES logs when you start a snapshot and share them here?

We upgraded from ELK version 7.17.7.

I think we have no logs in the elasticsearch.log file because we have a 3-server cluster.

If I check the elk-cluster.log files on each node, I don't see any errors regarding snapshots.

I can see these errors:


[2023-01-11T15:16:56,505][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [es-node-1] Authentication using apikey failed - api key [72XfDYMBMecjRi8RMj1Z] has been invalidated
[2023-01-11T15:17:23,557][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [es-node-1] Authentication using apikey failed - api key [72XfDYMBMecjRi8RMj1Z] has been invalidated
[2023-01-11T15:17:32,210][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [es-node-1] Authentication using apikey failed - api key [72XfDYMBMecjRi8RMj1Z] has been invalidated
[2023-01-11T15:17:48,754][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [es-node-1] Authentication using apikey failed - api key [72XfDYMBMecjRi8RMj1Z] has been invalidated

[2023-01-11T15:04:25,229][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-2] [gc][85957] overhead, spent [267ms] collecting in the last [1s]
[2023-01-11T15:10:27,665][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-2] [gc][86319] overhead, spent [287ms] collecting in the last [1s]
[2023-01-11T15:17:19,028][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-2] [gc][86730] overhead, spent [256ms] collecting in the last [1s]

[2023-01-11T14:21:33,421][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-3] [gc][191528] overhead, spent [267ms] collecting in the last [1s]
[2023-01-11T15:00:29,311][INFO ][o.e.c.m.MetadataMappingService] [es-node-3] [logstash-adds-000230/j-cl7qdCSm-0mvowafkOqg] update_mapping [_doc]
[2023-01-11T15:05:20,286][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-3] [gc][194150] overhead, spent [260ms] collecting in the last [1s]

Maybe you are hitting something similar to this thread: UnsupportedOperationException causes snapshots to fail - Elastic Stack / Elasticsearch - Discuss the Elastic Stack

I saw this thread, but it seems there is no solution...

I don't understand why, for example, logstash-adds-000009 failed while logstash-adds-000008 did not, since both indices were created a long time ago, one day apart, with the same index version.
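One way to compare the two indices is to check the index.version.created setting that GET /&lt;index&gt;/_settings returns for each index. A small sketch of extracting it from an already-parsed response (the version ids below are placeholders, not real values from this cluster):

```python
def created_versions(settings_response):
    """Map index name -> index.version.created from a parsed
    GET /<index>/_settings response body."""
    return {
        name: body["settings"]["index"]["version"]["created"]
        for name, body in settings_response.items()
    }

# Illustrative response for two indices (version ids are placeholders):
example = {
    "logstash-adds-000008": {"settings": {"index": {"version": {"created": "7170799"}}}},
    "logstash-adds-000009": {"settings": {"index": {"version": {"created": "7170799"}}}},
}

print(created_versions(example))
```

If the two indices really report the same created version, the difference must come from something else, such as the repository configuration.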

I recreated the repository without "source-only" mode, and it seems to be fine now.
The snapshot is in progress without failed shards.
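For anyone landing here later: a source-only repository is registered with type "source" wrapping a delegate type, while a regular repository uses the delegate type directly. A minimal sketch of the two PUT _snapshot/&lt;name&gt; request bodies, assuming a shared-filesystem repository (the location path is a placeholder):

```python
# Repository registration bodies for PUT _snapshot/<name>.
# The location path below is a placeholder, not the poster's actual path.

source_only_repo = {
    "type": "source",  # source-only wrapper around a delegate repository type
    "settings": {"delegate_type": "fs", "location": "/mnt/backups"},
}

regular_repo = {
    "type": "fs",  # plain shared-filesystem repository
    "settings": {"location": "/mnt/backups"},
}

print(source_only_repo)
print(regular_repo)
```

Switching from the first form to the second matches the fix described above: the failing snapshots were going to a source-only repository, and recreating the repository as a regular one resolved the failed shards.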


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.