Since we upgraded Elasticsearch/Logstash/Kibana to version 8.5.3 yesterday, we have a lot of failed indices in our snapshots (559 indices are OK, but 168 failed).
Here is the error for each of them: INTERNAL_SERVER_ERROR: UnsupportedOperationException[Old formats can't be used for writing]
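For what it's worth, the per-index failure details can usually be pulled from the snapshot itself; the repository and snapshot names below are placeholders, so substitute your own:

```
# Show the state, the list of indices, and the shard-level failure reasons
# for one snapshot (replace my_repository / my_snapshot with real names).
GET _snapshot/my_repository/my_snapshot?filter_path=snapshots.state,snapshots.indices,snapshots.failures
```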
Please share the full requests you are making as well as the full responses from Elasticsearch. You might also want to check the Elasticsearch logs and post them.
What was the earlier stack version? Maybe reindexing was required for your indices and hasn't been performed.
Can you capture your ES logs from when you start the snapshot and share them here?
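If nothing relevant shows up at the default log level, one option (just a sketch, not something mentioned in this thread) is to temporarily raise the snapshot-related loggers via the cluster settings API, retry the snapshot, and then reset them:

```
# Raise snapshot/repository logging to DEBUG while reproducing the failure.
PUT _cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.snapshots": "DEBUG",
    "logger.org.elasticsearch.repositories": "DEBUG"
  }
}

# Reset to the defaults once the logs have been collected.
PUT _cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.snapshots": null,
    "logger.org.elasticsearch.repositories": null
  }
}
```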
I think there are no entries in the elasticsearch.log file because we have 3 servers in the cluster, so the logging goes to the elk-cluster.log files instead.
If I check the elk-cluster.log file on each node, I don't see any errors regarding the snapshot.
I can only see these messages:
[2023-01-11T15:16:56,505][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [es-node-1] Authentication using apikey failed - api key [72XfDYMBMecjRi8RMj1Z] has been invalidated
[2023-01-11T15:17:23,557][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [es-node-1] Authentication using apikey failed - api key [72XfDYMBMecjRi8RMj1Z] has been invalidated
[2023-01-11T15:17:32,210][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [es-node-1] Authentication using apikey failed - api key [72XfDYMBMecjRi8RMj1Z] has been invalidated
[2023-01-11T15:17:48,754][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [es-node-1] Authentication using apikey failed - api key [72XfDYMBMecjRi8RMj1Z] has been invalidated
[2023-01-11T15:04:25,229][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-2] [gc][85957] overhead, spent [267ms] collecting in the last [1s]
[2023-01-11T15:10:27,665][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-2] [gc][86319] overhead, spent [287ms] collecting in the last [1s]
[2023-01-11T15:17:19,028][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-2] [gc][86730] overhead, spent [256ms] collecting in the last [1s]
[2023-01-11T14:21:33,421][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-3] [gc][191528] overhead, spent [267ms] collecting in the last [1s]
[2023-01-11T15:00:29,311][INFO ][o.e.c.m.MetadataMappingService] [es-node-3] [logstash-adds-000230/j-cl7qdCSm-0mvowafkOqg] update_mapping [_doc]
[2023-01-11T15:05:20,286][INFO ][o.e.m.j.JvmGcMonitorService] [es-node-3] [gc][194150] overhead, spent [260ms] collecting in the last [1s]
I saw this thread, but it seems there is no solution...
I don't understand why, for example, logstash-adds-000009 failed while logstash-adds-000008 did not, even though both indices were created a long time ago, a day apart, with the same index version.
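One way to double-check that (the index names come from the post above; the request itself is only a suggestion) is to compare the version each index was created with:

```
# Compare the creation version recorded in the settings of the two indices.
GET logstash-adds-000008,logstash-adds-000009/_settings?filter_path=*.settings.index.version.created
```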