"file being used by another process" error during server backup

Hello.

On one of our Elasticsearch 8.14 instances, we see the following errors while the automatic backup is running:

[WARN ][o.e.i.e.Engine           ] [WIN-MYSERVER] [00000001_ts_202506][0] failed engine [lucene commit failed]
java.nio.file.FileSystemException: C:\data\es7\data\indices\Am7RsRL7QvyX0sNxYLnlmQ\0\index\_f80x.cfs: The process cannot access the file because it is being used by another process
[WARN ][o.e.i.e.Engine           ] [WIN-MYSERVER] [00000001_ts_202506][0] failed engine [lucene commit failed]
java.nio.file.FileSystemException: C:\data\es7\data\indices\Am7RsRL7QvyX0sNxYLnlmQ\0\index\_f80x.cfs: The process cannot access the file because it is being used by another process
[WARN ][o.e.i.c.IndicesClusterStateService] [WIN-MYSERVER] [00000001_ts_202506][0] marking and sending shard failed due to [shard failure, reason [lucene commit failed]]
java.nio.file.FileSystemException: C:\data\es7\data\indices\Am7RsRL7QvyX0sNxYLnlmQ\0\index\_f80x.cfs: The process cannot access the file because it is being used by another process
[WARN ][o.e.c.r.a.AllocationService] [WIN-MYSERVER] failing shard [FailedShard[routingEntry=[00000001_ts_202506][0], node[2vlk50o5QBetwtY0_eUGtw], [P], s[STARTED], a[id=MqSIRpF5QG6nPKTfVGelUQ], failed_attempts[0], message=shard failure, reason [lucene commit failed], failure=java.nio.file.FileSystemException: C:\data\es7\data\indices\Am7RsRL7QvyX0sNxYLnlmQ\0\index\_f80x.cfs: The process cannot access the file because it is being used by another process, markAsStale=true]]
java.nio.file.FileSystemException: C:\data\es7\data\indices\Am7RsRL7QvyX0sNxYLnlmQ\0\index\_f80x.cfs: The process cannot access the file because it is being used by another process
[WARN ][o.e.i.e.Engine           ] [WIN-MYSERVER] [00000001_ts_202506][0] failed engine [lucene commit failed]
java.nio.file.FileSystemException: C:\data\es7\data\indices\Am7RsRL7QvyX0sNxYLnlmQ\0\index\_f80x.cfs: The process cannot access the file because it is being used by another process
[WARN ][o.e.i.c.IndicesClusterStateService] [WIN-MYSERVER] [00000001_ts_202506][0] marking and sending shard failed due to [shard failure, reason [lucene commit failed]]
java.nio.file.FileSystemException: C:\data\es7\data\indices\Am7RsRL7QvyX0sNxYLnlmQ\0\index\_f80x.cfs: The process cannot access the file because it is being used by another process
[WARN ][o.e.i.e.Engine           ] [WIN-MYSERVER] [00000001_ts_202506][0] tried to fail engine but engine is already failed. ignoring. [failed to recover from translog]
org.elasticsearch.index.engine.FlushFailedEngineException: Flush failed


This error occurs when our application tries to perform a bulk insert into the database; when it happens, the bulk insert fails.

So my questions are:

  • Is this error related to the backup process?
  • Is there a correct way to back up the server while Elasticsearch is running, without locking any files?
  • All the other queries (get documents and update documents) succeed. Is there a correct way to bulk insert that avoids this issue?

Thanks in advance.
Stephen

The only supported backup method for Elasticsearch is the snapshot API. Filesystem-level backups of any kind are not supported and may not even be restorable. They can also cause exactly the kind of file-locking errors you are seeing, since the backup tool holds open files that Lucene needs to write.

Snapshots can be taken while the cluster is operating normally and do not suffer from the issues you described.
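As a minimal sketch (the repository name `my_backup`, the snapshot name `snapshot_1`, and the location path are placeholders you would adapt): first register a shared-filesystem snapshot repository, then take a snapshot. Note that the repository location must be listed under `path.repo` in `elasticsearch.yml` before the repository can be registered.

```
PUT _snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "C:\\backups\\es_snapshots"
  }
}

PUT _snapshot/my_backup/snapshot_1?wait_for_completion=true
```

To replace your current scheduled backup, you could then either call the snapshot API from your existing backup job, or use snapshot lifecycle management (SLM) to have Elasticsearch take snapshots on a schedule itself. Make sure your filesystem backup tool excludes the Elasticsearch data directory so it no longer locks index files.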
