Snapshots start from the beginning after a node reboot


We run Graylog with a 10-node Elasticsearch backend.
ES version: elasticsearch-6.6.0 (same problem with previous versions as well)
Snapshot repo: local "fs" repo, an NFS share mounted to a local folder
OS: CentOS 7
10 Elasticsearch nodes in one cluster, no explicit roles set.

The problem:
We run snapshots every night from a cron job. Before taking the new snapshots, the job deletes the old ones, so disk usage stays constant. We take one snapshot per index.
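For reference, the nightly job is roughly this shape (a minimal sketch: the repo name `nfs_repo`, the index names, and the snapshot naming scheme are placeholders, not our real ones, and the curl calls are echoed as a dry run):

```shell
# Sketch of the nightly cron job (dry run: remove the leading "echo"
# to actually call the cluster). Repo and index names are placeholders.
ES=http://localhost:9200
REPO=nfs_repo
TODAY=$(date +%Y%m%d)
YESTERDAY=$(date -d yesterday +%Y%m%d)

for IDX in graylog_0 graylog_1; do
  # delete yesterday's snapshot of this index...
  echo curl -XDELETE "$ES/_snapshot/$REPO/${IDX}-${YESTERDAY}"
  # ...then take a fresh one
  echo curl -XPUT "$ES/_snapshot/$REPO/${IDX}-${TODAY}?wait_for_completion=true" \
       -H 'Content-Type: application/json' \
       -d "{\"indices\":\"$IDX\"}"
done
```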
But if we update a node's OS (the ES version does not change) and reboot it (e.g. for a kernel update), the nightly snapshot job eats all the disk space: it takes every snapshot from the beginning, writing all the data to the repo again.
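One way we can see this (a sketch; the host, repo, and snapshot names are placeholders, and the URL is only echoed here) is the snapshot status API, which reports how many files and bytes a given snapshot actually wrote to the repo:

```shell
# Placeholders: adjust host, repo, and snapshot name to the real ones.
ES=http://localhost:9200
REPO=nfs_repo
SNAP=graylog_0-20190301
URL="$ES/_snapshot/$REPO/$SNAP/_status?pretty"
# Run: curl -s "$URL"
# In the JSON response, stats.number_of_files and stats.total_size_in_bytes
# show what this snapshot wrote; after a reboot they jump from a small
# delta to roughly the full index size.
echo "GET $URL"
```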
If we don't update the nodes, it can run for months without any problem.

ES starts automatically at boot, but the NFS share is mounted by hand, so after a restart ES comes up with an empty repo folder. We also tried restarting ES after mounting the share.
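In case the start ordering matters: we could force ES to wait for the mount with a systemd drop-in (a sketch, assuming ES runs as `elasticsearch.service` and the repo path `/mnt/es_backup` is in fstab; the path is a placeholder), e.g. in `/etc/systemd/system/elasticsearch.service.d/nfs-mount.conf`:

```ini
# Hypothetical drop-in: do not start Elasticsearch until the NFS repo
# path (placeholder) is mounted.
[Unit]
RequiresMountsFor=/mnt/es_backup
```

followed by `systemctl daemon-reload`. But so far we have only mounted by hand, as described above.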

I tried to check the logs and didn't see any errors, but there are a lot of logs, so I may have missed something.
The restarted nodes' logs are empty at the time the snapshots start.

Do you have any idea where to start debugging? Or have you seen the same error before?

Thanks, Macko