SparkSQL - all indices were deleted when there was not enough space on the ES file system


All my data was deleted when there was not enough space on the ES file system.
Is there any option to set a checkpoint in ES so that the data that has already been loaded is not lost?
The file system is still full, but I can't access my indices and I don't see any document counts in the index list.


I'm surprised this is happening, as ES doesn't have retention management built in. I would expect an out-of-space error rather than automatic deletion. Do you have a cron job, script, or Curator instance managing your ES indices?
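One quick way to check what state the indices are actually in is to query the cluster directly. A minimal diagnostic sketch, assuming the cluster is reachable at `localhost:9200` (adjust host/port and add credentials for your setup):

```shell
# Assumes ES is reachable at localhost:9200; adjust for your cluster.

# List all indices with health, doc counts, and store size.
curl -s 'http://localhost:9200/_cat/indices?v'

# Show disk usage and shard allocation per node.
curl -s 'http://localhost:9200/_cat/allocation?v'

# Show any non-default cluster settings (e.g. custom disk watermarks).
curl -s 'http://localhost:9200/_cluster/settings?pretty'
```

Note that when the flood-stage disk watermark is reached, recent Elasticsearch versions mark indices read-only (`index.blocks.read_only_allow_delete`) rather than deleting any data, so if the indices are truly gone, something external most likely issued the deletes, and the logs should show it.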

I don't have anything managing my indexes.

Please provide the full ES elasticsearch.log.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.