All my data was deleted when there was not enough space left on the ES file system.
Is there any option to set a checkpoint in ES so that the data already loaded is not lost?
The filesystem is still full, but I can't access my index and I don't see a document count in the index list.
I'm surprised this is happening, as ES doesn't have retention management built in. I would expect an out-of-space error rather than automatic deletion. Do you have a cron job, script, or Curator running that manages the ES indices?
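To rule out an automated retention job, one way is to inspect the cluster for anything that could be deleting indices. A rough sketch, assuming ES is reachable at `localhost:9200` (adjust the host/port for your setup; on newer versions with ILM, the policy check applies too):

```shell
# Check persistent/transient cluster settings (e.g. disk watermarks
# that move shards or block writes when the disk fills up)
curl -s 'localhost:9200/_cluster/settings?pretty'

# List all indices with their document counts and sizes,
# to see what actually remains on disk
curl -s 'localhost:9200/_cat/indices?v'

# If the cluster runs Index Lifecycle Management, list any
# policies that might include a delete phase
curl -s 'localhost:9200/_ilm/policy?pretty'
```

If none of these show a delete policy, the next place to look would be the host itself (crontab entries or a Curator installation) and the ES server logs around the time the data disappeared.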