By far the easiest way to delete all data is indeed to delete and recreate the indices. You can use the delete by query API instead, but it deletes individual documents and is much slower.
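For illustration, here is a minimal sketch of both approaches using the official elasticsearch-py client (8.x-style calls); the index name `my-index` and the cluster URL are placeholders, not anything from this thread:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Option 1: fastest -- drop the index entirely and recreate it.
es.indices.delete(index="my-index")
es.indices.create(index="my-index")

# Option 2: keep the index and its settings, delete every document (slower).
es.delete_by_query(index="my-index", query={"match_all": {}})
```

Option 2 is the one to reach for when, as below, you cannot risk recreating indices with the wrong settings or permissions.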
My indices are on a remote server and were created by a specific application with specific permissions, so I don't want to delete them: I want to avoid having an index down somewhere, or newly created ones ending up with the wrong permissions. By the way, I am using Ansible to manage my remote ES nodes.
I am thinking of paginating and looping over the shards, but I have run into some problems so far.
@liban We can get at most 1000 records at a time, and we can go beyond that by using the scroll API. I am doing that for my 2,000,000 records. You can implement the same kind of functionality to delete records by writing a script of your own.
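A minimal sketch of such a script, assuming the elasticsearch-py `scan` and `bulk` helpers (`scan` drives the scroll API under the hood); the index name `my-index`, the cluster URL, and the match-all query are placeholders to adapt:

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan, bulk

es = Elasticsearch("http://localhost:9200")

# Stream the IDs of matching documents via the scroll API;
# _source=False skips fetching document bodies we don't need.
hits = scan(es, index="my-index",
            query={"query": {"match_all": {}}},
            _source=False)

# Turn each hit into a bulk "delete" action; bulk() sends them in batches.
actions = ({"_op_type": "delete", "_index": h["_index"], "_id": h["_id"]}
           for h in hits)
bulk(es, actions)
```

Using a generator for the actions keeps memory flat even over millions of records, since hits are consumed and deleted in a stream rather than collected up front.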