Hi, I wanted to set up a simple two-node hot-warm cluster for quick experimentation: one node ("warm") that already holds 100+ GB of sample data, and another ("hot") with no data initially. However, I forgot to set shard allocation filtering for some of the indices, so their shards were also allocated to the hot node. I corrected the settings, and the shards appeared to relocate to the right node. Yet when I check disk usage, the hot node's data directory still occupies the same amount of space (about 1 GB) as before, even though I told those shards to move to the "warm" node. If I delete the data folder and restart the "hot" node, it shrinks to only a few hundred kilobytes, as expected. Does anyone know what the problem is here, or whether there is an API to reclaim the disk space?
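In case it helps, here is roughly what I did (a minimal sketch; the attribute name `box_type` and the index name `myindex` stand in for my actual values):

```
# elasticsearch.yml: tag each node with a custom attribute
#   hot node:  node.box_type: hot
#   warm node: node.box_type: warm

# Pin an index's shards to the warm node via index-level allocation filtering
curl -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index.routing.allocation.require.box_type": "warm"
}'

# Verify where the shards actually live now
curl 'localhost:9200/_cat/shards/myindex?v'

# Per-node disk usage as Elasticsearch reports it
curl 'localhost:9200/_cat/allocation?v'
```

`_cat/shards` confirms the shards are on the warm node, but the hot node's data directory on disk is still the old size.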
I'm using Elasticsearch 2.3.1.
Thanks!