Missing closed indexes after adding node

Hey guys,

New to Elasticsearch but already a huge fan. I ran into a strange incident
and was hoping you could provide some insight.

I set up a single Elasticsearch node for a project and collected some data
in it for a few days to make sure everything was working correctly. No
issues. I went through and closed the indexes from my testing, then added
a second node to the cluster. When I did that... POOF... all the closed
indexes disappeared. No big deal to me, but I can see the disk space is
still being used by those indexes. They didn't replicate to the second
node (the two VMs are identical), because the disk usage over there is
much lower. I don't care about the data; is there any way I can either
recover the indexes and properly purge them, or just remove them from disk
by some other method? I'd just like to get the space back.

Thanks in advance for any help!

-Russell

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/89a3ebcd-4dde-4d16-8458-2bb372b2870f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

The data is there, it's just closed, and no actions are taken on closed
indexes. You need to reopen them first.
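If you just want the disk space back, a minimal sketch of that workflow: reopen the index, then delete it. This assumes the node is reachable on localhost:9200 and uses a placeholder index name (`my-test-index`); substitute your own. These calls need a running cluster, so treat them as an illustration rather than copy-paste:

```shell
# Closed indexes may not show up in _cat/indices on older versions;
# the cluster state metadata lists them regardless:
curl 'http://localhost:9200/_cluster/state/metadata?pretty'

# Reopen the closed index so the cluster will act on it again:
curl -XPOST 'http://localhost:9200/my-test-index/_open'

# Then delete it, which frees the disk space on every node:
curl -XDELETE 'http://localhost:9200/my-test-index'
```

Deleting through the API (rather than removing the data directory by hand) keeps the cluster state consistent across both nodes.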

On 17 January 2015 at 02:35, Russell Butturini tcstool@gmail.com wrote:



These indices don't appear when I list all the indices on the server. I can see other closed indices, but not the ones that disappeared.
