Keep losing documents

I was a little worried about the "magic" that ElasticSearch provides.
Here is an example why ...

After several days of indexing, I ran into an Exception reporting too
many open files. I fixed the issue on the system and restarted ONE of
my THREE nodes. Before the restart, I had 12.7M documents. After the
restart, I have 5.1M documents. I also have an S3 Gateway. It also
turns out (likely related to the file issue) that S3 snapshots were
failing, though only since this morning, and I hadn't indexed anything
new for about 14-20 hours.
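
(For what it's worth, the system-side fix was raising the per-process
open file limit for the user running the nodes; roughly something like
this, with "esuser" as a placeholder and the exact limits.conf syntax
depending on the distro:)

    # check the current limit in the shell that starts elasticsearch
    ulimit -n

    # raise it for this session before starting the node, e.g. to 32k
    ulimit -n 32000

    # or persist it in /etc/security/limits.conf:
    #   esuser  soft  nofile  32000
    #   esuser  hard  nofile  32000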

Losing indexed documents like this is worrisome, and this isn't the
first time this has happened. When I first started playing with
ElasticSearch, I did a test where I loaded documents onto two servers
and dropped one of the servers. I lost half my documents.

I'm curious, how is ElasticSearch being tested at scale?

On Thu, Jul 29, 2010 at 3:15 PM, David Jensen djensen47@gmail.com wrote:

I guess the upside is that now that I've lost all 12M of my documents
I can upgrade to 0.9.0.

How do I restore the index from the gateway?

On Jul 29, 12:16 pm, Berkay Mollamustafaoglu mber...@gmail.com wrote:

You can't restore from the gateway directly. Basically you'll need to read
all docs from the old index and write them to the new one.
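
Something roughly like this, as a minimal sketch: page through the old
index with from/size and re-post each hit ("oldindex", "newindex" and
"doctype" are placeholders):

    # fetch a page of documents from the old index
    curl -XGET 'http://localhost:9200/oldindex/_search?from=0&size=100'

    # for each hit in the response, re-index its _source under its _id
    curl -XPUT 'http://localhost:9200/newindex/doctype/SOME_ID' -d 'SOME_SOURCE_JSON'

    # repeat with from=100, from=200, ... until no hits come back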

Regards,
Berkay Mollamustafaoglu
mberkay on yahoo, google and skype

On Thu, Jul 29, 2010 at 10:52 PM, David Jensen djensen47@gmail.com wrote:

Well, I actually had a complete cluster meltdown. I'm not sure when
the magic happened, but all of my documents were restored from the
Gateway, and it was fairly snappy too, though my index is only 21GB.

The magic restore either happened automatically or it happened when I
invoked http://localhost:9200/indexname/_refresh. My guess would be
the former.
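
(For the record, that refresh call was just a plain request against the
refresh endpoint, something like:)

    curl -XPOST 'http://localhost:9200/indexname/_refresh'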

On Jul 29, 2:41 pm, Shay Banon shay.ba...@elasticsearch.com wrote:

First, if you upgrade to 0.9, you will need to reindex your data. And
you should upgrade to 0.9.

Second, let's get all this so-called "magic" attitude out of the way. Are
you suggesting that this information is being intentionally hidden? If not,
then please stop. If yes, then it takes about 5 minutes to do a search on
the mailing list / docs / talks and find the answers you are looking for.

In any case, let's write it again. When you create an index in
elasticsearch, the index is broken down into shards. A shard can have 0 or
more replicas. Shards and their replicas are allocated to different nodes,
and elasticsearch makes sure not to allocate a shard and its replica to
the same node.
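
For example, to create an index with 5 shards and 1 replica each (a sketch
against the REST API; the index name and the numbers are just examples):

    curl -XPUT 'http://localhost:9200/myindex/' -d '{
        "index" : {
            "number_of_shards" : 5,
            "number_of_replicas" : 1
        }
    }'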

Within a single shard and its replicas, a primary is chosen. One of the
primary's main purposes is to perform the scheduled snapshot operations
from the shard index to the gateway.

When a primary shard is first allocated to a node, it performs a recovery
from the gateway. When a replica of the same shard is allocated, it recovers
its state from the primary. If another node is started, and the primary
needs to relocate to a different node to keep the number of shards balanced
across nodes, then it will do a hot relocation, not another recovery from
the gateway.

If you want to know where things are allocated, there is a simple API, the
cluster state API, that gives you information about all the different
indices, shards, replicas, where they are allocated, what their state is,
and so on.
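
For example (assuming the default HTTP port):

    curl -XGET 'http://localhost:9200/_cluster/state'

The routing table in the response shows, per index and shard, which node
each primary and replica is allocated to.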

Back to the problems you described. First, the s3 gateway. In 0.8, jclouds
was used. If you follow the mailing list, you will see that I warned several
times that it seemed to be misbehaving and that I was going to replace it
in 0.9. In 0.9, I went with the official Amazon SDK, so I am hoping for
better things now.

Second, there were several bugs in 0.8 that were fixed in 0.9. One of the
major ones was that when the node where a primary shard was allocated was
shut down, a replica would not become primary (as it should); instead, a new
primary would be allocated, and a full recovery from the gateway would
happen. This is bad for two reasons. First, it means that allocation is much
slower, and second, if the gateway misbehaves, as in the jclouds case, then
the recovery might not work.

Last, regarding being tested at scale. What you are testing is not scale (3
nodes). There are bugs, and they are being fixed, but, for example, the
problems you were having are covered by simple 3-node automatic integration
tests. As for scale, I do some testing on ec2, and other kind elasticsearch
users who run the system at scale provide valuable information (and not
magic) back to me and help fix any problems.

-shay.banon

On Fri, Jul 30, 2010 at 2:34 AM, David Jensen djensen47@gmail.com wrote:

First, thank you. This response is great; can you (or I) post it
someplace else? I don't think the blog post has the same amount of
detail as this.

I know I'm not testing at scale; it was a question.

The following is my opinion; you may not share it, and that is fine. I
think that forums are great for discussion, but they suck as an
information repository. They can sometimes help with a specific error
case. Discussion boards and mailing lists are not a replacement for
documentation but a great way to augment it. I understand the information
isn't being intentionally hidden, but it's very hard to find. I'm sure
if YOU, the committer, search the mailing list, you'll find the
answers right away. But you have all of the knowledge of the system
and know exactly what to search for. It might take you 5 minutes, but
it takes me (and maybe others) much longer. Furthermore, if I see a
topic like "keep losing documents" in the results, I might erroneously
skip it. As usual, the problem isn't the technology, it's the
communication. If the "documentation" is buried in discussions, I'll
just live with it; there's no other choice.

Finally, if you would like, I can spend a little time writing the kind
of docs that I would like to be able to read on the subject. The
caveat is that I'll be asking a high volume of questions.

On Fri, Jul 30, 2010 at 4:52 AM, Franz Allan Valencia See <franz.see@gmail.com> wrote:

@David:
Maybe, although your elasticsearch was already up, it was still trying to
repopulate your gateway's indices? I experienced that before, but with
hdfs, and it took a while before it repopulated my gateway's indices.

@Shay
Pardon me for my stupid question, but what are the indices in
$ES_HOME/work/elasticsearch/nodes (and in 0.8
$ES_HOME/work/elasticsearch/indices) and the indices in the gateway? Which
of these are the primary shard indices and which are the replicas?
..Or are these something else?

My assumption before was that all indices are stored in the gateway. But
when I clear out my gateway (i.e. in my hdfs, when I delete the directory
I assigned to ES and reformat the node), it gets repopulated (my guess is
from $ES_HOME/work/elasticsearch/nodes, because I have to delete those as
well to clear out my indices entirely).

Thanks,

Franz Allan Valencia See | Java Software Engineer
franz.see@gmail.com
LinkedIn: http://www.linkedin.com/in/franzsee
Twitter: http://www.twitter.com/franz_see

Hi David,

I completely agree, the best place for this information is in the docs. I
have talked before about wanting to create a section in the docs called
"architecture" or "concepts", and explain there all the things discussed on
the mailing list. Being the pedantic creature that I am, I want it to be
well written, with diagrams, and possibly video segments. I would appreciate
any type of help (the web site is hosted on github as well) and am more
than willing to review whatever is written.

I can tell you that right at the top of my priority list is explaining how
sharding works, and how the gateway works, in the mentioned section. I want
to get initial geo support "out of the way" and then focus on that.

Thanks for offering to help out!

cheers,
-shay.banon

On Jul 30, 7:32 am, Shay Banon shay.ba...@elasticsearch.com wrote:

Index data for a specific shard is always stored local to the node, and used
from there. The gateway is used for long term persistency, and basically,
the gateway service snapshots the local index information to the gateway for
each primary shard. If you want to know where primary shards are allocated,
you can use the cluster state API.

If you completely delete the data on the gateway, then elasticsearch won't
have anything to recover from (even though there is "local" information per
node) and should start out empty.

The reason the local index information is kept after node shutdown in 0.9
is to speed up the recovery from the gateway, by smartly not recovering
files that already exist on the local node.
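
(If you want to poke at the local copy, the work directory Franz mentioned
is the place to look; roughly, assuming the default layout and the first
local node; the exact subdirectories may vary:)

    ls $ES_HOME/work/elasticsearch/nodes/0/indices/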

-shay.banon

On Fri, Jul 30, 2010 at 6:09 PM, Otis otis.gospodnetic@gmail.com wrote:

Hello,

Shay, should it be possible to tell ES to go and repopulate the
gateway from local storage (a reverse recovery, so to speak)?
Otherwise, in the above case, if something happens to the data on the
gw, what do you do?

Thanks,
Otis

Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/

On Sat, Jul 31, 2010 at 12:01 AM, Shay Banon shay.banon@elasticsearch.com wrote:

No, you can't; the gateway is assumed to hold the master data. This can be
done manually, by uploading what is needed, in the (hopefully) rare event
that data gets lost from the gateway.

-shay.banon

On Sat, Jul 31, 2010 at 6:33 PM, Franz Allan Valencia See <franz.see@gmail.com> wrote:

Pardon, but I'm experiencing a different thing. I am using hdfs, and I
delete the path in hdfs that ES is using. Then when I start up ES, it
tries to repopulate hdfs. It's only when I delete the indices on the ES
nodes that I completely lose all my data.

--
Franz Allan Valencia See | Java Software Engineer
franz.see@gmail.com
LinkedIn: http://www.linkedin.com/in/franzsee
Twitter: http://www.twitter.com/franz_see

Do you delete the data when the whole cluster is down, or do you still
have nodes running? If you still have nodes running, then they will keep
snapshotting the data to the gateway. If you delete the gateway data when
all the nodes are down, then, when you start the cluster back up, it won't
have data to recover from.

-shay.banon

On Sat, Jul 31, 2010 at 6:33 PM, Franz Allan Valencia See <
franz.see@gmail.com> wrote:

Pardon, but I'm experiencing a different thing. I am using hdfs, and I
delete the path in the hdfs that ES is using, Then when I start up ES, ES
tries to repopulate hdfs. It's only when I delete the indices in the ES
nodes that I completely lose all my data.


Ahh.. I now know what I was doing wrong. I was clearing out the
hadoop.tmp.dir and not my actual ES gateway path. After clearing out the ES
gateway path, I can now clear out my indices. Thanks :-)

Another question though: when my ES cluster starts up, it will retrieve
indices from the gateway, right? In my current setup this takes some time to
complete. Given that, how can I keep track of my ES cluster's progress? Is
it at 10%, 50%, 90%? ...or can I at least find out whether it is still
retrieving from the gateway or has finished?

Thanks,

Franz Allan Valencia See | Java Software Engineer
franz.see@gmail.com
LinkedIn: http://www.linkedin.com/in/franzsee
Twitter: http://www.twitter.com/franz_see


You can tell whether it is done based on the cluster state; a specific
progress indication is not provided (yet). Note that with 0.9 it will try to
reuse what is already on the local fs, but you need to make sure you set the
recover_after_nodes setting.

-shay.banon
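
A minimal sketch of the setting Shay mentions, assuming the 0.x
elasticsearch.yml conventions (recover_after_time is a related setting I
believe existed in this era, and the REST path below is assumed from the
era's API layout, so verify both against your version's docs):

    # elasticsearch.yml -- hold off recovery until the cluster has formed,
    # so shards can be reused from the local fs instead of pulled from the gateway
    gateway:
      recover_after_nodes: 3   # e.g. wait for all three nodes of David's cluster
      recover_after_time: 5m   # optional grace period (assumption: supported here)

    # then poll the cluster state to see where shards are allocated / recovering
    curl -s 'http://localhost:9200/_cluster/state'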


The cluster state just indicates the current state, right? My problem with
those states is that I am not sure whether it's still trying to fix its
health or whether it has given up.

Thanks,

Franz Allan Valencia See | Java Software Engineer
franz.see@gmail.com
LinkedIn: http://www.linkedin.com/in/franzsee
Twitter: http://www.twitter.com/franz_see


It should always be aiming at green health. It might decide not to do
anything because, for example, there are no nodes available for allocating
shards, but once a node joins the cluster it will try to allocate shards to
it.
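
A quick way to watch the cluster converge toward green is the cluster health
endpoint; the URL below assumes the /_cluster/health path from this era, so
double-check it for your version:

    # red    -> some primary shards are not allocated yet
    # yellow -> primaries allocated, some replicas still missing
    # green  -> every shard and replica is allocated
    curl -s 'http://localhost:9200/_cluster/health'

If the status stays yellow or red and nothing is relocating or initializing
in the cluster state, that is the "no nodes available" case Shay describes,
and it resolves itself once another node joins.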
