Repeating Error -- No commit point data is available in gateway


(Kenneth Loafman) #1

Hi,

I'm getting the following block repeating... http://gist.github.com/568753

My guess is that no recovery is possible at this point. Is that
correct, or is there some magic available?

...Thanks,
...ken


(Shay Banon) #2

Yeah, it seems like files got deleted in the gateway storage. Which gateway type are you using? In any case, the mentioned index will not be able to recover.



(Kenneth Loafman) #3

S3 gateway.

This is the second time this has happened. My guess is more along the
lines that the file never made it to S3, rather than being deleted
accidentally.

Anyone have any experience with Rackspace cloud service? Reliable?


(Shay Banon) #4

The problem might be in several different places. The first is
elasticsearch: it might have either not written the file, or deleted it by
mistake. Commit points are only written after all the files within them are
written, and files are deleted only once no commit point is left pointing to
them. I can't recreate what you are getting, so it's hard for me to track
down a possible problem in elasticsearch.
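The ordering invariant described above can be sketched in a few lines. This is a hypothetical illustration of the rule (commit point written last, files deleted only when unreferenced), not elasticsearch's actual gateway code; the class and method names are made up for the example.

```python
# Sketch of the commit-point invariant: a commit point is written only
# after every file it references exists, and a file may be deleted only
# when no commit point references it any more.

class GatewayStore:
    def __init__(self):
        self.blobs = {}          # blob name -> bytes
        self.commit_points = {}  # commit id -> set of referenced blob names

    def write_commit(self, commit_id, files):
        # 1. Upload all data files first.
        for name, data in files.items():
            self.blobs[name] = data
        # 2. Only then write the commit point that references them, so a
        #    visible commit point always has all of its files present.
        self.commit_points[commit_id] = set(files)

    def delete_blob(self, name):
        # A blob may only be deleted once nothing points to it.
        if any(name in refs for refs in self.commit_points.values()):
            raise ValueError(f"{name} is still referenced by a commit point")
        del self.blobs[name]

    def drop_commit(self, commit_id):
        del self.commit_points[commit_id]
```

If this ordering is ever violated, or an upload to the gateway silently fails partway, the store can end up in exactly the kind of unrecoverable state the thread describes: commit data that is missing or that references files which are not there.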

The other place the problem might be is the Amazon SDK, which for some
reason might not upload the files correctly, or might not list the
resulting files correctly. Not sure.

Last, the objects might have been lost on S3 itself. I would say (hope) that
is the most unlikely case.

I will continue to try and track down this problem and report back.

Regarding Rackspace and Cloud Files, I am not sure if it's better or not. In
any case, I need to write support for Cloud Files, and auto discovery for
Cloud Servers. This is on my roadmap for a cloud-rackspace plugin.

-shay.banon


(Kenneth Loafman) #5

Just a note on S3 file listing... 10k files seems to be the limit for
their API. Is it possible there were that many files to list?

...Ken


(Shay Banon) #6

I don't think there are 10k files, but even if that's the case, when
elasticsearch lists blobs on S3, the listing process is repeated with
proper markers if the result is marked as truncated.
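The marker loop works roughly like this. A minimal sketch, with a stub standing in for the real S3 list call (the stub's names, signature, and page size are invented for the example): each page reports whether it was truncated, and the caller re-issues the request with the marker set to the last key seen, so a per-request limit never caps the total listing.

```python
# Sketch of marker-based pagination over a truncated listing API, in the
# style of S3 object listing. A stub replaces the real S3 client.

def list_page(keys, marker=None, page_size=3):
    """Stub for one listing call: returns (page, is_truncated)."""
    start = keys.index(marker) + 1 if marker else 0
    page = keys[start:start + page_size]
    return page, (start + page_size) < len(keys)

def list_all(keys):
    """Repeat the listing with proper markers until no longer truncated."""
    results, marker, truncated = [], None, True
    while truncated:
        page, truncated = list_page(keys, marker)
        results.extend(page)
        if page:
            marker = page[-1]  # next request resumes after the last key
    return results
```

With this loop, a bucket holding far more objects than one response can carry is listed completely; the listing would only come up short if the truncation flag or the marker handling were mishandled, which is one of the failure modes being ruled out above.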

