Blob size with S3

I added a new machine to my cluster of 4, running elasticsearch 0.7.1, using the cloud plugin for gateway and discovery. I noticed a lot of these coming through the logs:

[16:22:25,309][WARN ][index.gateway ] [Random][chatter-dev][1] Failed to snapshot (scheduled)
org.elasticsearch.index.gateway.IndexShardGatewaySnapshotFailedException: [chatter-dev][1] Failed to perform snapshot (index files)
at org.elasticsearch.index.gateway.cloud.CloudIndexShardGateway.snapshot(CloudIndexShardGateway.java:218)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.snapshot(IndexShardGatewayService.java:179)
at org.elasticsearch.index.gateway.IndexShardGatewayService$1.snapshot(IndexShardGatewayService.java:175)
at org.elasticsearch.index.engine.robin.RobinEngine.snapshot(RobinEngine.java:348)
at org.elasticsearch.index.shard.service.InternalIndexShard.snapshot(InternalIndexShard.java:377)
at org.elasticsearch.index.gateway.IndexShardGatewayService.snapshot(IndexShardGatewayService.java:175)
at org.elasticsearch.index.gateway.IndexShardGatewayService$SnapshotRunnable.run(IndexShardGatewayService.java:257)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:636)
Caused by: java.lang.IllegalArgumentException: maximum size for put object is 5GB
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.jclouds.aws.s3.binders.BindS3ObjectToPayload.bindToRequest(BindS3ObjectToPayload.java:47)
at org.jclouds.rest.internal.RestAnnotationProcessor.decorateRequest(RestAnnotationProcessor.java:808)
at org.jclouds.rest.internal.RestAnnotationProcessor.createRequest(RestAnnotationProcessor.java:399)
at org.jclouds.rest.internal.AsyncRestClientProxy.createFuture(AsyncRestClientProxy.java:104)
at org.jclouds.rest.internal.AsyncRestClientProxy.invoke(AsyncRestClientProxy.java:86)
at $Proxy79.putObject(Unknown Source)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.jclouds.concurrent.internal.SyncProxy.invoke(SyncProxy.java:121)
at $Proxy80.putObject(Unknown Source)
at org.jclouds.aws.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:234)
at org.elasticsearch.index.gateway.cloud.CloudIndexShardGateway.copyFromDirectory(CloudIndexShardGateway.java:489)
at org.elasticsearch.index.gateway.cloud.CloudIndexShardGateway.access$000(CloudIndexShardGateway.java:73)
at org.elasticsearch.index.gateway.cloud.CloudIndexShardGateway$1.run(CloudIndexShardGateway.java:203)
... 3 more

I was under the impression that ElasticSearch would manage the splitting of files so that they could happily be stored on S3, or am I reading this error wrong?

Andrew
Andrew Harvey / Developer
lexer
m/
t/ +61 2 9019 6379
w/ http://lexer.com.au
Help put an end to whaling. Visit http://www.givewhalesavoice.com.au/

Please consider the environment before printing this email
This email transmission is confidential and intended solely for the person or organisation to whom it is addressed. If you are not the intended recipient, you must not copy, distribute or disseminate the information, or take any action in relation to it, and please delete this e-mail. Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of any organisation or employer. If you have received this message in error, do not open any attachment but please notify the sender (above). This message has been checked for all known viruses powered by McAfee.

For further information visit http://www.mcafee.com/us/threat_center/default.asp
Please rely on your own virus check as no responsibility is taken by the sender for any damage arising out of any virus infection this communication may contain.

This message has been scanned for malware by Websense. www.websense.com

Strange. There is a gateway.cloud.chunk_size setting, which defaults to 4G.
I have tested this and it worked, even with very small chunk sizes. Let me
run a test and see. In the meantime, can you open an issue for this?


Sure. I'm finishing up for the week (it's 5pm on Friday here), but I'll have a look at it over the weekend.

Andrew


Hi Andrew,

Well, it took some time, but I found the problem. It's in the library
elasticsearch uses to do the S3 operations. Basically, the check against the
maximum allowed object size is done at the library level, and it overflows an
int :). This means that with the current version of the library you will get
the exception above for files above 2g, and because elasticsearch uses, by
default, a chunk size of 4g (i.e. files above 4g will be chunked), you hit
this problem.
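To make the failure mode concrete, here is a minimal, hypothetical sketch of the arithmetic (this is not the jclouds code itself, just an illustration of what goes wrong when a byte count above 2g is narrowed to a 32-bit int):

```java
// Sketch only: shows why a long byte count above 2g cannot survive a
// cast to int, which is the class of overflow described above.
public class ChunkOverflow {
    public static void main(String[] args) {
        long fourG = 4L * 1024 * 1024 * 1024;     // default chunk size: 4294967296 bytes
        long twoG = 2L * 1024 * 1024 * 1024;      // 2147483648 bytes, one past Integer.MAX_VALUE
        long oneAndAHalfG = 1536L * 1024 * 1024;  // suggested workaround: 1610612736 bytes

        System.out.println((int) fourG);          // wraps to 0
        System.out.println((int) twoG);           // wraps to -2147483648
        System.out.println((int) oneAndAHalfG);   // 1610612736 -- still representable

        // Any size check performed on the int value (e.g. "under 5GB?")
        // misbehaves for chunks of 2g and above, while 1.5g stays safe.
    }
}
```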

For now, you can simply set gateway.cloud.chunk_size to 1.5g. That should
solve it.
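In configuration terms, the workaround would look something like this (the container name here is a placeholder, not from the thread):

```yaml
# Sketch of the suggested workaround in the node configuration.
gateway:
    type: cloud
    cloud:
        chunk_size: 1.5g   # keep each uploaded blob under the 2g int limit
        container: my-es-gateway
```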

cheers,
shay.banon


Hi,

Just pushed a change (issue #186: "Cloud Plugin: Cloud gateway default chunk
size change to 1g"). Basically, I changed the default chunk size of a blob to
1g, meaning that files above this value will be broken down into chunks of 1g
each. I am still considering what the best value for this is. Since chunks are
uploaded to the cloud in parallel, a smaller chunk size can actually increase
snapshotting speed; a chunk size of 100m, for example, means even faster
snapshotting. For now, it's 1g. The nice thing is that the value can be
changed between runs, even against a live snapshot on the cloud that was
written with a different chunk size.

cheers,
shay.banon


I changed my config to be the following:

gateway:
    type: cloud
    cloud:
        chunk_size: 1.5g
        container: xxx

And I'm still getting these errors. I'd love to try out HEAD, but I just can't afford the time at the moment (redeploying a cluster and changing settings for our application isn't a small task). I'm prepared to throw away the gateway data for 0.7.2, but at some point I need some level of stability in this area.

Andrew

On 22/05/2010, at 3:38 AM, Shay Banon wrote:

Hi Andrew,

Well, took some time, but found the problem. Its in the library elasticsearch uses to do the S3 operations. Basically, the check for the maximum length allowed is done on the library level, and it overflows on int :), which means that with the current version of it, you will get exceptions above for files above 2g, and because elasticsearch uses, by default, a chunk size of 4g (i.e. files above 4g will be chunked), you get this problem.

For now, you can simply set gateway.cloud.chunk_size to 1.5g. This should solve this.

cheers,
shay.banon

On Fri, May 21, 2010 at 10:02 AM, Andrew Harvey <Andrew.Harvey@lexer.com.aumailto:Andrew.Harvey@lexer.com.au> wrote:

Sure. I'm finishing up for the week (it's 5pm on friday here) but I'll have a look at it over the weekend.

Andrew

On 21/05/2010, at 4:59 PM, Shay Banon wrote:

Strange. There is a gateway.cloud.chunk_size setting, which defaults to 4G. I have tested this and it worked, even for very small chunk sizes. Let me run a test and see... . Can you open an issue for this?

On Fri, May 21, 2010 at 9:24 AM, Andrew Harvey <Andrew.Harvey@lexer.com.aumailto:Andrew.Harvey@lexer.com.au> wrote:
I added a new machine to my cluster of 4, running elasticsearch 0.7.1, using the cloud plugin for gateway and discovery. I noticed a lot of these coming through the logs:

[16:22:25,309][WARN ][index.gateway ] [Random][chatter-dev][1] Failed to snapshot (scheduled)
org.elasticsearch.index.gateway.IndexShardGatewaySnapshotFailedException: [chatter-dev][1] Failed to perform snapshot (index files)
at org.elasticsearch.index.gateway.cloud.CloudIndexShardGateway.snapshot(CloudIndexShardGateway.java:218)
[...]
Caused by: java.lang.IllegalArgumentException: maximum size for put object is 5GB
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.jclouds.aws.s3.binders.BindS3ObjectToPayload.bindToRequest(BindS3ObjectToPayload.java:47)
[...]
... 3 more

I was under the impression that ElasticSearch would manage the splitting of files so that they could happily be stored on S3, or am I reading this error wrong?

Andrew

Andrew Harvey / Developer
lexer
t/ +61 2 9019 6379
w/ http://lexer.com.au
Help put an end to whaling. Visit http://www.givewhalesavoice.com.au/


Well, without the help of users to test things, it will be hard to move
forward. It's strange that you still get this error. If you can spare the
time to test master before I release 0.7.2, that would be great; I have
tested the chunking aspect and it works. If not, you can give 0.7.2 a
try once it's released, and if there is still a bug, it will be fixed one
way or another in a subsequent version.

Good luck with your startup, and I am happy that you use elasticsearch to
help you build it.

cheers,
shay.banon


It's not that I'm unwilling to test things; it's just difficult with the operational requirements I'm under. I can give you the best data I can from the releases I'm using, but it's hard to take time out to try out HEAD, considering that these problems tend only to rear their heads under production-like load. I was kind of hoping that I had incorrectly changed my configuration file and that was why I was continuing to get the error.

In short, anything I can do with my currently running cluster, I'd be glad to help out with (I can even restart a node with DEBUG logging or something; I just can't change versions at will). I have flagged setting up a cluster to test master on as something to spend some time on when I get a spare moment, but unfortunately I don't hold out great hope of many spare moments in the near future. I don't wish to come across as ungrateful; ElasticSearch has been a terrific win for the work that I'm doing. But, as you must understand, time and resources are finite, while the things to spend them on are infinite.

Thanks again, looking forward to 0.7.2.

Andrew


Hi Andrew,

No problem. I ran another test, and the value I gave you was wrong
(overflowing int computations is annoying :) ). The maximum value that
you should set, until the next release of the S3 library, is 1g. In any case,
this is the default value that I placed in 0.7.2, since chunks are uploaded in
parallel to the cloud store in 0.7.2.
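
(Editor's note: with that correction, the workaround config would look something like the following sketch; the container name is a placeholder for your own bucket.)

gateway:
    type: cloud
    cloud:
        chunk_size: 1g
        container: your-bucket-name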

cheers,
shay.banon

On Sun, May 23, 2010 at 5:17 PM, Andrew Harvey
Andrew.Harvey@lexer.com.auwrote:

It's not that I'm unwilling to test things, it's just difficult with the
operational requirements I'm under. I can give you the best data I can from
the the releases I'm using, but it's hard to take time out to try out HEAD,
considering that these problems tend to only rear their heads under
production-like load. I was kind of hoping that I had incorrectly changed my
configuration file and that was why I was continuing to get the error.

In short, anything I can do with my currently running cluster, I'd be glad
to help out with (I can even restart a node with DEBUG logging or something,
I just can't change versions at will). I have flagged setting up a cluster
to test master on as something to try and spend some time on when I get a
spare moment, but unfortunately I don't hold out great hope of many spare
moments in the near future. I don't wish to come across as ungrateful,
Elasticsearch has been a terrific win for the work that I'm doing, but as
you must understand, time and resources are finite, but the things to spend
them on are infinite.

Thanks again, looking forward to 0.7.2.

Andrew

On 24/05/2010, at 12:09 AM, Shay Banon wrote:

Well, without the help of users to test things, it will be hard to move
forward ... . Strange that you still get this error. If you can spare the
time and test master before I release 0.7.2, it would be great. I have
tested the chunking aspect and it works. If not, then you can give 0.7.2 a
try once its released, if there is still a bug, then it will be fixed one
way or the other in a subsequent version.

Good luck with your startup, and I am happy that you use elasticsearch to
help you build it.

cheers,
shay.banon

On Sun, May 23, 2010 at 5:03 PM, Andrew Harvey <Andrew.Harvey@lexer.com.au

wrote:

I changed my config to be the following:

gateway:
type: cloud
cloud:
chunk_size: 1.5g
container: xxx

And I'm still getting these errors. I'd love to try out the HEAD, but I
just can't afford the time at the moment (redeploying a cluster and changing
settings for our application isn't a small task) I'm prepared to throw away
the gateway data for 0.7.2, but at some point I need some level of stability
in this area.

Andrew

On 22/05/2010, at 3:38 AM, Shay Banon wrote:

Hi Andrew,

Well, took some time, but found the problem. Its in the library
elasticsearch uses to do the S3 operations. Basically, the check for the
maximum length allowed is done on the library level, and it overflows on int
:), which means that with the current version of it, you will get exceptions
above for files above 2g, and because elasticsearch uses, by default, a
chunk size of 4g (i.e. files above 4g will be chunked), you get this
problem.

For now, you can simply set gateway.cloud.chunk_size to 1.5g. This
should solve this.

cheers,
shay.banon

On Fri, May 21, 2010 at 10:02 AM, Andrew Harvey <
Andrew.Harvey@lexer.com.au> wrote:

Sure. I'm finishing up for the week (it's 5pm on friday here) but I'll
have a look at it over the weekend.

Andrew

On 21/05/2010, at 4:59 PM, Shay Banon wrote:

Strange. There is a gateway.cloud.chunk_size setting, which defaults to
4G. I have tested this and it worked, even for very small chunk sizes. Let
me run a test and see... . Can you open an issue for this?

On Fri, May 21, 2010 at 9:24 AM, Andrew Harvey <
Andrew.Harvey@lexer.com.au> wrote:

I added a new machine to my cluster of 4, running elasticsearch 0.7.1,
using the cloud plugin for gateway and discovery. I noticed a lot of these
coming through the logs:

[16:22:25,309][WARN ][index.gateway ] [Random][chatter-dev][1] Failed to snapshot (scheduled)
org.elasticsearch.index.gateway.IndexShardGatewaySnapshotFailedException: [chatter-dev][1] Failed to perform snapshot (index files)
    at org.elasticsearch.index.gateway.cloud.CloudIndexShardGateway.snapshot(CloudIndexShardGateway.java:218)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.snapshot(IndexShardGatewayService.java:179)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.snapshot(IndexShardGatewayService.java:175)
    at org.elasticsearch.index.engine.robin.RobinEngine.snapshot(RobinEngine.java:348)
    at org.elasticsearch.index.shard.service.InternalIndexShard.snapshot(InternalIndexShard.java:377)
    at org.elasticsearch.index.gateway.IndexShardGatewayService.snapshot(IndexShardGatewayService.java:175)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$SnapshotRunnable.run(IndexShardGatewayService.java:257)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
Caused by: java.lang.IllegalArgumentException: maximum size for put object is 5GB
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
    at org.jclouds.aws.s3.binders.BindS3ObjectToPayload.bindToRequest(BindS3ObjectToPayload.java:47)
    at org.jclouds.rest.internal.RestAnnotationProcessor.decorateRequest(RestAnnotationProcessor.java:808)
    at org.jclouds.rest.internal.RestAnnotationProcessor.createRequest(RestAnnotationProcessor.java:399)
    at org.jclouds.rest.internal.AsyncRestClientProxy.createFuture(AsyncRestClientProxy.java:104)
    at org.jclouds.rest.internal.AsyncRestClientProxy.invoke(AsyncRestClientProxy.java:86)
    at $Proxy79.putObject(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.jclouds.concurrent.internal.SyncProxy.invoke(SyncProxy.java:121)
    at $Proxy80.putObject(Unknown Source)
    at org.jclouds.aws.s3.blobstore.S3BlobStore.putBlob(S3BlobStore.java:234)
    at org.elasticsearch.index.gateway.cloud.CloudIndexShardGateway.copyFromDirectory(CloudIndexShardGateway.java:489)
    at org.elasticsearch.index.gateway.cloud.CloudIndexShardGateway.access$000(CloudIndexShardGateway.java:73)
    at org.elasticsearch.index.gateway.cloud.CloudIndexShardGateway$1.run(CloudIndexShardGateway.java:203)
    ... 3 more

I was under the impression that elasticsearch would manage the splitting
of files so that they could happily be stored on S3, or am I reading this
error wrong?

Andrew
Andrew Harvey / Developer
lexer
m/
t/ +61 2 9019 6379
w/ http://lexer.com.au
Help put an end to whaling. Visit http://www.givewhalesavoice.com.au/


Please consider the environment before printing this email
This email transmission is confidential and intended solely for the
person or organisation to whom it is addressed. If you are not the intended
recipient, you must not copy, distribute or disseminate the information, or
take any action in relation to it and please delete this e-mail. Any views
expressed in this message are those of the individual sender, except where
the sender specifically states them to be the views of any organisation or
employer. If you have received this message in error, do not open any
attachment but please notify the sender (above). This message has been
checked for all known viruses powered by McAfee.

For further information visit
Advanced Research Center | Trellix
Please rely on your own virus check as no responsibility is taken by the
sender for any damage rising out of any virus infection this communication
may contain.

