Snapshot compress not compressing?

I am playing around with snapshot/restore and have a local 1.3.2 cluster
running on Mac OS X with 894MB of index data.

I have registered a backup repository like so (straight from the docs):

curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/tmp/backups/my_backup",
        "compress": true
    }
}'
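
Reading the repository back should confirm the settings actually registered (this just echoes the config back, so it's only a sanity check; same repository name as above):

curl -XGET 'http://localhost:9200/_snapshot/my_backup'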

Then I run the snapshot (again straight from the docs):

curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"

The snapshot runs fine, but the generated backup directory is 890MB, which
tells me that compression isn't kicking in. When I set compress: false, I
get the same results.

If I tar/gz that directory it gets squashed down to 204MB. I'd expect the
compressed snapshot from ES to be somewhere in that ballpark.
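
Roughly how I'm comparing the sizes, in case anyone wants to reproduce (paths as in the repository registration above):

du -sh /tmp/backups/my_backup
tar -czf /tmp/my_backup.tar.gz -C /tmp/backups my_backup
du -sh /tmp/my_backup.tar.gz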

Am I doing something wrong or is there a bug?
Thanks and Best Regards,
Paul


Good morning,

I experienced the exact same issue on Friday as well.

I have an Elasticsearch cluster (1.3.2) running on Windows using Oracle
Java 1.7.0_67. We needed a backup strategy and purposely upgraded to this
version to take advantage of the snapshot feature.
The indices in the cluster total about 40GB, and even with the
'compress' option explicitly set to true (as in Paul's post and in the
documentation) the snapshot is still about 40GB.

Is there a workaround to get this working, or some other fix?

Thanks, Russell

On Friday, 5 September 2014 22:27:09 UTC+1, ppearcy wrote:


At the moment, compression is applied only to metadata files (index mapping
and settings basically). Data files are not compressed.
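
You can see this directly in an fs repository: the top-level snapshot/metadata files are tiny, while almost all of the space is raw copies of the segment files under indices/. Using Paul's paths (the layout described here is the 1.x one):

ls -lh /tmp/backups/my_backup
du -sh /tmp/backups/my_backup/indices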

On Monday, September 8, 2014 5:22:09 AM UTC-4, Russell Seymour wrote:


Hehe, good to know. I submitted a PR to clarify the documentation:

The "at the moment" leads me to believe this is planned or in the pipeline,
looking forward to it.

Best Regards,
Paul

On Monday, September 8, 2014 2:00:30 PM UTC-4, Igor Motov wrote:
