Full backup of a single node


#1

Topic says it...I have a single node running...I just want to do a full backup to a file before I do an upgrade. I've been poking around the list here, but haven't seen what I'm looking for...I did see the Knapsack plugin, but not sure if that's what I need, and elasticsearch-dump has WAY too many dependencies for Node. Any other options? Thank you.


(Magnus Bäck) #2

Have you considered the snapshot/restore feature?


#3

I have been looking at that....just not sure exactly how to go about doing it (which explains my other post this morning :slight_smile: ) Thanks Magnus.


#4

Well I tried this, and I'm not having any luck. I've added:

path.repo: ["/media/backup/es"]

to /etc/elasticsearch/elasticsearch.yml. I then run this:

curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/media/backup/es/my_backup",
        "compress": true
    }
}'

here's what I get:

{"error":"RepositoryException[[my_backup] failed to create repository]; nested: CreationException[Guice creation errors:\n\n1) Error injecting constructor, org.elasticsearch.common.blobstore.BlobStoreException: Failed to create directory at [/media/backup/es/my_backup]\n  at org.elasticsearch.repositories.fs.FsRepository.<init>(Unknown Source)\n  while locating org.elasticsearch.repositories.fs.FsRepository\n  while locating org.elasticsearch.repositories.Repository\n\n1 error]; nested: BlobStoreException[Failed to create directory at [/media/backup/es/my_backup]]; ","status":500}

and from my elasticsearch.log:

[2015-07-23 19:29:20,335][WARN ][repositories             ] [Dionysus] failed to create repository [my_backup]
org.elasticsearch.repositories.RepositoryException: [my_backup] failed to create repository
        at org.elasticsearch.repositories.RepositoriesService.createRepositoryHolder(RepositoriesService.java:414)
        at org.elasticsearch.repositories.RepositoriesService.registerRepository(RepositoriesService.java:371)
        at org.elasticsearch.repositories.RepositoriesService.access$100(RepositoriesService.java:55)
        at org.elasticsearch.repositories.RepositoriesService$1.execute(RepositoriesService.java:110)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.common.inject.CreationException: Guice creation errors:

1) Error injecting constructor, org.elasticsearch.common.blobstore.BlobStoreException: Failed to create directory at [/media/backup/es/my_backup]
  at org.elasticsearch.repositories.fs.FsRepository.<init>(Unknown Source)
  while locating org.elasticsearch.repositories.fs.FsRepository
  while locating org.elasticsearch.repositories.Repository

1 error
        at org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
        at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:178)
        at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
        at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:131)
        at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:69)
        at org.elasticsearch.repositories.RepositoriesService.createRepositoryHolder(RepositoriesService.java:407)
        ... 9 more
Caused by: org.elasticsearch.common.blobstore.BlobStoreException: Failed to create directory at [/media/backup/es/my_backup]
        at org.elasticsearch.common.blobstore.fs.FsBlobStore.<init>(FsBlobStore.java:49)
        at org.elasticsearch.repositories.fs.FsRepository.<init>(FsRepository.java:88)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)
        at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
        at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
        at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
        at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
        at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
        at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
        at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
        at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
        at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)
        at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
        at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)
        at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
        at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)

I can verify that ES is up and working...in fact I've completely blown out /var/lib/elasticsearch/elasticsearch and started again:

[19:44:48 dev:~$] curl 'http://localhost:9200'
{
  "status" : 200,
  "name" : "Suicide",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.0",
    "build_hash" : "929b9739cae115e73c346cb5f9a6f24ba735a743",
    "build_timestamp" : "2015-07-16T14:31:07Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

Not sure where to go from here. Thank you.


(Magnus Bäck) #5

Does the user that ES runs as have permission to create /media/backup/es/my_backup?
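A quick way to rule that out is a sketch like the following; note that the `elasticsearch` user/group name is an assumption here and may differ depending on how the service was installed:

```shell
# Create the repository directory and hand it to the user ES runs as.
# The user/group name "elasticsearch" is an assumption; verify with:
#   ps -o user= -p "$(pgrep -f org.elasticsearch.bootstrap)"
sudo mkdir -p /media/backup/es/my_backup
sudo chown -R elasticsearch:elasticsearch /media/backup/es
```

ES creates subdirectories under the repository `location` itself, so the user it runs as needs write access to the whole path.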


#6

Argh...I was sure I set that dir to 0777 :disappointed: Thanks for the catch Magnus...here's what I have now:

[21:47:53 :~$] ./testsnapshot
<HTML>
<HEAD><TITLE>Redirection</TITLE></HEAD>
<BODY><H1>Redirect</H1></BODY>

[21:47:55 :~$] cat testsnapshot
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/media/backup/es/my_backup",
        "compress": true
    }
}'

[21:48:07 :~$] curl 'http://localhost:9200/_snapshot/_all'
{}

[21:47:58 :~$] curl 'http://localhost:9200/_snapshot/my_backup'
{"error":"RepositoryMissingException[[my_backup] missing]","status":404}

do I need to create /media/backup/es/my_backup? Thank you.


(Magnus Bäck) #7

Why is your PUT /_snapshot/my_backup returning HTML and (seemingly) a redirection response? ES doesn't behave that way. Do you have a proxy configured or something?
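If a proxy is indeed intercepting the request, curl can be told to bypass it for localhost. This is a sketch; the exact environment variables in play depend on the shell setup:

```shell
# Bypass any configured HTTP proxy for this host only
curl --noproxy localhost -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/media/backup/es/my_backup",
        "compress": true
    }
}'

# Or exempt localhost from proxying for the whole session
export no_proxy=localhost,127.0.0.1
```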


#8

OMG if I get any dumber I'll have an accident :flushed:. Yeah, that was it...running my XPUT I now get:

{"acknowledged":true}

curl 'http://localhost:9200/_snapshot/_all'
{"my_backup":{"type":"fs","settings":{"compress":"true","location":"/media/backup/es/my_backup"}}}

And now when I run:

curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

I get:

{"snapshot":{"snapshot":"snapshot_1","version_id":1070099,"version":"1.7.0","indices":[],"state":"SUCCESS","start_time":"2015-07-24T17:05:14.238Z","start_time_in_millis":1437757514238,"end_time":"2015-07-24T17:05:14.322Z","end_time_in_millis":1437757514322,"duration_in_millis":84,"failures":[],"shards":{"total":0,"failed":0,"successful":0}}}

Which I'll count as a win. This has been in my development setup...so I'm now going to try with my production side. I'll post my results here...thanks again Magnus.
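For reference, what a given snapshot actually captured can be checked after the fact; note that an empty `"indices":[]` list, as in the response above, means no index data was included:

```shell
# List all snapshots in the repository, with their state and
# the indices each one contains
curl 'http://localhost:9200/_snapshot/my_backup/_all'

# Or inspect a single snapshot by name
curl 'http://localhost:9200/_snapshot/my_backup/snapshot_1'
```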


#9

Well this was an interesting exercise. First off, my data directory is /var/lib/elasticsearch. So, here's what I did in my tests:

curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/media/backup/es/my_backup",
        "compress": true
    }
}'

take a snapshot:

curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

at this point I verified the snapshot, stopped the ES service, then deleted /var/lib/elasticsearch/elasticsearch.

I then attempted to restore the snapshot with:

curl -XPOST http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore

this gave me:

{"error":"RepositoryMissingException[[my_backup] missing]","status":404}

yet the dir shows:

[18:29:11 dev:/media/backup/es/my_backup$] ls
index  indices  metadata-snapshot_1  snapshot-snapshot_1

So....long story short, if this is a single node, like mine is, it appears that just archiving /var/lib/elasticsearch is what I will do...I only need to do this to make sure that when I do my upgrade from 1.4.* to 1.7, I won't lose any data. Thanks again for all your help Magnus.


(Magnus Bäck) #10

If you wipe /var/lib/elasticsearch you delete all state kept by your ES node (and with just one node all of the cluster is gone). This includes the knowledge of your repository. You can restore a snapshot from your repository by re-adding it to your recreated cluster.
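Concretely, the recovery described above might look like this sketch, reusing the repository name and path from earlier in the thread; re-registering an `fs` repository over existing snapshot files does not overwrite them, so the old snapshot becomes visible again:

```shell
# 1. Re-register the same filesystem repository on the fresh cluster
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/media/backup/es/my_backup",
        "compress": true
    }
}'

# 2. Confirm the existing snapshot is visible again
curl 'http://localhost:9200/_snapshot/my_backup/_all'

# 3. Restore it
curl -XPOST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'
```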


#11

Ah that's awesome Magnus...I'll use that for future reference...thanks so much :smile:

