Copy index

Hi

I can see there are lots of utilities to copy the contents of an index such as
elasticdump
reindexer
stream2es
etc

And they mostly use scan/scroll.

Is there a single curl command to copy an index to a new index?

Without too much investigation, it looks like scan/scroll requires repeated calls?

Can you please confirm?

If this is the case what is the simplest supported utility?

Alternatively, is there a plugin with a front end to choose the from and to indices?

Thanks in advance

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/1caeebf5-44de-4eba-ad5a-c702461bf3d2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Have you tried taking a snapshot
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html#_snapshot
and restoring
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-snapshots.html#_restore
the index to a new name (see rename_pattern)?
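A minimal sketch of that snapshot-and-restore rename, assuming a filesystem repository named "my_backup" is already registered and a 1.x cluster on localhost:9200; the repository, snapshot, and index names here are made-up examples:

```shell
# Sketch only: assumes repository "my_backup" already exists and a
# cluster on localhost:9200 (ES 1.x API). The curls are no-ops without
# a running cluster, hence the "|| true".
curl -XPUT 'localhost:9200/_snapshot/my_backup/snap1?wait_for_completion=true' \
  -d '{"indices": "test"}' || true

# Restore under a new name via rename_pattern / rename_replacement.
RESTORE_BODY='{
  "indices": "test",
  "rename_pattern": "test",
  "rename_replacement": "test_copy"
}'
curl -XPOST 'localhost:9200/_snapshot/my_backup/snap1/_restore' \
  -d "$RESTORE_BODY" || true
```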

On Thursday, October 16, 2014 12:55:02 PM UTC-5, eune...@gmail.com wrote:


I should have mentioned:

The point is to copy the data only,

and then to change the mappings.

Snapshot is no use, sorry, because that brings the mappings along.


You can use the knapsack plugin to export/import data and change mappings
(and much more!)

https://github.com/jprante/elasticsearch-knapsack

For a 1:1 online copy, just one curl command is necessary, yes.
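For illustration, that one-liner might look like this (the `_push` syntax follows the commands shown later in this thread; the index names are examples, and `-g` keeps curl from glob-expanding the braces in the URL):

```shell
# One curl call copies index "test" to "testcopy" via the knapsack
# _push endpoint. Requires the plugin and a running cluster, so the
# call is a no-op here ("|| true").
URL='localhost:9200/test/_push?map={"test":"testcopy"}'
curl -g -XPOST "$URL" || true
```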

Jörg

On Thu, Oct 16, 2014 at 7:55 PM, eunever32@gmail.com wrote:


Jörg,

That is exactly the kind of thing I'm looking for.

I'm having a little bit of difficulty getting it to do what I want.

I want to "push" an index to another index and change the mapping.

I can import / export okay but the push is having difficulty picking up the
new mappings.

The syntax for push seems to be to specify the name of the mapping file,
which in my case is /tmp/testpu_doc_mapping.json

and this contains:
{
   "doc": {
      "_timestamp": {
         "enabled": true,
         "store": true,
         "path": "date"
      },
      "properties": {
         "date": {
            "type": "date",
            "format": "dateOptionalTime"
         },
         "sentence": {
            "type": "string",
            "index": "not_analyzed"
         },
         "value": {
            "type": "long"
         }
      }
   }
}

Note I want sentence to be not_analyzed.
Maybe the syntax of the above file is not correct?
I tried other variations.
And when it says "adding mapping: default", that's probably not a good sign?

I then issue the command:

curl -XPOST
'localhost:9200/test/_push?map={"test":"testpu"}&{"test_doc_mapping":"/tmp/testpu_doc_mapping.json"}'
But this is clearly wrong
Server shows:

[2014-10-19 01:10:34,216][INFO ][BaseTransportClient ] creating
transport client, java version 1.7.0_40, effective
settings {host=localhost, port=9300, cluster.name=elasticsearch,
timeout=30s, client.transport.sniff=true, client.transp
ort.ping_timeout=30s, client.transport.ignore_cluster_name=true,
path.plugins=.dontexist}
[2014-10-19 01:10:34,218][INFO ][plugins ] [Left Hand]
loaded [], sites []
[2014-10-19 01:10:34,238][INFO ][BaseTransportClient ] transport
client settings = {host=localhost, port=9300, clus
ter.name=elasticsearch, timeout=30s, client.transport.sniff=true,
client.transport.ping_timeout=30s, client.transport.ig
nore_cluster_name=true, path.plugins=.dontexist,
path.home=C:\elasticsearch-1.3.4, name=Left Hand, path.logs=C:/elastics
earch-1.3.4/logs, network.server=false, node.client=true}
[2014-10-19 01:10:34,239][INFO ][BaseTransportClient ] adding custom
address for transport client: inet[localhost/1
27.0.0.1:9300]
[2014-10-19 01:10:34,246][INFO ][BaseTransportClient ] configured
addresses to connect = [inet[localhost/127.0.0.1:
9300]], waiting for 30 seconds to connect ...
[2014-10-19 01:11:04,247][INFO ][BaseTransportClient ] connected
nodes = [[Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity][ine
t[/192.168.43.250:9300]],
[#transport#-1][zippity][inet[localhost/127.0.0.1:9300]]]
[2014-10-19 01:11:04,247][INFO ][BaseTransportClient ] new connection
to [Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity][inet
[/192.168.43.250:9300]]
[2014-10-19 01:11:04,248][INFO ][BaseTransportClient ] new connection
to [#transport#-1][zippity][inet[localhost/127.0
.0.1:9300]]
[2014-10-19 01:11:04,248][INFO ][BaseTransportClient ] trying to
discover more nodes...
[2014-10-19 01:11:04,254][INFO ][BaseTransportClient ] adding
discovered node [Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity]
[inet[/192.168.43.250:9300]]
[2014-10-19 01:11:04,258][INFO ][BaseTransportClient ] ... discovery
done
[2014-10-19 01:11:04,259][INFO ][KnapsackService ] add:
plugin.knapsack.export.state -> []
[2014-10-19 01:11:04,259][INFO ][KnapsackPushAction ] start of push:
{"mode":"push","started":"2014-10-19T00:11:04
.259Z","node_name":"Logan"}
[2014-10-19 01:11:04,259][INFO ][KnapsackService ] update cluster
settings: plugin.knapsack.export.state -> [{"
mode":"push","started":"2014-10-19T00:11:04.259Z","node_name":"Logan"}]
[2014-10-19 01:11:04,259][INFO ][KnapsackPushAction ]
map={test=testpu}
[2014-10-19 01:11:04,260][INFO ][KnapsackPushAction ] getting
settings for indices [test]
[2014-10-19 01:11:04,261][INFO ][KnapsackPushAction ] found indices:
[test]
[2014-10-19 01:11:04,261][INFO ][KnapsackPushAction ] getting
mappings for index test and types []
[2014-10-19 01:11:04,262][INFO ][KnapsackPushAction ] found
mappings: [default, doc]
[2014-10-19 01:11:04,263][INFO ][KnapsackPushAction ] adding
mapping: default
[2014-10-19 01:11:04,263][INFO ][KnapsackPushAction ] adding
mapping: doc
[2014-10-19 01:11:04,263][INFO ][KnapsackPushAction ] creating
index: testpu
[2014-10-19 01:11:04,296][INFO ][cluster.metadata ] [Logan]
[testpu] creating index, cause [api], shards [5]/[1]
, mappings [default, doc]
[2014-10-19 01:11:04,374][INFO ][KnapsackPushAction ] index created:
testpu
[2014-10-19 01:11:04,374][INFO ][KnapsackPushAction ] getting
aliases for index test
[2014-10-19 01:11:04,374][INFO ][KnapsackPushAction ] found 0 aliases
[2014-10-19 01:11:04,375][INFO ][BulkTransportClient ] flushing bulk
processor
[2014-10-19 01:11:04,376][INFO ][BulkTransportClient ] before bulk
[1] [actions=3] [bytes=404] [concurrent requests
=0]
[2014-10-19 01:11:04,418][INFO ][BulkTransportClient ] after bulk [1]
[succeeded=3] [failed=0] [41ms] [concurrent r
equests=0]
[2014-10-19 01:11:04,419][INFO ][BulkTransportClient ] closing bulk
processor...
[2014-10-19 01:11:04,419][INFO ][BulkTransportClient ] shutting
down...
[2014-10-19 01:11:04,427][INFO ][BulkTransportClient ] shutting down
completed
[2014-10-19 01:11:04,427][INFO ][KnapsackPushAction ] end of push:
{"mode":"push","started":"2014-10-19T00:11:04.2
59Z","node_name":"Logan"}, count = 3
[2014-10-19 01:11:04,428][INFO ][KnapsackService ] remove:
plugin.knapsack.export.state -> [{"mode":"push","sta
rted":"2014-10-19T00:11:04.259Z","node_name":"Logan"}]
[2014-10-19 01:11:04,428][INFO ][KnapsackService ] update cluster
settings: plugin.knapsack.export.state -> []

But it seems that no matter what value I put for test_doc_mapping, it
doesn't find the mapping file?

It creates testpu with the same mappings as test and copies the test data
into the testpu index.

If I try this command:
curl -XPOST
'localhost:9200/test/_push?map={"test":"testpu"}&test_doc_mapping=/tmp/testpu_doc_mapping.json'

curl doesn't like it:
{"error":"JsonParseException[Unexpected character ('"' (code 34)): was
expecting either '*' or '/' for a comment\n at [Source:
/"test":"testpu"/; line: 1,

I finally decided to do this:
curl -XPOST 'localhost:9200/test/_push?map={"test":"testpu"}' -d
'test_doc_mapping=/tmp/testpu_doc_mapping.json'

which might have been better but the server said:

[2014-10-19 01:52:08,241][INFO ][BaseTransportClient ] connected
nodes = [[Logan][-4NzM7wxQ6S8IEK-aOST1Q][Paul][in
t[/192.168.43.250:9300]],
[#transport#-1][Paul][inet[localhost/127.0.0.1:9300]]]
[2014-10-19 01:52:08,241][INFO ][BaseTransportClient ] new connection
to [Logan][-4NzM7wxQ6S8IEK-aOST1Q][Paul][ine
[/192.168.43.250:9300]]
[2014-10-19 01:52:08,242][INFO ][BaseTransportClient ] new connection
to [#transport#-1][Paul][inet[localhost/127.
.0.1:9300]]
[2014-10-19 01:52:08,242][INFO ][BaseTransportClient ] trying to
discover more nodes...
[2014-10-19 01:52:08,247][INFO ][BaseTransportClient ] adding
discovered node [Logan][-4NzM7wxQ6S8IEK-aOST1Q][Paul
[inet[/192.168.43.250:9300]]
[2014-10-19 01:52:08,251][INFO ][BaseTransportClient ] ... discovery
done
[2014-10-19 01:52:08,252][INFO ][KnapsackService ] add:
plugin.knapsack.export.state -> []
[2014-10-19 01:52:08,252][INFO ][KnapsackPushAction ] start of push:
{"mode":"push","started":"2014-10-19T00:52:0
.252Z","node_name":"Logan"}
[2014-10-19 01:52:08,253][INFO ][KnapsackService ] update cluster
settings: plugin.knapsack.export.state -> [{
mode":"push","started":"2014-10-19T00:52:08.252Z","node_name":"Logan"}]
[2014-10-19 01:52:08,253][INFO ][KnapsackPushAction ]
map={test=testpu}
[2014-10-19 01:52:08,254][INFO ][KnapsackPushAction ] getting
settings for indices [test]
[2014-10-19 01:52:08,255][INFO ][KnapsackPushAction ] found indices:
[test]
[2014-10-19 01:52:08,255][INFO ][KnapsackPushAction ] getting
mappings for index test and types []
[2014-10-19 01:52:08,256][INFO ][KnapsackPushAction ] found
mappings: [default, doc]
[2014-10-19 01:52:08,256][INFO ][KnapsackPushAction ] adding
mapping: default
[2014-10-19 01:52:08,257][INFO ][KnapsackPushAction ] adding
mapping: doc
[2014-10-19 01:52:08,257][INFO ][KnapsackPushAction ] creating
index: testpu
[2014-10-19 01:52:08,286][INFO ][cluster.metadata ] [Logan]
[testpu] creating index, cause [api], shards [5]/[1
, mappings [default, doc]
[2014-10-19 01:52:08,362][INFO ][KnapsackPushAction ] index created:
testpu
[2014-10-19 01:52:08,362][INFO ][KnapsackPushAction ] getting
aliases for index test
[2014-10-19 01:52:08,362][INFO ][KnapsackPushAction ] found 0 aliases
[2014-10-19 01:52:08,363][DEBUG][action.search.type ] [Logan]
[test][2], node[-4NzM7wxQ6S8IEK-aOST1Q], [P], s[STA
TED]: Failed to execute
[org.elasticsearch.action.search.SearchRequest@2a6319a4] lastShard [true]
org.elasticsearch.search.SearchParseException: [test][2]:
from[-1],size[-1]: Parse Failure [Failed to parse source [_na
]]
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:660)
at
org.elasticsearch.search.SearchService.createContext(SearchService.java:516)
at
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
at
org.elasticsearch.search.SearchService.executeScan(SearchService.java:207)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:444)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:441)
at
org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.ElasticsearchParseException: Failed to derive
xcontent from org.elasticsearch.common.bytes
ChannelBufferBytesReference@83ae6e1b
at
org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:259)
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:630)
... 9 more
[2014-10-19 01:52:08,363][DEBUG][action.search.type ] [Logan]
[test][4], node[-4NzM7wxQ6S8IEK-aOST1Q], [P], s[STA
TED]: Failed to execute
[org.elasticsearch.action.search.SearchRequest@2a6319a4] lastShard [true]
org.elasticsearch.search.SearchParseException: [test][4]:
from[-1],size[-1]: Parse Failure [Failed to parse source [_na
]]
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:660)
at
org.elasticsearch.search.SearchService.createContext(SearchService.java:516)
at
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
at
org.elasticsearch.search.SearchService.executeScan(SearchService.java:207)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:444)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:441)
at
org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.ElasticsearchParseException: Failed to derive
xcontent from org.elasticsearch.common.bytes
ChannelBufferBytesReference@83ae6e1b
at
org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:259)
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:630)
... 9 more
[2014-10-19 01:52:08,363][DEBUG][action.search.type ] [Logan]
[test][3], node[-4NzM7wxQ6S8IEK-aOST1Q], [P], s[STA
TED]: Failed to execute
[org.elasticsearch.action.search.SearchRequest@2a6319a4] lastShard [true]
org.elasticsearch.search.SearchParseException: [test][3]:
from[-1],size[-1]: Parse Failure [Failed to parse source [_na
]]
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:660)
at
org.elasticsearch.search.SearchService.createContext(SearchService.java:516)
at
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
at
org.elasticsearch.search.SearchService.executeScan(SearchService.java:207)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:444)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:441)
at
org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.ElasticsearchParseException: Failed to derive
xcontent from org.elasticsearch.common.bytes
ChannelBufferBytesReference@83ae6e1b
at
org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:259)
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:630)
... 9 more
[2014-10-19 01:52:08,364][DEBUG][action.search.type ] [Logan]
[test][0], node[-4NzM7wxQ6S8IEK-aOST1Q], [P], s[STA
TED]: Failed to execute
[org.elasticsearch.action.search.SearchRequest@2a6319a4]
org.elasticsearch.search.SearchParseException: [test][0]:
from[-1],size[-1]: Parse Failure [Failed to parse source [na
]]
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:660)
at
org.elasticsearch.search.SearchService.createContext(SearchService.java:516)
at
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
at
org.elasticsearch.search.SearchService.executeScan(SearchService.java:207)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:444)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:441)
at
org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.ElasticsearchParseException: Failed to derive
xcontent from org.elasticsearch.common.bytes
ChannelBufferBytesReference@83ae6e1b
at
org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:259)
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:630)
... 9 more
[2014-10-19 01:52:08,364][DEBUG][action.search.type ] [Logan]
[test][1], node[-4NzM7wxQ6S8IEK-aOST1Q], [P], s[STA
TED]: Failed to execute
[org.elasticsearch.action.search.SearchRequest@2a6319a4] lastShard [true]
org.elasticsearch.search.SearchParseException: [test][1]:
from[-1],size[-1]: Parse Failure [Failed to parse source [na
]]
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:660)
at
org.elasticsearch.search.SearchService.createContext(SearchService.java:516)
at
org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
at
org.elasticsearch.search.SearchService.executeScan(SearchService.java:207)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:444)
at
org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:441)
at
org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.ElasticsearchParseException: Failed to derive
xcontent from org.elasticsearch.common.bytes
ChannelBufferBytesReference@83ae6e1b
at
org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:259)
at
org.elasticsearch.search.SearchService.parseSource(SearchService.java:630)
... 9 more
[2014-10-19 01:52:08,376][DEBUG][action.search.type ] [Logan] All
shards failed for phase: [init_scan]
[2014-10-19 01:52:08,385][ERROR][KnapsackPushAction ] Failed to
execute phase [init_scan], all shards failed; sha
dFailures {[-4NzM7wxQ6S8IEK-aOST1Q][test][0]:
SearchParseException[[test][0]: from[-1],size[-1]: Parse Failure [Failed
o parse source [na]]]; nested: ElasticsearchParseException[Failed to
derive xcontent from org.elasticsearch.common.by
es.ChannelBufferBytesReference@83ae6e1b];
}{[-4NzM7wxQ6S8IEK-aOST1Q][test][1]: SearchParseException[[test][1]:
from[-1]
size[-1]: Parse Failure [Failed to parse source [na]]]; nested:
ElasticsearchParseException[Failed to derive xcontent
from org.elasticsearch.common.bytes.ChannelBufferBytesReference@83ae6e1b];
}{[-4NzM7wxQ6S8IEK-aOST1Q][test][2]: SearchP
rseException[[test][2]: from[-1],size[-1]: Parse Failure [Failed to parse
source [na]]]; nested: ElasticsearchParseEx
eption[Failed to derive xcontent from
org.elasticsearch.common.bytes.ChannelBufferBytesReference@83ae6e1b];
}{[-4NzM7wx
6S8IEK-aOST1Q][test][3]: SearchParseException[[test][3]:
from[-1],size[-1]: Parse Failure [Failed to parse source [na
]]; nested: ElasticsearchParseException[Failed to derive xcontent from
org.elasticsearch.common.bytes.ChannelBufferByte
Reference@83ae6e1b]; }{[-4NzM7wxQ6S8IEK-aOST1Q][test][4]:
SearchParseException[[test][4]: from[-1],size[-1]: Parse Fail
re [Failed to parse source [na]]]; nested:
ElasticsearchParseException[Failed to derive xcontent from org.elasticsear
h.common.bytes.ChannelBufferBytesReference@83ae6e1b]; }
org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to
execute phase [init_scan], all shards failed;
hardFailures {[-4NzM7wxQ6S8IEK-aOST1Q][test][0]:
SearchParseException[[test][0]: from[-1],size[-1]: Parse Failure [Fail
d to parse source [na]]]; nested: ElasticsearchParseException[Failed to
derive xcontent from org.elasticsearch.common
bytes.ChannelBufferBytesReference@83ae6e1b];
}{[-4NzM7wxQ6S8IEK-aOST1Q][test][1]: SearchParseException[[test][1]: from[
1],size[-1]: Parse Failure [Failed to parse source [na]]]; nested:
ElasticsearchParseException[Failed to derive xcont
nt from
org.elasticsearch.common.bytes.ChannelBufferBytesReference@83ae6e1b];
}{[-4NzM7wxQ6S8IEK-aOST1Q][test][2]: Sear
hParseException[[test][2]: from[-1],size[-1]: Parse Failure [Failed to
parse source [na]]]; nested: ElasticsearchPars
Exception[Failed to derive xcontent from
org.elasticsearch.common.bytes.ChannelBufferBytesReference@83ae6e1b];
}{[-4NzM
wxQ6S8IEK-aOST1Q][test][3]: SearchParseException[[test][3]:
from[-1],size[-1]: Parse Failure [Failed to parse source [

a
]]]; nested: ElasticsearchParseException[Failed to derive xcontent from
org.elasticsearch.common.bytes.ChannelBufferB
tesReference@83ae6e1b]; }{[-4NzM7wxQ6S8IEK-aOST1Q][test][4]:
SearchParseException[[test][4]: from[-1],size[-1]: Parse F
ilure [Failed to parse source [na]]]; nested:
ElasticsearchParseException[Failed to derive xcontent from org.elastics
arch.common.bytes.ChannelBufferBytesReference@83ae6e1b]; }
at
org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportS
archTypeAction.java:233)
at
org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTy
eAction.java:179)
at
org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:523)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
[2014-10-19 01:52:08,396][INFO ][KnapsackService ] remove:
plugin.knapsack.export.state -> [{"mode":"push","st
rted":"2014-10-19T00:52:08.252Z","node_name":"Logan"}]
[2014-10-19 01:52:08,399][INFO ][KnapsackService ] update cluster
settings: plugin.knapsack.export.state -> []

Any pointers please.

Secondly, I had difficulty getting the above command to work in "sense". I
would find it easier if I could use "sense" to issue commands.

Thirdly, while this is what I want: is there a more full-featured,
operations-ready, GUI-based tool with the same functionality?

I appreciate your help.

Regards.
On Friday, October 17, 2014 4:10:11 PM UTC+1, Jörg Prante wrote:


A better idea than fiddling with the mapping in the HTTP GET/POST parameters
is to pre-create an empty target index the way you want it, and after that,
push the docs with a knapsack command using the "map" parameter.

I have also had the idea of redesigning the knapsack arguments from GET/POST
parameter names to structured POST request bodies in JSON, so sense would
be helpful for the JSON editing. Since knapsack is not part of standard ES, I
doubt there will be syntax-check assistance.
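That workflow might look like the following sketch, reusing the index and mapping names from earlier in the thread (it needs the plugin and a running cluster, so the curl calls are no-ops here):

```shell
# 1. Pre-create the empty target index "testpu" with the desired
#    mapping, including the not_analyzed "sentence" field.
cat > /tmp/testpu_create.json <<'EOF'
{
  "mappings": {
    "doc": {
      "_timestamp": { "enabled": true, "store": true, "path": "date" },
      "properties": {
        "date":     { "type": "date", "format": "dateOptionalTime" },
        "sentence": { "type": "string", "index": "not_analyzed" },
        "value":    { "type": "long" }
      }
    }
  }
}
EOF
curl -XPUT 'localhost:9200/testpu' -d @/tmp/testpu_create.json || true

# 2. Push only the documents across (-g stops curl's brace globbing).
curl -g -XPOST 'localhost:9200/test/_push?map={"test":"testpu"}' || true
```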

Jörg

On Sun, Oct 19, 2014 at 2:55 AM, eunever32@gmail.com wrote:

Jorg,

That is exactly the kind of thing I'm looking for.

I'm having a little bit of difficulty getting it to do what I want.

I want to "push" an index to another index and change the mapping.

I can import / export okay but the push is having difficulty picking up
the new mappings.

The syntax for push seems to be to specify the name of the mapping file
which in may case is in /tmp/testpu_doc_mapping.json

and this contains:
{
"doc": {
"_timestamp": {
"enabled": true,
"store": true,
"path": "date"
},
"properties": {
"date": {
"type": "date",
"format": "dateOptionalTime"
},
"sentence": {
"type": "string",

  •              "index": "not_analyzed"*
             },
             "value": {
                "type": "long"
             }
          }
       }
    

}

Note I want sentence to be not_analyzed
Maybe syntax of above file is not correct?
I tried other variations.
And when it says add mapping _default : that's probably not a good sign?

I then issue command:

curl -XPOST
'localhost:9200/test/_push?map={"test":"testpu"}&{"test_doc_mapping":"/tmp/testpu_doc_mapping.json"}'
But this is clearly wrong
Server shows:

[2014-10-19 01:10:34,216][INFO ][BaseTransportClient ] creating
transport client, java version 1.7.0_40, effective
settings {host=localhost, port=9300, cluster.name=elasticsearch,
timeout=30s, client.transport.sniff=true, client.transp
ort.ping_timeout=30s, client.transport.ignore_cluster_name=true,
path.plugins=.dontexist}
[2014-10-19 01:10:34,218][INFO ][plugins ] [Left Hand]
loaded [], sites []
[2014-10-19 01:10:34,238][INFO ][BaseTransportClient ] transport
client settings = {host=localhost, port=9300, clus
ter.name=elasticsearch, timeout=30s, client.transport.sniff=true,
client.transport.ping_timeout=30s, client.transport.ig
nore_cluster_name=true, path.plugins=.dontexist,
path.home=C:\elasticsearch-1.3.4, name=Left Hand, path.logs=C:/elastics
earch-1.3.4/logs, network.server=false, node.client=true}
[2014-10-19 01:10:34,239][INFO ][BaseTransportClient ] adding custom
address for transport client: inet[localhost/1
27.0.0.1:9300]
[2014-10-19 01:10:34,246][INFO ][BaseTransportClient ] configured
addresses to connect = [inet[localhost/127.0.0.1:
9300]], waiting for 30 seconds to connect ...
[2014-10-19 01:11:04,247][INFO ][BaseTransportClient ] connected
nodes = [[Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity][ine
t[/192.168.43.250:9300]],
[#transport#-1][zippity][inet[localhost/127.0.0.1:9300]]]
[2014-10-19 01:11:04,247][INFO ][BaseTransportClient ] new
connection to [Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity][inet
[/192.168.43.250:9300]]
[2014-10-19 01:11:04,248][INFO ][BaseTransportClient ] new
connection to [#transport#-1][zippity][inet[localhost/127.0
.0.1:9300]]
[2014-10-19 01:11:04,248][INFO ][BaseTransportClient ] trying to
discover more nodes...
[2014-10-19 01:11:04,254][INFO ][BaseTransportClient ] adding
discovered node [Logan][-4NzM7wxQ6S8IEK-aOST1Q][zippity]
[inet[/192.168.43.250:9300]]
[2014-10-19 01:11:04,258][INFO ][BaseTransportClient ] ... discovery
done
[2014-10-19 01:11:04,259][INFO ][KnapsackService ] add:
plugin.knapsack.export.state -> []
[2014-10-19 01:11:04,259][INFO ][KnapsackPushAction ] start of push: {"mode":"push","started":"2014-10-19T00:11:04.259Z","node_name":"Logan"}
[2014-10-19 01:11:04,259][INFO ][KnapsackService ] update cluster settings: plugin.knapsack.export.state -> [{"mode":"push","started":"2014-10-19T00:11:04.259Z","node_name":"Logan"}]
[2014-10-19 01:11:04,259][INFO ][KnapsackPushAction ] map={test=testpu}
[2014-10-19 01:11:04,260][INFO ][KnapsackPushAction ] getting settings for indices [test]
[2014-10-19 01:11:04,261][INFO ][KnapsackPushAction ] found indices: [test]
[2014-10-19 01:11:04,261][INFO ][KnapsackPushAction ] getting mappings for index test and types []
[2014-10-19 01:11:04,262][INFO ][KnapsackPushAction ] found mappings: [default, doc]
[2014-10-19 01:11:04,263][INFO ][KnapsackPushAction ] adding mapping: default
[2014-10-19 01:11:04,263][INFO ][KnapsackPushAction ] adding mapping: doc
[2014-10-19 01:11:04,263][INFO ][KnapsackPushAction ] creating index: testpu
[2014-10-19 01:11:04,296][INFO ][cluster.metadata ] [Logan] [testpu] creating index, cause [api], shards [5]/[1], mappings [default, doc]
[2014-10-19 01:11:04,374][INFO ][KnapsackPushAction ] index created: testpu
[2014-10-19 01:11:04,374][INFO ][KnapsackPushAction ] getting aliases for index test
[2014-10-19 01:11:04,374][INFO ][KnapsackPushAction ] found 0 aliases
[2014-10-19 01:11:04,375][INFO ][BulkTransportClient ] flushing bulk processor
[2014-10-19 01:11:04,376][INFO ][BulkTransportClient ] before bulk [1] [actions=3] [bytes=404] [concurrent requests=0]
[2014-10-19 01:11:04,418][INFO ][BulkTransportClient ] after bulk [1] [succeeded=3] [failed=0] [41ms] [concurrent requests=0]
[2014-10-19 01:11:04,419][INFO ][BulkTransportClient ] closing bulk processor...
[2014-10-19 01:11:04,419][INFO ][BulkTransportClient ] shutting down...
[2014-10-19 01:11:04,427][INFO ][BulkTransportClient ] shutting down completed
[2014-10-19 01:11:04,427][INFO ][KnapsackPushAction ] end of push: {"mode":"push","started":"2014-10-19T00:11:04.259Z","node_name":"Logan"}, count = 3
[2014-10-19 01:11:04,428][INFO ][KnapsackService ] remove: plugin.knapsack.export.state -> [{"mode":"push","started":"2014-10-19T00:11:04.259Z","node_name":"Logan"}]
[2014-10-19 01:11:04,428][INFO ][KnapsackService ] update cluster settings: plugin.knapsack.export.state -> []

But it seems that no matter what value I put for test_doc_mapping, it doesn't
find the mapping file.

It creates testpu with the same mappings as test and copies the test data into
the testpu index.

If I try this command:
curl -XPOST
'localhost:9200/test/_push?map={"test":"testpu"}&test_doc_mapping=/tmp/testpu_doc_mapping.json'

curl doesn't like it:
{"error":"JsonParseException[Unexpected character ('"' (code 34)): was expecting either '*' or '/' for a comment\n at [Source: /"test":"testpu"/; line: 1,
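(Presumably the braces and quotes in the query string need to be percent-encoded before they go into the URL. A sketch of what I could try, assuming the _push endpoint accepts a percent-encoded map value:)

```shell
# Percent-encode the JSON value of the map parameter before placing it in
# the URL (assumption: the knapsack _push endpoint accepts a
# percent-encoded value).
MAP='{"test":"testpu"}'
ENCODED=$(printf '%s' "$MAP" | sed -e 's/{/%7B/g' -e 's/}/%7D/g' -e 's/"/%22/g' -e 's/:/%3A/g')
echo "$ENCODED"   # %7B%22test%22%3A%22testpu%22%7D
# Then, against a live cluster:
# curl -XPOST "localhost:9200/test/_push?map=$ENCODED"
```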

I finally decided to do this:
curl -XPOST 'localhost:9200/test/_push?map={"test":"testpu"}' -d
'test_doc_mapping=/tmp/testpu_doc_mapping.json'

which might have been better, but the server said:

[2014-10-19 01:52:08,241][INFO ][BaseTransportClient ] connected nodes = [[Logan][-4NzM7wxQ6S8IEK-aOST1Q][Paul][inet[/192.168.43.250:9300]], [#transport#-1][Paul][inet[localhost/127.0.0.1:9300]]]
[2014-10-19 01:52:08,241][INFO ][BaseTransportClient ] new connection to [Logan][-4NzM7wxQ6S8IEK-aOST1Q][Paul][inet[/192.168.43.250:9300]]
[2014-10-19 01:52:08,242][INFO ][BaseTransportClient ] new connection to [#transport#-1][Paul][inet[localhost/127.0.0.1:9300]]
[2014-10-19 01:52:08,242][INFO ][BaseTransportClient ] trying to discover more nodes...
[2014-10-19 01:52:08,247][INFO ][BaseTransportClient ] adding discovered node [Logan][-4NzM7wxQ6S8IEK-aOST1Q][Paul][inet[/192.168.43.250:9300]]
[2014-10-19 01:52:08,251][INFO ][BaseTransportClient ] ... discovery done
[2014-10-19 01:52:08,252][INFO ][KnapsackService ] add: plugin.knapsack.export.state -> []
[2014-10-19 01:52:08,252][INFO ][KnapsackPushAction ] start of push: {"mode":"push","started":"2014-10-19T00:52:08.252Z","node_name":"Logan"}
[2014-10-19 01:52:08,253][INFO ][KnapsackService ] update cluster settings: plugin.knapsack.export.state -> [{"mode":"push","started":"2014-10-19T00:52:08.252Z","node_name":"Logan"}]
[2014-10-19 01:52:08,253][INFO ][KnapsackPushAction ] map={test=testpu}
[2014-10-19 01:52:08,254][INFO ][KnapsackPushAction ] getting settings for indices [test]
[2014-10-19 01:52:08,255][INFO ][KnapsackPushAction ] found indices: [test]
[2014-10-19 01:52:08,255][INFO ][KnapsackPushAction ] getting mappings for index test and types []
[2014-10-19 01:52:08,256][INFO ][KnapsackPushAction ] found mappings: [default, doc]
[2014-10-19 01:52:08,256][INFO ][KnapsackPushAction ] adding mapping: default
[2014-10-19 01:52:08,257][INFO ][KnapsackPushAction ] adding mapping: doc
[2014-10-19 01:52:08,257][INFO ][KnapsackPushAction ] creating index: testpu
[2014-10-19 01:52:08,286][INFO ][cluster.metadata ] [Logan] [testpu] creating index, cause [api], shards [5]/[1], mappings [default, doc]
[2014-10-19 01:52:08,362][INFO ][KnapsackPushAction ] index created: testpu
[2014-10-19 01:52:08,362][INFO ][KnapsackPushAction ] getting aliases for index test
[2014-10-19 01:52:08,362][INFO ][KnapsackPushAction ] found 0 aliases
[2014-10-19 01:52:08,363][DEBUG][action.search.type ] [Logan] [test][2], node[-4NzM7wxQ6S8IEK-aOST1Q], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@2a6319a4] lastShard [true]
org.elasticsearch.search.SearchParseException: [test][2]: from[-1],size[-1]: Parse Failure [Failed to parse source [_na]]
    at org.elasticsearch.search.SearchService.parseSource(SearchService.java:660)
    at org.elasticsearch.search.SearchService.createContext(SearchService.java:516)
    at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:488)
    at org.elasticsearch.search.SearchService.executeScan(SearchService.java:207)
    at org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:444)
    at org.elasticsearch.search.action.SearchServiceTransportAction$19.call(SearchServiceTransportAction.java:441)
    at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:517)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
Caused by: org.elasticsearch.ElasticsearchParseException: Failed to derive xcontent from org.elasticsearch.common.bytes.ChannelBufferBytesReference@83ae6e1b
    at org.elasticsearch.common.xcontent.XContentFactory.xContent(XContentFactory.java:259)
    at org.elasticsearch.search.SearchService.parseSource(SearchService.java:630)
    ... 9 more
[the same SearchParseException and stack trace are then logged for shards [test][4], [test][3], [test][0] and [test][1]]
[2014-10-19 01:52:08,376][DEBUG][action.search.type ] [Logan] All shards failed for phase: [init_scan]
[2014-10-19 01:52:08,385][ERROR][KnapsackPushAction ] Failed to execute phase [init_scan], all shards failed; shardFailures {[-4NzM7wxQ6S8IEK-aOST1Q][test][0]: SearchParseException[[test][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [_na]]]; nested: ElasticsearchParseException[Failed to derive xcontent from org.elasticsearch.common.bytes.ChannelBufferBytesReference@83ae6e1b]; } [the same shard failure is repeated for [test][1] through [test][4]]
org.elasticsearch.action.search.SearchPhaseExecutionException: Failed to execute phase [init_scan], all shards failed; shardFailures {[-4NzM7wxQ6S8IEK-aOST1Q][test][0]: SearchParseException[[test][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [_na]]]; nested: ElasticsearchParseException[Failed to derive xcontent from org.elasticsearch.common.bytes.ChannelBufferBytesReference@83ae6e1b]; } [again repeated for shards [test][1] through [test][4]]
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:233)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$1.onFailure(TransportSearchTypeAction.java:179)
    at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:523)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
[2014-10-19 01:52:08,396][INFO ][KnapsackService ] remove: plugin.knapsack.export.state -> [{"mode":"push","started":"2014-10-19T00:52:08.252Z","node_name":"Logan"}]
[2014-10-19 01:52:08,399][INFO ][KnapsackService ] update cluster settings: plugin.knapsack.export.state -> []

Any pointers please.

Secondly, I had difficulty getting the above command to work in "sense".
I would find it easier if I could use "sense" to issue commands.

Thirdly, while this is what I want: is there a more full-featured,
operations-ready, GUI-based tool with the same functionality?

I appreciate your help.

Regards.
On Friday, October 17, 2014 4:10:11 PM UTC+1, Jörg Prante wrote:

You can use the knapsack plugin for export/import data and change
mappings (and much more!)

For a 1:1 online copy, just one curl command is necessary, yes.

https://github.com/jprante/elasticsearch-knapsack

Jörg




OK, I can try that.
But is there an option in _push to use a pre-created index?

I know it's possible with import (createIndex=false).

Would export/import be just as good?


I never thought about something like "pre-creation" because it would just
double the existing create index action...

Jörg



Jörg,

Not sure what you mean. There is a flag, "createIndex=false", which means:
if the index already exists, do not try to create it, i.e. it is pre-created.

Import will handle this. Will _push also?

I have another question which affects me:
I was hoping that "_push" would write to the index without using an
intermediate file, but it seems that behind the scenes it uses the filesystem
like export/import. Can you confirm?

Regards,



There is no longer a "createIndex" parameter; the documentation is outdated -
sorry for the confusion.

The "_push" action does not use files. There is no need to do that; it
would be very strange.

Jörg



So just to explain what I want:

  • I want to be able to "push" an existing index to another index which
    has new mappings

Is this possible?

Preferably it wouldn't go through an intermediate file-system file: that
would be expensive, and there might not be enough disk space available.

Thanks.

The recipe is something like this:

  1. Install knapsack.

  2. Create the new index. Example:

curl -XPUT 'localhost:9200/newindex'

  3. Create the new mappings:

curl -XPUT 'localhost:9200/newindex/newmapping/_mapping' -d '{ ... }'

  4. Copy the data:

curl -XPOST 'localhost:9200/oldindex/oldmapping/_push?map={"oldindex/oldmapping":"newindex/newmapping"}'
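The steps above can be collected into one shell sketch. All of the host, index, type, and file names here are illustrative placeholders; the commands are composed as strings and printed so they can be reviewed before being run against a live cluster with knapsack installed:

```shell
#!/bin/sh
# Sketch of the copy recipe; echoes the curl commands for review.
ES='localhost:9200'

# 1-2. Create the target index.
CREATE_CMD="curl -XPUT '$ES/newindex'"

# 3. Install the new mapping (here assumed to live in a local JSON file).
MAPPING_CMD="curl -XPUT '$ES/newindex/newmapping/_mapping' -d @newmapping.json"

# 4. Copy the data, remapping oldindex/oldmapping to newindex/newmapping.
PUSH_CMD="curl -XPOST '$ES/oldindex/oldmapping/_push?map={\"oldindex/oldmapping\":\"newindex/newmapping\"}'"

echo "$CREATE_CMD"
echo "$MAPPING_CMD"
echo "$PUSH_CMD"
```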

Jörg



Okay, when I try that I get this error; it's always at byte 48.
Thanks in advance.

Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 48
    at org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
    at org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:132)
    at org.elasticsearch.common.io.stream.StreamInput.readVInt(StreamInput.java:141)
    at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:272)
    at org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:61)
    at org.elasticsearch.common.io.stream.StreamInput.readStringArray(StreamInput.java:362)
    at org.elasticsearch.action.admin.cluster.state.ClusterStateRequest.readFrom(ClusterStateRequest.java:132)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:209)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:109)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

On Monday, October 20, 2014 4:35:36 PM UTC+1, Jörg Prante wrote:

The recipe is something like this:

  1. Install knapsack.

  2. Create the new index. Example:

curl -XPUT 'localhost:9200/newindex'

  3. Create the new mappings:

curl -XPUT 'localhost:9200/newindex/newmapping/_mapping' -d '{ ... }'

  4. Copy the data:

curl -XPOST 'localhost:9200/oldindex/oldmapping/_push?map={"oldindex/oldmapping":"newindex/newmapping"}'
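Put together, the recipe above can be sketched as one script. The index and mapping names (oldindex, newindex, etc.) are placeholders, and the commands are printed rather than executed; drop the echo to run them against a live cluster with the knapsack plugin installed:

```shell
# Sketch of the recipe above; names are placeholders.
ES=localhost:9200
SRC=oldindex/oldmapping
DST=newindex/newmapping

# Print each command instead of running it (remove 'echo' to execute).
echo curl -XPUT "$ES/newindex"
echo curl -XPUT "$ES/newindex/newmapping/_mapping" -d "'{ ... }'"
echo curl -XPOST "$ES/$SRC/_push?map={\"$SRC\":\"$DST\"}"
```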

Jörg

On Mon, Oct 20, 2014 at 5:26 PM, <eune...@gmail.com> wrote:

So just to explain what I want:

  • I want to be able to "push" an existing index to another index
    which has new mappings

Is this possible?

Preferably it wouldn't go through an intermediate file-system file: that
would be expensive, and there might not be enough disk space available.

Thanks.
On Monday, October 20, 2014 4:16:55 PM UTC+1, Jörg Prante wrote:

There is no longer a parameter "createIndex"; the documentation is outdated.
Sorry for the confusion.

The "_push" action does not use files. There would be no need to do that;
it would be very strange.

Jörg

On Mon, Oct 20, 2014 at 5:12 PM, eune...@gmail.com wrote:

Jorg,

Not sure what you mean. There is a flag, "createIndex=false", which means:

if the index already exists, do not try to create it, i.e. it is
pre-created.

Import will handle this. Will _push as well?

I have another question which affects me:
I was hoping that "_push" would write to the index without using an
intermediate file. But it seems that behind the scenes it uses the
filesystem like export/import. Can you confirm?

Regards,

On Sunday, October 19, 2014 9:14:57 PM UTC+1, Jörg Prante wrote:

I never thought about something like "pre-creation" because it would
just double the existing create index action...

Jörg

On Sun, Oct 19, 2014 at 6:00 PM, eune...@gmail.com wrote:

OK, I can try that.
But is there an option in _push to use a pre-created index?

I know it's possible with import and createIndex=false.

Would export/import be just as good?


By the way:
ES version 1.3.4
Knapsack version built with 1.3.4

Regards.


I admit there is something overcautious in the knapsack release to prevent
overwriting existing data. I will add a fix that will allow writing into an
empty index.

Jörg

On Mon, Oct 20, 2014 at 6:47 PM, eunever32@gmail.com wrote:


Jorg,

Thanks for the quick turnaround on putting in the fix.

What I found when I tested is that it works for the pair test, testcopy.

But when I try it with myindex, myindexcopy it doesn't work.

I noticed in the logs, when I was trying "myindex", that it was looking for
an index "test", which was a bit odd.

So I copied my "myindex" to an index named literally "test", and only then
did it work.
So the only index that can be copied is "test";
the target index can be anything.

Logs:

[2014-10-22 12:05:07,649][INFO ][KnapsackPushAction ] start of push: {"mode":"push","started":"2014-10-22T11:05:07.648Z","node_name":"Pathway"}
[2014-10-22 12:05:07,649][INFO ][KnapsackService ] update cluster settings: plugin.knapsack.export.state -> [{"mode":"push","started":"2014-10-22T11:05:07.648Z","node_name":"Pathway"}]
[2014-10-22 12:05:07,650][INFO ][KnapsackPushAction ] map={myindex=myindexcopy}
[2014-10-22 12:05:07,650][INFO ][KnapsackPushAction ] getting settings for indices [test, myindex]
[2014-10-22 12:05:07,651][INFO ][KnapsackPushAction ] found indices: [test, myindex]
[2014-10-22 12:05:07,652][INFO ][KnapsackPushAction ] getting mappings for index test and types []
[2014-10-22 12:05:07,652][INFO ][KnapsackPushAction ] found mappings: [test]
[2014-10-22 12:05:07,653][INFO ][KnapsackPushAction ] adding mapping: test
[2014-10-22 12:05:07,653][INFO ][KnapsackPushAction ] creating index: test
[2014-10-22 12:05:07,672][INFO ][KnapsackPushAction ] count=2 status=OK

I guess you can put in a quick fix?

I would also have to ask: is anyone else using this?

And what are most people doing? Are there any plans by Elasticsearch to
create a product for this, or does the snapshot feature suffice for most
people?

To repeat my requirement: I want to change the mapping types for an
existing index, so I create my new index and copy the old index's data
into the new one.

Thanks in advance.

On Monday, October 20, 2014 8:42:48 PM UTC+1, Jörg Prante wrote:

I admit there is something overcautious in the knapsack release to prevent
overwriting existing data. I will add a fix that will allow writing into an
empty index.

https://github.com/jprante/elasticsearch-knapsack/issues/57

Jörg


Yes, I can put up a fix; that looks weird.

Most users have either a constant mapping that can extend dynamically, or
one that does not change existing fields.

If fields have to change for future documents, you can also change the
mapping by using the alias technique:

  • old index with old fields (no change)

  • new index created with changed fields

  • assigning an index alias to both indices

  • search on index alias

No copy required.
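The alias steps above can be sketched with the standard _aliases endpoint. The names "oldindex", "newindex" and "myalias" are placeholders, and the commands are printed rather than executed; drop the echo to run them against a live cluster:

```shell
# Sketch of the alias technique; names are placeholders.
ES=localhost:9200

# Point one alias at both indices, then search via the alias.
echo curl -XPOST "$ES/_aliases" -d '{
  "actions": [
    { "add": { "index": "oldindex", "alias": "myalias" } },
    { "add": { "index": "newindex", "alias": "myalias" } }
  ]
}'
echo curl -XGET "$ES/myalias/_search"
```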

Jörg

On Wed, Oct 22, 2014 at 1:27 PM, eunever32@gmail.com wrote:


I think you have to set up the curl command like this:

curl -XPOST 'localhost:9200/yourindex/_push?map={"yourindex":"yournewindex"}'

to push the index "yourindex" to another one. Note the endpoint.

What does your curl command look like?

Jörg

On Wed, Oct 22, 2014 at 1:27 PM, eunever32@gmail.com wrote:


Hey Jorg,

Correct. Whew!

If I run just

curl -XPOST 'localhost:9200/_push?map={"myindex":"myindexcopy"}'

it works fine.

By the way: is there any way to make this work in "sense"? E.g.

POST /_push?map={"myindex":"myindexcopy"}

POST /_push
{
  "map": {
    "myindex": "myindexcopy"
  }
}

The second one will submit in "sense" but results in an empty map={}.

And is there any plan to put a GUI around it?

Aside: I still see these errors in the ES logs

[2014-10-22 13:46:25,736][INFO ][client.transport ] [Astronomer] failed to get local cluster state for [#transport#-2][HDQWK037][inet[/10.193
org.elasticsearch.transport.RemoteTransportException: [Abigail Brand][inet[/10.193.5.155:9301]][cluster/state]
Caused by: org.elasticsearch.transport.RemoteTransportException: [Abigail Brand][inet[/10.193.5.155:9301]][cluster/state]
Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit exceeded: 48
    ... (stack trace identical to the one above)

On Wednesday, October 22, 2014 1:27:59 PM UTC+1, Jörg Prante wrote:


On Wed, Oct 22, 2014 at 1:27 PM, <eune...@gmail.com> wrote:


I cannot use the HTTP request body, because it is reserved for a search
request, as in the _search endpoint. So you can push a part of an index
(the search hits) to a new index.
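If the _push body is indeed treated like a _search request, a query in the body should select which hits get pushed. This is only a sketch inferred from the statement above, not verified against the knapsack source; the index names and the "user" field are placeholders, and the command is printed rather than executed:

```shell
# Hypothetical partial push: the body is a search query (see above),
# so only matching hits would be copied. Names are placeholders.
ES=localhost:9200
echo curl -XPOST "$ES/myindex/_push?map={\"myindex\":\"myindexcopy\"}" -d '{
  "query": { "term": { "user": "kimchy" } }
}'
```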

The message "failed to get local cluster state for" is on INFO level, so I
think it is not an error.

A GUI is a long-term project in another context, good for the whole
community. I am unsure how to develop a replacement for the sense plugin.
Maybe a Firefox plugin will arrive some time; I don't know.

Jörg

On Wed, Oct 22, 2014 at 3:21 PM, eunever32@gmail.com wrote:

Hey Jorg,

Correct. Whew!

If I run just curl -XPOST 'localhost:9200/_push?map={"myindex":"
myindexcopy"}'

it works fine.

By the way : is there any way to make this work in "sense" eg
POST /_push?map={"myindex":"myindexcopy"}
POST /_push
{
"map": {
""myindex":"myindexcopy"
}
}

The second one will submit in "sense" but results in empty map={}

And is there any plan to put a gui around it?

Aside: I still see these errors in the ES logs

[2014-10-22 13:46:25,736][INFO ][client.transport ] [Astronomer]
failed to get local cluster state for [#transport#-2][HDQWK037][inet[/10.193
org.elasticsearch.transport.RemoteTransportException: [Abigail
Brand][inet[/10.193.5.155:9301]][cluster/state]
Caused by: org.elasticsearch.transport.RemoteTransportException: [Abigail
Brand][inet[/10.193.5.155:9301]][cluster/state]

Caused by: java.lang.IndexOutOfBoundsException: Readable byte limit
exceeded: 48
at
org.elasticsearch.common.netty.buffer.AbstractChannelBuffer.readByte(AbstractChannelBuffer.java:236)
at
org.elasticsearch.transport.netty.ChannelBufferStreamInput.readByte(ChannelBufferStreamInput.java:132)
at
org.elasticsearch.common.io.stream.StreamInput.readVInt(StreamInput.java:141)
at
org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:272)
at
org.elasticsearch.common.io.stream.HandlesStreamInput.readString(HandlesStreamInput.java:61)
at
org.elasticsearch.common.io.stream.StreamInput.readStringArray(StreamInput.java:362)
at
org.elasticsearch.action.admin.cluster.state.ClusterStateRequest.readFrom(ClusterStateRequest.java:132)
at
org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:209)
at
org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:109)
at
org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

On Wednesday, October 22, 2014 1:27:59 PM UTC+1, Jörg Prante wrote:

I think you have to set up such a curl command like this

curl -XPOST 'localhost:9200/yourindex/_push?map={"yourindex":"yournewindex"}'

to push the index "yourindex" to another one. Note the endpoint.
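(A practical wrinkle, my own addition rather than part of Jörg's post: the `map` value is JSON embedded in a URL query string, and some shells and HTTP clients mangle the braces and quotes unless the value is percent-encoded first. A minimal sketch of encoding it before the call; the `_push` endpoint itself comes from the knapsack plugin and needs a running cluster, so that line is shown commented out.)

```shell
# Percent-encode the JSON index map so it is safe inside a URL query string.
MAP='{"yourindex":"yournewindex"}'
ENCODED=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$MAP")
echo "$ENCODED"
# Actual push call (requires a cluster with the knapsack plugin installed):
# curl -XPOST "localhost:9200/yourindex/_push?map=$ENCODED"
```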

What does your curl command look like?

Jörg

On Wed, Oct 22, 2014 at 1:27 PM, eune...@gmail.com wrote:

Jorg,

Thanks for the quick turnaround on putting in the fix.

What I found when I tested is that it works for test, testcopy.

But when I try with myindex, myindexcopy it doesn't work.

I noticed in the logs when I was trying "myindex" that it was looking for an index "test", which was a bit odd.

So I copied my "myindex" to an index literally named "test", and only then did it work.
So the only index that can be copied is "test"; the target index can be anything.

Logs:

[2014-10-22 12:05:07,649][INFO ][KnapsackPushAction ] start of push: {"mode":"push","started":"2014-10-22T11:05:07.648Z","node_name":"Pathway"}
[2014-10-22 12:05:07,649][INFO ][KnapsackService ] update cluster settings: plugin.knapsack.export.state -> [{"mode":"push","started":"2014-10-22T11:05:07.648Z","node_name":"Pathway"}]
[2014-10-22 12:05:07,650][INFO ][KnapsackPushAction ] map={myindex=myindexcopy}
[2014-10-22 12:05:07,650][INFO ][KnapsackPushAction ] getting settings for indices [test, myindex]
[2014-10-22 12:05:07,651][INFO ][KnapsackPushAction ] found indices: [test, myindex]
[2014-10-22 12:05:07,652][INFO ][KnapsackPushAction ] getting mappings for index test and types []
[2014-10-22 12:05:07,652][INFO ][KnapsackPushAction ] found mappings: [test]
[2014-10-22 12:05:07,653][INFO ][KnapsackPushAction ] adding mapping: test
[2014-10-22 12:05:07,653][INFO ][KnapsackPushAction ] creating index: test
[2014-10-22 12:05:07,672][INFO ][KnapsackPushAction ] count=2 status=OK

I guess you can put in a quick fix?

I have to ask: is anyone else using this?

And what are most people doing? Are there any plans by Elasticsearch to create a product for this, or does the snapshot feature suffice for most people?

To repeat my requirement: I want to change the mapping types for an existing index, so I create a new index and copy the old index's data into it.
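(For what it's worth, the scan/scroll utilities mentioned at the top of the thread all do roughly the same thing: page through the old index with repeated scroll calls and feed each page to the new index's _bulk API, which is where a new mapping can take effect. A minimal sketch of the page-to-bulk translation step; `hits_to_bulk_body` is a hypothetical helper of mine, not part of any of those tools.)

```python
import json

def hits_to_bulk_body(hits, target_index):
    """Turn one page of scan/scroll hits into a _bulk NDJSON payload
    that indexes each document into target_index, keeping _type and _id."""
    lines = []
    for hit in hits:
        # Bulk format: one action line, then one source line per document.
        action = {"index": {"_index": target_index,
                            "_type": hit["_type"],
                            "_id": hit["_id"]}}
        lines.append(json.dumps(action, sort_keys=True))
        lines.append(json.dumps(hit["_source"], sort_keys=True))
    return "\n".join(lines) + "\n"
```

Each payload produced this way would be POSTed to `/_bulk`, then the next page fetched with the scroll id, until the scroll returns no more hits.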

Thanks in advance.

On Monday, October 20, 2014 8:42:48 PM UTC+1, Jörg Prante wrote:

I admit there is something overcautious in the knapsack release to
prevent overwriting existing data. I will add a fix that will allow writing
into an empty index.

https://github.com/jprante/elasticsearch-knapsack/issues/57

Jörg

On Mon, Oct 20, 2014 at 6:47 PM, eune...@gmail.com wrote:

By the way
ES version 1.3.4
Knapsack version built against 1.3.4

Regards.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearc...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/e69c6778-cbc5-4e56-bf71-9bac56b66942%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


