Yes, _source cannot be disabled, and it also only works within the same cluster.
But since one could use the code in a pure Java application (as I was doing
before) or in a river (as you propose in the issue), one can then
reindex into a different cluster too.
Regards,
Peter.
On Tuesday, November 27, 2012 3:58:46 PM UTC+1, David Pilato wrote:
My only concern with the river is that nodes could be incompatible from one
cluster to another.
That's one of the reasons I did not dig into it before.
But now there are some pure REST interfaces, and I can probably use JEST [1], for
example, to fetch content from another cluster (I did not check whether the scan & scroll
API is available in JEST).
Also, it perhaps makes no sense to consider it a river rather than an
administrative tool (as you said: a pure Java application).
This is a plugin which wraps some 'reindex' functionality and executes
it on the server side. This could be useful
* if you want to change some index settings which are not updatable
(like shard count etc. => reindexing into a new index),
* or if you want to change some type settings (reindexing into the
same index),
* or if you want to copy/update only specific data into another index
=> therefore you can specify a query (the default is match_all).
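The query mentioned above would presumably be passed as a standard query DSL body. For example, restricting the copy to documents by one user might look like this (the field and value here are illustrative, not from the plugin's documentation):

```json
{
  "query": {
    "term": { "user": "karussell" }
  }
}
```

With no body supplied, the plugin falls back to match_all and copies everything.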
So I assume that I have to use my own pure REST implementation (with the SPORE
specification [1]) - but scan & scroll is not written yet.
So I have to wait for... what? For myself? WTF
Also, the GET request(s) for scroll should be very simple to 'hack'
together via a simple JSONObject + Apache client ...
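A minimal sketch of that 'hack', using only the JDK (a regex stands in for JSONObject here, so no extra dependency is needed; the URL shapes follow the scan & scroll REST API of that era, and the host/index names are placeholders):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ScrollHelper {

    // URL that starts a scan, keeping the scroll context alive for 5 minutes
    static String scanUrl(String host, String index) {
        return host + "/" + index + "/_search?search_type=scan&scroll=5m&size=100";
    }

    // URL for fetching the next page of hits for a given scroll id
    static String scrollUrl(String host, String scrollId) {
        return host + "/_search/scroll?scroll=5m&scroll_id=" + scrollId;
    }

    // Pull _scroll_id out of the response body without a JSON library
    static String extractScrollId(String json) {
        Matcher m = Pattern.compile("\"_scroll_id\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }
}
```

Each GET on the scroll URL returns the next batch of hits plus a fresh _scroll_id; you loop until the hits array comes back empty.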
But do you know if it is easy to add those dependencies when writing a
plugin? Or is it some Maven magic where I use the full
"dependencies-jar"?
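For the dependency question: a plugin built with Maven can simply declare the extra libraries and let the assembly that produces the plugin zip bundle them. A hypothetical fragment (group/artifact ids exist on Maven Central, but the versions are only era-appropriate guesses, not verified against the plugin):

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.2.2</version>
  </dependency>
  <dependency>
    <groupId>org.json</groupId>
    <artifactId>json</artifactId>
    <version>20090211</version>
  </dependency>
</dependencies>
```

The Elasticsearch jar itself would stay in `provided` scope, since the node's classpath already ships it.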
Regards,
Peter.
On Tuesday, November 27, 2012 4:54:40 PM UTC+1, David Pilato wrote:
Oh. Thanks, I was not aware of it.
please :) !
> My only concern with the river is that nodes could be incompatible
> from one cluster to another.
Hmmh, indeed a valid concern. But how would you add Jest to the
instance which hosts the plugin?
Jest uses Elasticsearch under the hood (why?)! See this discussion:
I've implemented the external cluster thing (for simplicity just with
JSONObject and HttpClient; I'm not sure whether that is okay regarding performance/IO).
So if you specify searchHost, this more expensive variation will be
used.
The cool thing is that I can now grab data from production servers onto my
local box (when making the port public for this short time). I also
introduced a waitInSeconds parameter to avoid high load. Warning: the call
is not yet async, stoppable, etc. (unless you shut down the server) ...
Probably I should move to the river stuff ... or I'll leave this task for
the reader.
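The copy loop with its waitInSeconds throttle could be sketched like this (an in-memory stand-in: the parameter name mirrors the one above, but the page supplier/sink shape is my own, not the plugin's code):

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class ThrottledCopy {

    // Copy pages of documents from a source to a sink, sleeping
    // waitInSeconds between pages to keep load on the source cluster low.
    static int copy(Supplier<List<String>> nextPage, Consumer<List<String>> sink, int waitInSeconds) {
        int total = 0;
        List<String> page;
        while ((page = nextPage.get()) != null && !page.isEmpty()) {
            sink.accept(page);
            total += page.size();
            if (waitInSeconds > 0) {
                try {
                    Thread.sleep(waitInSeconds * 1000L); // crude load limiter between pages
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // stop copying if interrupted
                    break;
                }
            }
        }
        return total;
    }
}
```

Making this stoppable and async (as the warning above notes it is not) would mean running the loop on its own thread and checking a cancellation flag between pages.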
Regards,
Peter.
On Tuesday, November 27, 2012 6:13:15 PM UTC+1, David Pilato wrote:
On 27 November 2012 at 17:45, Karussell <tabley...@gmail.com> wrote:
Is there a Java implementation for SPORE?