The question is: what client are you using out there?
Here at company X we have Java applications that use Elasticsearch. We have
many different Java applications, and they all use the transport client.
That decision was made for the developers' convenience, given the ease of
use the transport client provides. BUT, when it comes to upgrading
Elasticsearch this is a pain in the ass, because every time we upgrade the
cluster the transport client has to be upgraded too, and that is a hard
maneuver to orchestrate with zero downtime (we have to redeploy many
applications with the newest transport client).
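For context, the client setup in each of our applications looks roughly like
this (a minimal sketch, assuming the 1.x-era Java API; the cluster name and
host are placeholders, not our real ones):

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class EsClientFactory {

    // Builds a TransportClient pointing at one node of the cluster.
    // The client library has to track the cluster version closely, which is
    // why every cluster upgrade forces us to redeploy the applications.
    public static TransportClient create() {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "my-cluster")            // placeholder
                .build();
        return new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress("es-node-1", 9300));
    }
}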
Is anyone out there familiar with this situation? What are you using?
I also use the TransportClient. And then I wrap our business rules behind
another server that offers an HTTP REST API but talks to Elasticsearch on
the back end via the TransportClient. This server uses Netty and the LMAX
Disruptor to provide low-resource high-throughput processing; it is
somewhat like Node.js but in Java instead of JavaScript.
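Not the production code, obviously, but the hand-off between the HTTP layer
and Elasticsearch looks conceptually like this (a rough sketch assuming a
Disruptor 3.x API with Java 8; the event and class names are made up for
illustration):

import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import org.elasticsearch.client.Client;
import java.util.concurrent.Executors;

// Mutable event that the Netty handler fills in and the consumer drains.
class QueryEvent {
    String index;
    String queryJson;
}

public class QueryPipeline {
    private final RingBuffer<QueryEvent> ringBuffer;

    public QueryPipeline(final Client esClient) {
        Disruptor<QueryEvent> disruptor = new Disruptor<>(
                QueryEvent::new,                      // event factory
                1024,                                 // ring size (power of two)
                Executors.defaultThreadFactory());

        // A single consumer thread applies the business rules and calls the
        // TransportClient, so the Netty I/O threads never block on ES.
        disruptor.handleEventsWith((EventHandler<QueryEvent>) (event, sequence, endOfBatch) ->
                esClient.prepareSearch(event.index)
                        .setSource(event.queryJson)
                        .execute());                  // async; a listener handles the reply

        ringBuffer = disruptor.start();
    }

    // Called from the Netty HTTP handler for each incoming request.
    public void publish(String index, String queryJson) {
        long seq = ringBuffer.next();
        try {
            QueryEvent event = ringBuffer.get(seq);
            event.index = index;
            event.queryJson = queryJson;
        } finally {
            ringBuffer.publish(seq);
        }
    }
}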
Then I have a bevy of command-line maintenance and test tools that also use
the TransportClient. I wrap them inside a shell script (for example,
Foobar.main is wrapped inside foobar.sh) and convert command-line options
(such as -t person) into Java properties (such as TypeName=person), and
also set the classpath to all of the Elasticsearch jars plus all of mine.
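The Java side of those wrappers is nothing special; each tool just reads the
properties that the script sets. A hypothetical sketch (Foobar and TypeName
are just the example names from above; cluster, host, and index names are
placeholders):

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

// foobar.sh sets the classpath, translates "-t person" into -DTypeName=person,
// and then runs this main.
public class Foobar {
    public static void main(String[] args) {
        String typeName = System.getProperty("TypeName", "person");

        Client client = new TransportClient(
                ImmutableSettings.settingsBuilder().put("cluster.name", "my-cluster").build())
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        try {
            // Count documents of the requested type.
            SearchResponse response = client.prepareSearch("my-index")
                    .setTypes(typeName)
                    .setSize(0)
                    .execute()
                    .actionGet();
            System.out.println(typeName + ": " + response.getHits().getTotalHits() + " docs");
        } finally {
            client.close();
        }
    }
}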
Whenever there is a compelling change to Elasticsearch, I upgrade, and many
times I have watched my Java builds fail with all of the breaking changes.
But even with the worst of the breaking changes, my code was down for maybe
a day or two at the most; the API is rather clean, and this newsgroup is a
life saver, so I never got stuck. And when I was done, I had learned even
more about the ES Java API.
So it's either a huge pain or it's the joy of learning, depending on your
point of view. I have always viewed it as the joy of learning.
I just wish the Facets-to-Aggregations migration had been smoother. But I
sense that there is another breaking change on my horizon. This will be
particularly sad for me, as I had implemented a rather nice hierarchical
term frequency combining MVEL and facets, both of which are now deprecated
and on the list to be removed. But again, I'll learn a lot when making the
migration.
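For what it's worth, the hierarchical counting itself maps fairly naturally
onto nested terms aggregations in the 1.x Java API. Something along these
lines (field and index names are placeholders, not my actual mapping;
whatever the MVEL script was doing on top is a separate question):

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;

// Two-level term frequencies: document counts per "category", broken down
// per "subcategory" -- roughly the shape the facet-based code produced.
public class HierarchicalTerms {
    public static void print(Client client) {
        SearchResponse response = client.prepareSearch("my-index")
                .setSize(0)
                .addAggregation(AggregationBuilders.terms("by_category").field("category")
                        .subAggregation(
                                AggregationBuilders.terms("by_subcategory").field("subcategory")))
                .execute().actionGet();

        Terms categories = response.getAggregations().get("by_category");
        for (Terms.Bucket category : categories.getBuckets()) {
            Terms subcategories = category.getAggregations().get("by_subcategory");
            for (Terms.Bucket sub : subcategories.getBuckets()) {
                System.out.println(category.getKey() + " / " + sub.getKey()
                        + ": " + sub.getDocCount());
            }
        }
    }
}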
I believe it was Thomas Edison who said that most people miss opportunities
because the opportunities come dressed in overalls and look like work. But
I digress....
Since version 1.0, there should be fewer binary protocol issues between any
nodes, including the clients, making rolling upgrades doable. Older clients
should be able to interact with newer server nodes, but the inverse is not
always the case.
I hope your two days of downtime were in a testing environment :). This
sounds really great to me, and something similar had been spinning around
in my head as well. Did you build it from scratch, or did you start from an
existing open-source project? Is there any place I could look for more
ideas on how to implement it?