We are wondering about the best design for a client to address an ES
cluster.
1. The client app knows only the URL of a load balancer such as HAProxy,
which round-robins requests across the nodes (the load balancer is a SPOF,
so you have to handle that).
2. The client app runs a local data-less Elasticsearch node alongside it
that joins the active cluster, and the app queries this local ES node.
(Maybe 2 or 3 main nodes have to be known in order to join the whole
cluster.)
3. The client knows a list of ES nodes to address via the client
implementation; we would have to maintain this list in each app when we
upgrade the cluster. Maybe the client can learn the full list of nodes
available in the cluster from the first node it accesses.
1. It is possible, but to me it seems like extra work that is not
necessary (see #2 and #3, as both options are provided by ES out of the box).
2. This is the Node client.
3. This is the Transport client.
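For the HTTP case, option 3 boils down to round-robining over a fixed list of node URLs and skipping dead nodes. A minimal sketch in Python, under stated assumptions: `send` is a hypothetical injectable transport callable (so an HTTP library stays out of the sketch), and the URLs are placeholders.

```python
import itertools

class RoundRobinClient:
    """Option 3 sketch: the client holds a static list of node URLs and
    round-robins requests across them, skipping nodes that fail.

    `send` is a hypothetical callable (url, payload) -> response supplied
    by the caller; it is not part of any ES API.
    """

    def __init__(self, urls, send):
        self.urls = list(urls)
        self.send = send
        self._cycle = itertools.cycle(self.urls)

    def request(self, payload):
        # Try each node at most once per request; a node that errors out
        # is skipped and the next one in the rotation is tried instead.
        last_error = None
        for _ in range(len(self.urls)):
            url = next(self._cycle)
            try:
                return self.send(url, payload)
            except ConnectionError as exc:
                last_error = exc
        raise RuntimeError("all nodes failed") from last_error
```

The injectable `send` also makes the failover behavior easy to exercise without a live cluster.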
--
Regards,
Lukas
On Tuesday, December 13, 2011 at 4:06 PM, Damien Hardy wrote:
Lukas answered for the Java client case, but I assume you are talking about
the HTTP-based case? I like 3, where the client has a list of URLs (there
is a nice discussion about it in Tire, the Ruby client), though 2 is a valid
option as well: start an elasticsearch node on your local box that holds no
data and is not master-eligible, and only talk to it. It will join the
cluster and know about the rest of the nodes.
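Concretely, the option-2 node described above (holds no data, never eligible as master) maps to a few elasticsearch.yml settings in the 0.x/1.x series; the cluster name and host names below are placeholders, not values from this thread:

```yaml
# Local "client" node: joins the cluster but holds no shards
# and can never be elected master.
node.data: false
node.master: false
cluster.name: my-cluster                       # must match the real cluster's name
# The 2-3 known seed nodes mentioned in option 2:
discovery.zen.ping.unicast.hosts: ["es-node-1", "es-node-2"]
```

The app then talks only to this local node, which routes requests to the rest of the cluster on its own.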
And yes, I was mainly thinking about non-Java client apps.
With the third option we also have to deal with defunct/missing ES nodes,
which is handled for us by the private data-less ES node running alongside
the app in solution 2.
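The "learn the node list from the first accessed node" refinement of option 3 can be sketched by parsing the body of a GET /_nodes call. The exact response shape varies across ES versions, so the "nodes" and "http_address" keys below are an assumption for illustration, not a spec:

```python
def sniff_http_addresses(nodes_info):
    """Extract the HTTP addresses of all cluster nodes from a parsed
    GET /_nodes response body.

    Assumed (hypothetical) shape: {"nodes": {<id>: {"http_address": ...}}};
    check the response of your ES version before relying on these keys.
    """
    addresses = []
    for node in nodes_info.get("nodes", {}).values():
        addr = node.get("http_address")
        if addr:
            addresses.append(addr)
    return addresses
```

A client could call this periodically, or after a node failure, to refresh the URL list it round-robins over.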