My goal is to have a search-engine service that is reliable and highly
available. If one of the servers goes down, users should still be able to
search; new documents should still be indexed; if a document is updated,
the update should be indexed; deletions should be reflected in the index;
and so on. I want to avoid a single point of failure.
I would like to know whether this goal is achievable using Elasticsearch.
Currently I have two instances of the Elasticsearch service running on two
different machines, with the configuration changes shown below applied to the
default elasticsearch.yml file.
Is the above configuration sufficient to achieve my goal?
Would the index files get synced automatically between the two machines?
If one of the servers/services, say server2, goes down, would the search
engine still function using server1?
What happens when server2 comes back online? Would the index files from
server1 get synced to server2?
Let's say server1 is up and server2 is down for about 30 minutes, during
which some index files are modified on server1. What happens if server1 is
then shut down while server2 is still down, and we bring server2 back first
and server1 later? Would the updated index from server1 get pushed to
server2?
I hope my questions above make sense. I am a bit confused by the terms node,
cluster, and gateway. My guess is that the two services running on server1
and server2 are two different nodes.
I am planning to connect to these services using the Java code below.
Client client = new TransportClient()
        .addTransportAddress(new InetSocketTransportAddress("149.59.34.195", 9300))
        .addTransportAddress(new InetSocketTransportAddress("149.59.96.99", 9300));
The configuration changes in elasticsearch.yml (shown here for server2):
node.name: "server2"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["149.59.34.195:9300", "149.59.96.99:9300"]
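For comparison, a fuller two-node elasticsearch.yml of that era might look like the sketch below. Only the three settings quoted above come from the original post; the cluster name, replica count, and quorum setting are assumptions added for illustration.

```yaml
# Both nodes must share the same cluster name to form one cluster (assumed name).
cluster.name: my-search-cluster
node.name: "server2"                 # "server1" on the other machine

# One replica per shard, so each machine holds a full copy of every index.
index.number_of_replicas: 1

# Disable multicast and list both nodes explicitly, as in the original post.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["149.59.34.195:9300", "149.59.96.99:9300"]

# Quorum of master-eligible nodes. With two nodes, a quorum of 2 prevents
# split-brain after a partition, but also means the cluster cannot elect a
# master while either node is down -- the usual reason to prefer 3 nodes.
discovery.zen.minimum_master_nodes: 2
```

With `index.number_of_replicas: 1`, each shard has a primary on one node and a replica on the other, so either machine alone holds a complete copy of the data.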
--
David
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
On 7 Nov 2012, at 23:01, Mxims <renji...@gmail.com> wrote:
[...]
I'm in the same scenario over here, with the small difference that I'm setting network.publish_host to the machine's IP.
I was running some tests, and one of them was to unplug the LAN cable from one machine and then plug it back in, to see whether the cluster would rediscover the nodes. It didn't: each node turned itself into a master node and they didn't see each other anymore...
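The behavior described above, where each side of a network partition elects its own master, is the classic split-brain problem. In the Zen-discovery era it was mitigated by setting discovery.zen.minimum_master_nodes to a quorum of the master-eligible nodes. A minimal sketch of that arithmetic (the class and method names here are mine, not Elasticsearch's):

```java
// Quorum arithmetic behind discovery.zen.minimum_master_nodes:
// a master may only be elected when a strict majority of the
// master-eligible nodes is reachable.
public class QuorumCalc {

    // Smallest number of nodes that forms a majority of masterEligibleNodes.
    static int minimumMasterNodes(int masterEligibleNodes) {
        return masterEligibleNodes / 2 + 1;
    }

    public static void main(String[] args) {
        // With only 2 master-eligible nodes the quorum is 2: after a
        // partition neither side can elect a master alone, which prevents
        // split-brain but sacrifices availability while one node is down.
        System.out.println(minimumMasterNodes(2)); // 2
        // With 3 nodes the quorum is 2, so losing any single node still
        // leaves a majority able to elect a master.
        System.out.println(minimumMasterNodes(3)); // 2
    }
}
```

This is why three master-eligible nodes is the usual recommendation for a highly available cluster: a two-node cluster forces a choice between split-brain risk (quorum of 1) and losing writes whenever either node is down (quorum of 2).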