Can I use Elasticsearch to avoid a single point of failure?

My goal is to have a search engine service that is reliable and highly
available. If one of the servers goes down, users should still be able to
search, new documents should still be indexed, updates to a document should
be reflected in the index, deletions should be removed from the index, and
so on. I want to avoid a single point of failure.

I would like to know whether the goal above is achievable using
Elasticsearch.

Currently I have configured two instances of the Elasticsearch service
running on two different machines, with the changes below from the default
elasticsearch.yml file.

elasticsearch.yml on the "server1" server:

node.name: "server1"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["149.59.34.195:9300", "149.59.96.99:9300"]

elasticsearch.yml on the "server2" server:

node.name: "server2"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["149.59.34.195:9300", "149.59.96.99:9300"]

  • Is the above configuration sufficient to achieve my goal?
  • Would the index files get synced automatically between the two machines?
  • If one of the servers/services, say server2, goes down, would the search
    engine still function using server1?
    • What happens when server2 comes back online? Would the index from
      server1 get synced to server2?
  • Let's say server1 is up and server2 is down for about 30 minutes, and
    during this time some index files were modified on server1. What happens
    if server1 was then shut down while server2 was still down, and we bring
    server2 back first and server1 later? Would the updated index from
    server1 get pushed to server2?
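For context, whether each machine ends up holding a full copy of the data
depends on the replica setting rather than on discovery alone. A sketch of
the additional lines involved (the values below are assumptions based on
that era's defaults, not settings taken from this thread):

cluster.name: mycluster
index.number_of_replicas: 1

With one replica per shard and two nodes, Elasticsearch places the replica
of each primary shard on the other node, so either machine alone can serve
the full index. Both nodes must also share the same cluster.name to form a
single cluster.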

I hope my questions above make sense. I am a bit confused by the terms
node, cluster, and gateway. My guess is that the two services running on
server1 and server2 are two different nodes.

I am planning to connect to these services using the Java code below.

    Client client = new TransportClient()
        .addTransportAddress(new InetSocketTransportAddress("149.59.34.195", 9300))
        .addTransportAddress(new InetSocketTransportAddress("149.59.96.99", 9300));

Thanks
Renjith

--

Hi,

In short: YES!

ES is designed to address all of your concerns.

I suggest that you look at some of the videos on the ES site or at my slides on SlideShare: Elasticsearch - Devoxx France 2012 - English version | PPT
The last slides explain how docs are indexed, how searches work, and the failover feature...

HTH

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 7 Nov 2012, at 23:01, Mxims renjith.ck@gmail.com wrote:


--

David, thank you so much for the pointer. The slides answered most of my
questions.

Would you please take a look at the configuration file I am currently
using, to make sure I am going in the right direction?

Thank you

On Wednesday, November 7, 2012 6:29:54 PM UTC-8, David Pilato wrote:


--

I think that everything is fine with your settings.

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 8 Nov 2012, at 19:49, Mxims renjith.ck@gmail.com wrote:


--

Hi Renjith!

I'm in the same scenario over here, with the little difference that I'm setting network.publish_host to the machine's IP.

But I was running some tests, and one of them was to unplug the LAN cable from one machine and plug it back in, to see whether the cluster would rediscover the nodes. That didn't happen: when I do this, each node turns into a master node and they don't see each other anymore...

Does somebody know what I should set to fix this?

Thanks!
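The symptom described above (each node electing itself master after a
network cut) is the classic split-brain situation. In 0.x/1.x-era
Elasticsearch the usual mitigation was a quorum setting; a sketch, assuming
three master-eligible nodes (the value is illustrative, not taken from this
thread):

discovery.zen.minimum_master_nodes: 2

With minimum_master_nodes set to a majority of the master-eligible nodes
(nodes / 2 + 1), an isolated node cannot elect itself master. Note that
with only two nodes there is no value that both prevents split-brain and
lets either node keep running alone, which is why an odd number of
master-eligible nodes was generally recommended.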