Simple question: Is Elasticsearch production-ready?

We're using MongoDB with a dynamic collection schema, and Elasticsearch
looks like what we need now. Is anyone using it in a production
environment with high traffic?
Is it stable?

The short answer is "sure!".

Take a look at some prominent users listed on the Elasticsearch
website, or browse the mailing list - there are plenty of use cases
mentioned there.

On 18.10.2011 14:22, drul wrote:

We're using MongoDB with a dynamic collection schema, and Elasticsearch
looks like what we need now. Is anyone using it in a production
environment with high traffic?
Is it stable?

Yes. There are lots of people using it with vast volumes of data, many
using it for regular searches, and it's very stable in all the usual
senses of the word.

As suggested, have a look at the Elasticsearch website, where you'll
find some case studies.

On Oct 18, 1:22 pm, drul <tomek.kloc....@gmail.com> wrote:

We're using MongoDB with a dynamic collection schema, and Elasticsearch
looks like what we need now. Is anyone using it in a production
environment with high traffic?
Is it stable?

We have been running ES for just about a year in production. It has been
far, far more stable (not to mention 20x faster) than the enterprise search
engine we replaced.

David

How do you manage upgrades? One hesitation I have with Elasticsearch
currently is that a full outage (or a separate cluster) seems necessary
in order to perform an upgrade, since it's not currently possible to do
rolling upgrades by taking a node out of service, upgrading it, and
having it re-connect to a cluster that still contains nodes running an
older release.

Corey
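
As an aside: GET / on an Elasticsearch node reports its version number,
which makes it easy to confirm which release each node is actually
running during any upgrade scheme. A minimal sketch, with placeholder
node addresses:

```python
import json
import urllib.request

NODES = ["http://node1:9200", "http://node2:9200"]  # placeholder addresses

# The root endpoint of an Elasticsearch node returns a document that
# includes {"version": {"number": "..."}}.
for node in NODES:
    with urllib.request.urlopen(node + "/", timeout=5) as resp:
        info = json.load(resp)
    print(node, info["version"]["number"])
```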

Multiple data centers :)

You are correct, though: rolling upgrades between major versions will
not always work, and I wouldn't recommend trying.

For our dev/QA environments we don't have multiple DCs; however, with a
simple install script I can do an upgrade and be back to yellow cluster
health within 2 minutes.
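
That "back to yellow" wait is easy to script against the standard
cluster health API. A minimal sketch in Python (the node address is a
placeholder):

```python
import json
import time
import urllib.request

NODE = "http://localhost:9200"  # placeholder node address

def wait_for_yellow(timeout_secs=120):
    """Block until the cluster reports at least yellow health.

    /_cluster/health?wait_for_status=yellow waits server-side; the
    retry loop covers the window after a restart when the node is
    not yet accepting connections.
    """
    url = NODE + "/_cluster/health?wait_for_status=yellow&timeout=30s"
    deadline = time.time() + timeout_secs
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=35) as resp:
                health = json.load(resp)
            if health.get("status") in ("yellow", "green"):
                return health
        except OSError:
            pass  # node still starting up, or the health wait timed out
        time.sleep(1)
    raise RuntimeError("cluster did not reach yellow in time")

print(wait_for_yellow())
```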

Best Regards,
Paul

On Oct 25, 11:13 am, Yeroc <cpuff...@gmail.com> wrote:

How do you manage upgrades? One hesitation I have with Elasticsearch
currently is that a full outage (or a separate cluster) seems necessary
in order to perform an upgrade, since it's not currently possible to do
rolling upgrades by taking a node out of service, upgrading it, and
having it re-connect to a cluster that still contains nodes running an
older release.

Corey

On Tue, 2011-10-25 at 10:13 -0700, Yeroc wrote:

How do you manage upgrades? One hesitation I have with Elasticsearch
currently is that a full outage (or a separate cluster) seems necessary
in order to perform an upgrade, since it's not currently possible to do
rolling upgrades by taking a node out of service, upgrading it, and
having it re-connect to a cluster that still contains nodes running an
older release.

This is not perfect, but gets me pretty close to a clean upgrade on a
live system:

  • We are running two nodes

  • Our indices are set up with 1 replica, so all data is on both
    nodes

  • We are using the Perl API, which has failover built in: i.e. it
    accepts a default list of nodes to try to connect to, and if any
    node fails it uses the live and default node lists to build a new
    list of live nodes (a minimal sketch of this pattern follows the
    list).

  • First: back up your ./data/ dir

  • Shut down node 2, and upgrade ES (clients fall back to just
    using node 1)

  • Change the cluster name for node 2, and move the
    ./data/CLUSTERNAME directory to match the new cluster name (so the
    upgraded node can't rejoin the old cluster)

  • Block port 9200 on the firewall (to stop node 2 from responding to
    client requests)

  • Start node 2 -> it recovers

  • Run a script which updates node 2 with any recent changes on
    node 1 (a sketch of such a script also follows the list)

  • Unblock port 9200 on node 2, and simultaneously block port 9200
    on node 1 -> all clients switch from node 1 to node 2

  • Rerun the recent-changes script

  • Shut down node 1, upgrade, change the cluster name, move
    the ./data dir, restart, wait for recovery, and unblock port 9200

  • Run a final script to check and correct any data inconsistencies
    which might have crept in
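
The failover behaviour described above is easy to approximate in any
client. A minimal Python sketch of the pattern (node URLs are
placeholders; this stands in for the Perl module's logic rather than
reproducing it):

```python
import urllib.request

class FailoverClient:
    """Try each node in turn, dropping nodes that fail."""

    def __init__(self, default_nodes):
        self.default_nodes = list(default_nodes)
        self.live_nodes = list(default_nodes)

    def get(self, path, timeout=5):
        for node in list(self.live_nodes):
            try:
                with urllib.request.urlopen(node + path, timeout=timeout) as resp:
                    return resp.read()
            except OSError:
                self.live_nodes.remove(node)  # mark the node dead for next time
        # Every known-live node failed: reset to the defaults so a
        # recovered node gets retried on the next request.
        self.live_nodes = list(self.default_nodes)
        raise RuntimeError("no live nodes reachable")

client = FailoverClient(["http://node1:9200", "http://node2:9200"])
# client.get("/_cluster/health") now transparently skips a downed node.
```

The "recent changes" script is necessarily data-model specific. A rough
sketch, assuming documents carry a last_modified timestamp (the index,
type, and field names here are hypothetical, and a real version would
page through results with scan/scroll rather than one bounded query):

```python
import json
import urllib.request

def sync_recent(src, dst, index, doc_type, since):
    """Copy docs changed since `since` from the src node to the dst node."""
    query = json.dumps({
        "size": 1000,  # bounded for the sketch; use scan/scroll in practice
        "query": {"range": {"last_modified": {"gte": since}}},
    }).encode()
    req = urllib.request.Request(src + "/" + index + "/_search", data=query,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        hits = json.load(resp)["hits"]["hits"]
    for hit in hits:
        doc = json.dumps(hit["_source"]).encode()
        put = urllib.request.Request(
            "%s/%s/%s/%s" % (dst, index, doc_type, hit["_id"]),
            data=doc, headers={"Content-Type": "application/json"},
            method="PUT")
        urllib.request.urlopen(put).close()

# e.g. sync_recent("http://node1:9200", "http://node2:9200",
#                  "tweets", "tweet", "2011-10-26T00:00:00")
```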

clint

hey,
we pushed Elasticsearch into production more than a year ago;
it has been stable and powerful.

we pushed Elasticsearch into production more than a year ago,

Out of curiosity, how do you handle major version upgrades?

Looking at ES's downloads page on GitHub [1], there have been quite a
few major version upgrades in the last few years.

[1] https://github.com/elasticsearch/elasticsearch/downloads

On Tue, Oct 25, 2011 at 11:13 PM, medcl2000@gmail.com wrote:

hey,
we pushed Elasticsearch into production more than a year ago;
it has been stable and powerful.

--
Frank Hsueh | frank.hsueh@gmail.com