Shard count and plugin questions


(Todd Nine) #1

Hi All,
We've been using Elasticsearch as the search index for our new
persistence implementation:

https://usergrid.incubator.apache.org/

I have a few questions I could use a hand with.

  1. Is there any good documentation on the upper limit on document count,
    or total index size, before you need to allocate more shards? Do shards
    have a real-world limit on size or number of entries to keep response
    times low? Every system has its limits, and I'm trying to find some
    actual data on the size limits. I've been trawling Google for answers,
    but I haven't really found any good test results.

  2. Currently, it's not possible to increase the shard count for an index.
    The workaround is to create a new index with a higher count, and move
    documents from the old index into the new. Could this be accomplished via
    a plugin?

  3. We sometimes have "realtime" requirements, in that when an index call
    returns, the document is immediately available for search. Flushing
    explicitly is not a good idea from a performance perspective. Has anyone
    explored searching, in memory, the documents that have not yet been
    flushed and merging them with the Lucene results? Is this something
    that's feasible to implement via a plugin?

Thanks in advance!
Todd



(Mark Walkom) #2
  1. The answer is - it depends. You want to set up a test system with
    indicative specs, and then throw some sample data at it until things
    start to break. However, this may help:
    https://www.found.no/foundation/sizing-elasticsearch/
  2. https://github.com/jprante/elasticsearch-knapsack might do what you want.
  3. How real time is real time? You can change index.refresh_interval to
    something small so that the window of "unflushed" items is minimal, but
    that will have other impacts.
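
For example, a minimal sketch of changing that setting with the official
Python client (the cluster address, index name, and interval are
illustrative assumptions):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # A smaller refresh interval makes new documents searchable sooner,
    # at the cost of indexing throughput.
    es.indices.put_settings(
        index="app1-index1",
        body={"index": {"refresh_interval": "500ms"}})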

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com



(Todd Nine) #3

Thanks for the answers, Mark. See inline.

On Wed, Jun 4, 2014 at 3:51 PM, Mark Walkom markw@campaignmonitor.com wrote:

  1. The answer is - it depends. You want to set up a test system with
    indicative specs, and then throw some sample data at it until things
    start to break. However, this may help:
    https://www.found.no/foundation/sizing-elasticsearch/

This is what I was expecting. Thanks for the pointer to the documentation.
We're going to have some pretty beefy clusters (SSDs in RAID 0, 8 to 16
cores, and a lot of RAM) to power ES. We're going to have a LOT of indexes,
as we would be operating this as a core infrastructure service. Is there an
upper limit on the number of indexes a cluster can hold?

  2. https://github.com/jprante/elasticsearch-knapsack might do what you
    want.

This won't quite work for us. We can't have any downtime, so it seems like
an A/B system is more appropriate. What we're currently thinking is the
following.

Each index has 2 aliases: a read alias and a write alias.

  1. Both read and write aliases point to an initial index; say, shard count
    5, replication 2 (ES is not our canonical data source, so we're OK with
    reconstructing search data).

  2. We detect via monitoring that we're going to outgrow an index. We
    create a new index with more shards, and potentially a higher
    replication factor depending on read load. We then update the write
    alias to point to both the old and new index. All clients will then
    begin dual writes to both indexes.

  3. While we're writing to old and new, some process (maybe a river?) will
    begin copying documents updated before the alias switch from the old
    index to the new index. Ideally, it would be nice if each replica could
    copy only its local documents into the new index. We'll want to throttle
    this as well. Each node will need additional operational capacity
    to accommodate the dual writes as well as accepting the writes of the
    "old" documents. I'm concerned that if we push this through too fast, we
    could cause interruptions of service.

  4. Once the copy is completed, the read alias is moved to the new index,
    then the old index is removed from the system.

Could such a process be implemented as a plugin? If the work can happen in
parallel across all nodes containing a shard, we can increase the process's
speed dramatically. If we have a single worker, like a river, it might take
too long.
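
A rough sketch of what steps 1-4 might look like with the official Python
client. All names, shard counts, the "updated" timestamp field, and the
cutoff value are illustrative assumptions, not a tested procedure:

    from elasticsearch import Elasticsearch
    from elasticsearch.helpers import scan, bulk

    es = Elasticsearch(["localhost:9200"])

    # Step 2: create the larger index. Clients dual-write to both indexes
    # during the migration window (an alias pointing at two indexes cannot
    # accept index requests, so the dual write happens client-side).
    es.indices.create(index="app1-v2", body={
        "settings": {"number_of_shards": 10, "number_of_replicas": 2}})

    # Step 3: copy documents written before the cutover; "updated" is an
    # assumed per-document timestamp field.
    old_docs = scan(es, index="app1-v1", query={
        "query": {"range": {"updated": {"lt": "2014-06-05T00:00:00"}}}})
    bulk(es, ({"_index": "app1-v2", "_type": d["_type"],
               "_id": d["_id"], "_source": d["_source"]}
              for d in old_docs))

    # Step 4: atomically repoint the read alias, then drop the old index.
    es.indices.update_aliases(body={"actions": [
        {"remove": {"index": "app1-v1", "alias": "app1-read"}},
        {"add": {"index": "app1-v2", "alias": "app1-read"}}]})
    es.indices.delete(index="app1-v1")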

  3. How real time is real time? You can change index.refresh_interval to
    something small so that the window of "unflushed" items is minimal, but
    that will have other impacts.

Once the index call returns to the caller, the document would be
immediately available for query. We've tried lowering the refresh interval;
this results in a pretty significant drop in throughput. To meet our
throughput requirements, we're considering even turning it up to 5 or 15
seconds. If we can then search the data that's in our commit log (via
storing it in memory until flush), that would be ideal.
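
For reference, the two knobs in play here, sketched with the Python client
(index name and values are illustrative):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Favor throughput: refresh every 15 seconds instead of the 1s default,
    # accepting a larger window of unsearchable documents.
    es.indices.put_settings(
        index="app1-v1", body={"index": {"refresh_interval": "15s"}})

    # The expensive extreme: force a refresh on a single write so it is
    # searchable immediately -- the per-call flush we want to avoid.
    es.index(index="app1-v1", doc_type="doc",
             body={"name": "example"}, refresh=True)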

Thoughts?



(Mark Walkom) #4

I haven't heard of a limit on the number of indexes; obviously, the more
you have, the larger the cluster state that needs to be maintained.

You might want to look into routing (
http://exploringelasticsearch.com/advanced_techniques.html or
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-routing-field.html)
as an alternative, to optimise and minimise index count.
You can also always hedge your bets and create an index with a larger
number of shards, i.e. not a 1:1 shard:node relationship, and then move the
excess shards to new nodes as they are added.
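
As a sketch of what custom routing looks like with the Python client (the
index name, type, and routing key are illustrative):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Pin all of one tenant's documents to the same shard by supplying an
    # explicit routing value on the write path...
    es.index(index="app1-v1", doc_type="doc", id="123",
             body={"name": "example"}, routing="tenant-42")

    # ...and on the read path, so the query hits only that shard instead
    # of fanning out to all of them.
    es.search(index="app1-v1", routing="tenant-42",
              body={"query": {"match": {"name": "example"}}})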

I'd be interested to see how you could measure when you'd outgrow an index
though; technically it can just keep growing until the node can no longer
deal with it. This is something that testing is good for: throw data at a
single-shard index, and when it falls over you have an indicator of how
your hardware will handle things.

As for reading the transaction log and searching it, you might be playing
a losing game, as your code to parse and search would have to be super
quick to make it worth doing.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com



(Todd Nine) #5

Thanks for the feedback, Mark.

I agree with your thoughts on the testing. We plan on doing some testing,
finding our failure point, and dialing that back to some value that still
allows us to run the migration. This way, we can get ahead of the problem.
Since a re-index would actually introduce more load on the system, we want
to perform it before we start to have too much of a performance problem on
any given index.

I wanted to clarify my thoughts on the realtime read.

I definitely do not want to read the commit log; that would be horribly
inefficient, since it's just a log. Rather, I would insert an in-memory
cache into the write path. The cache would be written to as well as the
commit log. When a query is executed on a node, it would query Lucene (the
shard) as well as search the node's local cache for matching documents.
These are then merged into a single result set via whatever ordering
parameter is set. When the documents are flushed to Lucene, they would be
removed from the cache.
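
A toy sketch of that merge, purely to illustrate the idea; none of this
exists in ES, and the cache, sort key, and merge below are all
hypothetical:

    import heapq

    class WriteThroughCache:
        """Hypothetical node-local cache of documents that have not yet
        been refreshed into Lucene; entries are evicted on flush."""

        def __init__(self):
            self.docs = {}

        def put(self, doc_id, doc):
            self.docs[doc_id] = doc

        def evict_flushed(self, flushed_ids):
            for doc_id in flushed_ids:
                self.docs.pop(doc_id, None)

    def merged_results(lucene_hits, cache, sort_key, size=10):
        # Merge the shard's Lucene hits with the unflushed cached docs,
        # ordered by the query's sort parameter (deduplication by _id is
        # omitted for brevity).
        merged = heapq.merge(sorted(lucene_hits, key=sort_key),
                             sorted(cache.docs.values(), key=sort_key),
                             key=sort_key)
        return list(merged)[:size]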

I'm sure I'm not the first ES user to conceive of this idea. However, ES
doesn't have this functionality, and I'm assuming there's a reason for it.
Is it simply that no one has tried this use case, or is it not a good idea
technically? Would it introduce too much memory overhead, etc.?

Thanks,
Todd



(Jörg Prante) #6

The knapsack plugin does not require downtime. You can increase shards on
the fly by copying an index over to another index (even on another
cluster). The index should be write-disabled during the copy, though.

Increasing the replica level is a very simple command; no index copy is
required.
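
For example, with the Python client (index name illustrative):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Replica count is a dynamic setting -- no copy, no downtime.
    es.indices.put_settings(
        index="app1-v1", body={"index": {"number_of_replicas": 3}})

    # Write-disabling an index for the duration of a knapsack-style copy
    # is likewise a one-line dynamic setting.
    es.indices.put_settings(
        index="app1-v1", body={"index.blocks.write": True})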

It seems you have a slight misconception about controlling replica shards:
you cannot start dedicated copy actions from the replicas only. (By setting
_preference, this does work for search queries.)

Maybe I do not understand your question, but what do you mean by "dual
writes"? And why would you "move" an index?

Please check out index aliases. The concept of index aliases allows
redirecting index names in the API with a simple atomic command.

It will be tough to monitor an outgrowing index, since there is no clear
indication of the type "this cluster's capacity is full because the index
is too large or overloaded, please add nodes now". In real life, heaps will
fill up here and there, latency will increase, and all of a sudden queries
or indexing will congest now and then. If you encounter this, you have no
time to copy an old index to a new one - the copy process also takes
resources, and the cluster may not have enough. You must begin to add nodes
well before the capacity limit is reached.

Instead of copying an index, which is a burden, you should consider
managing a bunch of indices. If an old index is too small, just start a new
one which is bigger, has more shards, and spans more nodes, and add it to
the existing set of indices. With index aliases you can combine many
indices under one index name. This is very powerful.
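
For instance (names illustrative), a new, larger index joins the set behind
an alias with one call, and searches against the alias then span all of its
indices:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Add the new index to the set behind the "app1" alias.
    es.indices.update_aliases(body={"actions": [
        {"add": {"index": "app1-2014-06", "alias": "app1"}}]})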

If you cannot estimate the data growth rate, I also recommend using a
reasonable number of shards from the very start. Say, if you expect 50
servers to each run an ES node, then simply start with 50 shards on a small
number of servers, and add servers over time. You won't have to bother
about shard count for a very long time if you choose such a strategy.

Do not think about rivers; they are not built for such use cases. Rivers
are designed as a "play tool" for fetching data quickly from external
sources, for demo purposes. They are discouraged for serious production
use, as they are not very reliable if they run unattended.

Jörg



(Todd Nine) #7

Hey Jörg,
Thank you for your response. A few questions/points.

In our use cases, the inability to write or read is considered downtime.
Therefore, I cannot disable writes during expansion. Your alias points
raise some interesting research I need to do, and I have a few follow-up
questions.

Our systems are fully multi-tenant. We currently intend to have one index
per application. Each application can have a large number of types within
its index. Each cluster could potentially hold 1000 or more
applications/indexes. Most users will never need more than 5 shards. Some
users are huge power users, though, with billions of documents in a type
and several large types. These are the users I am concerned with.

From my understanding and experimentation, Elasticsearch has two primary
mechanisms for tuning performance to handle load. First is the shard count:
the higher the shard count, the more writes you can accept. Each shard has
a primary which accepts the write and replicates it to its replicas. For
high write throughput, you increase the count of shards to distribute the
load across more nodes. For read throughput, you increase the replica
count. This gives you higher performance on reads, since you now have more
than one node per shard that you can query for results.

Per your suggestion, rather than copy the documents from an index with 5
shards to an index with 10 shards, I can theoretically create a new index
and then add it to the alias. For instance, I envision this in the
following way.

Aliases:
app1-read
app1-write

Initial creation:

app1-read -> Index: App1-index1 (5 shards, 2 replicas)
app1-write -> Index: App1-index1

User begins to have too much data in App1-index1. The 5 shards are causing
hotspots. The following actions take place.

  1. Create App1-index2 (5 shards, 2 replicas)

  2. Update app1-write -> App1-index1 and App1-index2

  3. Update app1-read -> App1-index1 and App1-index2
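
In client calls, that expansion might look like the sketch below (index
names lowercased, as ES requires; whether a write alias over two indexes
behaves the way I hope is exactly my question that follows):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    es.indices.create(index="app1-index2", body={
        "settings": {"number_of_shards": 5, "number_of_replicas": 2}})

    # A single update_aliases call applies all changes atomically.
    es.indices.update_aliases(body={"actions": [
        {"add": {"index": "app1-index2", "alias": "app1-write"}},
        {"add": {"index": "app1-index2", "alias": "app1-read"}}]})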

I have some uncertainty around this I could use help with.

Once the new index has been added, how are requests routed? For instance,
if I have a document "doc1" in App1-index1, and I delete it after adding
the new index, is the alias smart enough to route the delete to
App1-index1, or will it broadcast the operation to both indexes? In other
words, if I create an alias with 2 or more indexes, will the alias perform
routing, or is it a broadcast to all indexes? How does this scale in
practice?

In my current understanding, using an alias on read is simply going to be
the same as if you had doubled the shards. A 10-shard (replication 2) index
is functionally equivalent to an alias that aggregates 2 indexes with 5
shards and 2 replicas each. All 10 shards (or one of their replicas) would
need to be read to aggregate the results, correct?

Lastly, we have a multi-region requirement. We are eventually consistent by
design between regions. We want documents written in one region to be
replicated to another for query. All documents are immutable; this is by
design, so we don't get document version collisions between data centers.
What are some current mechanisms in use in production environments to
replicate indexes across regions? I just can't seem to find any. Rivers
were my initial thinking, so regions can pull data from other regions.
However, if this isn't a good fit, what is?

Thanks guys!
Todd



(Jörg Prante) #8

Thanks for raising the questions; I will come back later in more detail.

Just a quick note: the idea that "shards scale writes" and "replicas scale
reads" is correct, but Elasticsearch is also "elastic", which means it
"scales out" by adding node hardware. The shard/replica scaling pattern
finds its limits in node hardware, because shards/replicas are tied to
machines, and there are hard resource constraints, mostly disk I/O and
memory related.

In the end, you can take as a rule of thumb:

  • add replicas to scale "read" load
  • add new indices (i.e. new shards) to scale "write" load
  • add nodes to scale out the whole cluster, for both read and write load

More later,

Jörg



(Todd Nine) #9

Hey Jörg,
Thanks for the reply. We're using Cassandra heavily in production, so I'm
very familiar with the scale-out concepts. What we've seen in all our
distributed systems is that at some point you reach a saturation of your
capacity on a single node. In the case of ES, to me that would seem to be
the shard count. Eventually, all 5 shards can become too large for a node
to handle updates and reads efficiently. This can be caused by a high
number of documents, or document size, or both. Once we reach this state,
that index is "full" in the sense that the nodes containing its shards can
no longer service traffic at the rate we need them to. We have 2 options.

  1. Get bigger hardware. We do this occasionally, but it's not ideal since
    this is a distributed system.

  2. Scale out, as you said. In the case of write throughput, it seems that
    we can do this with a pattern of alias + new index, but it's not clear
    to me if that's the right approach. My initial thinking is to define
    some sort of routing to the new index that pivots on created date, since
    that's an immutable field (see the sketch below). Thoughts?
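
A sketch of that created-date pivot at the client level (the cutover date,
index names, and naming scheme are made up for illustration):

    from datetime import datetime
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    CUTOVER = datetime(2014, 6, 1)

    def index_for(created):
        # Created date is immutable, so a document's home index never
        # changes once written.
        return "app1-index2" if created >= CUTOVER else "app1-index1"

    doc = {"name": "example", "created": "2014-06-05T10:00:00"}
    es.index(index=index_for(datetime(2014, 6, 5, 10, 0)),
             doc_type="doc", body=doc)

    # Reads still go through the alias that spans both indexes.
    es.search(index="app1-read", body={"query": {"match_all": {}}})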

In the case of read throughput, we can create more replicas. Our system is
about 50/50 now; some users are evenly read/write, others are very read
heavy. I'll probably come up with 2 indexing strategies we can apply to an
application's index, based on heuristics from the operations they're
performing.

Thanks for the feedback!
Todd

On Thu, Jun 5, 2014 at 10:55 AM, joergprante@gmail.com <
joergprante@gmail.com> wrote:

Thanks for raising the questions, I will come back later in more detail.

Just a quick note, the idea about "shards scale write" and "replica scale
read" is correct, but Elasticsearch is also "elastic" which means it
"scales out", by adding node hardware. The shard/replica scale pattern
finds its limits in a node hardware, because shards/replica are tied to
machines, and there are the hard resource constraints, mostly disk I/O and
memory related.

In the end, you can take as a rule of thumb:

  • add replica to scale "read" load
  • add new indices (i.e. new shards) to scale "write" load
  • and add nodes to scale out the whole cluster for both read and write load

More later,

Jörg

On Thu, Jun 5, 2014 at 7:17 PM, Todd Nine tnine@apigee.com wrote:

Hey Jörg,
Thank you for your response. A few questions/points.

In our use cases, the inability to write or read is considered a
downtime. Therefore, I cannot disable writes during expansion. Your alias
points raise
some interesting research I need to do, and I have a few follow up
questions.

Our systems are fully multi tenant. We currently intend to have 1 index
per application. Each application can have a large number of types within
their index. Each cluster could potentially hold 1000 or more
applications/indexes. Most users will never need more than 5 shards.
Some users are huge power users, billions of documents for a type with
several large types. These are the users I am concerned with.

From my understanding and experimentation, Elastic Search has 2 primary
mechanisms for tuning performance to handle the load. First is the shard
count. The higher the shard count, the more writes you can accept. Each
shard has a master which accepts the write, and replicates the write to
it's replicas. For high write throughput, you increase the count of shards
to distribute the load across more nodes. For read throughput, you
increase the replica count. This gives you higher performance on read,
since you now have more than 1 node per shard you can query to get results.

Per your suggestion, rather than than copy the documents from an index
with 5 shards to an index with 10 shards, I can theoretically create a new
index then add it the alias. For instance, I envision this in the following
way.

Aliases:
app1-read
app1-write

Initial creation:

app1-read -> Index: App1-index1 (5 shards, 2 replicas)
app1-write -> Index: App1-index1

User begins to have too much data in App1-index1. The 5 shards are
causing hotspots. The following actions take place.

  1. Create App1-index2 (5 shards, 2 replicas)

  2. Update app1-read -> App1-index1 and App1-index2

  3. Update app1-read -> App1-index1 and App1-index2

I have some uncertainty around this I could use help with.

Once the new index has been added, how are requests routed? For instance,
if I have a document "doc1" in App1-index1 and I delete it after adding the
new index, is the alias smart enough to update only App1-index1, or will it
broadcast the operation to both indexes? In other words, if I create an
alias over 2 or more indexes, will the alias perform routing, or is every
operation broadcast to all indexes? How does this scale in practice?

In my current understanding, using an alias on read is simply the same as
if you had doubled the shards. A 10-shard (replication 2) index is
functionally equivalent to an alias that aggregates 2 indexes with 5 shards
and 2 replicas each. All 10 shards (or one of their replicas) would need to
be read to aggregate the results, correct?

Lastly, we have a multi-region requirement. We are eventually consistent
by design between regions. We want documents written in one region to be
replicated to other regions for query. All documents are immutable. This is
by design, so we don't get document version collisions between data
centers. What mechanisms are currently used in production environments to
replicate indexes across regions? I just can't seem to find any. Rivers
were my initial thinking, so that regions could pull data from other
regions. If that isn't a good fit, what is?

Thanks guys!
Todd

On Thu, Jun 5, 2014 at 9:21 AM, joergprante@gmail.com wrote:

The knapsack plugin does not require downtime. You can increase the shard
count on the fly by copying an index over to another index (even on
another cluster). The index should be write-disabled during the copy,
though.

Increasing the replica level is a very simple command; no index copy is
required.

It seems you have a slight misconception about controlling replica shards.
You cannot start dedicated copy actions from the replicas only. (By setting
_preference for search, this works for queries.)

Maybe I do not understand your question, but what do you mean by "dual
writes"? And why would you "move" an index?

Please check out index aliases. The concept of index aliases allows
redirecting index names in the API with a simple atomic command.
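
For example, repointing a name from an old index to a new one is one atomic
action set (a sketch; the index and alias names are made up):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Clients keep using "app1"; the switch is atomic, with no window where
    # the name resolves to nothing or to both indices.
    es.indices.update_aliases(body={"actions": [
        {"remove": {"index": "app1-v1", "alias": "app1"}},
        {"add": {"index": "app1-v2", "alias": "app1"}},
    ]})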

It will be tough to monitor an outgrowing index, since there is no clear
signal of the form "this cluster's capacity is full because the index is
too large or overloaded, please add nodes now". In real life, heaps will
fill up here and there, latency will increase, and all of a sudden queries
or indexing will congest now and then. If you encounter this, you have no
time to copy an old index to a new one - the copy process also takes
resources, and the cluster may not have enough. You must begin to add nodes
well before the capacity limit is reached.

Instead of copying an index, which is a burden, you should consider
managing a set of indices. If an old index is too small, just start a new
one that is bigger, has more shards, and spans more nodes, and add it to
the existing set of indices. With an index alias you can combine many
indices under one index name. This is very powerful.

If you cannot estimate the data growth rate, I also recommend using a
generous number of shards from the very start. Say, if you expect to
eventually run ES nodes on 50 servers, then simply start with 50 shards on
a small number of servers, and add servers over time. You won't have to
bother about shard count for a very long time if you choose such a
strategy.
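
A sketch of that over-allocation at creation time (numbers from the example
above; the index name is hypothetical):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # 50 primaries from day one, even while only a few nodes exist; the
    # shards spread out automatically as servers are added later.
    es.indices.create(index="app1-index1",
                      body={"settings": {"number_of_shards": 50,
                                         "number_of_replicas": 1}})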

Do not think about rivers; they are not built for such use cases. Rivers
are designed as a "play tool" for quickly fetching data from external
sources, for demo purposes. They are discouraged for serious production
use, and they are not very reliable when run unattended.

Jörg

On Thu, Jun 5, 2014 at 7:33 AM, Todd Nine tnine@apigee.com wrote:

  1. https://github.com/jprante/elasticsearch-knapsack might do what
    you want.

This won't quite work for us. We can't have any downtime, so it seems
like an A/B system is more appropriate. What we're currently thinking is
the following.

Each index has 2 aliases, a read and a write alias.

  1. Both read and write aliases point to an initial index, say shard
    count 5, replication 2. (ES is not our canonical data source, so we're ok
    with reconstructing search data.)

  2. We detect via monitoring that we're going to outgrow an index. We create
    a new index with more shards, and potentially a higher replication level
    depending on read load. We then update the write alias to point to both
    the old and new index. All clients will then begin dual writes to both
    indexes.

  3. While we're writing to both old and new, some process (maybe a river?)
    will begin copying documents updated before the write-alias change time
    from the old index to the new index. Ideally, it would be nice if each
    replica could copy only its local documents into the new index. We'll
    want to throttle this as well. Each node will need additional operational
    capacity to accommodate the dual writes as well as accepting the writes
    of the "old" documents. I'm concerned that if we push this through too
    fast, we could cause interruptions of service. (A single-worker sketch of
    this copy follows below.)

  4. Once the copy is complete, the read alias is moved to the new index,
    and the old index is removed from the system.

Could such a process be implemented as a plugin? If the work can happen in
parallel across all nodes containing a shard, we can increase the process's
speed dramatically. If we have a single worker, like a river, it might take
too long.
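
For reference, a single-worker version of the copy in step 3 can be
approximated with a scroll-and-bulk loop using the Python client's helpers.
An untested sketch, not the parallel per-shard variant described above; the
index names and the created-date filter are illustrative:

    from elasticsearch import Elasticsearch, helpers

    es = Elasticsearch(["localhost:9200"])

    # Only copy documents created before the dual-write cutover.
    query = {"query": {"range": {"created": {"lt": "2014-06-05T00:00:00"}}}}

    def actions():
        for hit in helpers.scan(es, index="app1-index1", query=query):
            yield {
                "_op_type": "index",
                "_index": "app1-index2",
                "_type": hit["_type"],
                "_id": hit["_id"],
                "_source": hit["_source"],
            }

    # Throttling/backoff would go here in a real run.
    helpers.bulk(es, actions())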



(Jörg Prante) #10

Yes, routing is very powerful. The general use case is to introduce a
mapping over a large number of shards so that related data is stored on the
same shard, which is good for locality. For example, combined with an index
alias that carries a filter term, you can create one big concrete index and
use segments of it, so many thousands of users can share a single index.
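
A sketch of that pattern: one concrete shared index, with a filtered and
routed alias per tenant (the names and the tenant field are made up):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # The filter scopes every search through the alias to one tenant;
    # the routing value pins that tenant's documents to a single shard.
    es.indices.update_aliases(body={"actions": [
        {"add": {"index": "shared-index",
                 "alias": "tenant-42",
                 "filter": {"term": {"tenant_id": "42"}},
                 "routing": "42"}},
    ]})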

Another use case for routing might be a time-windowed index where each
shard holds a time window. There are many examples of this around Logstash.

The combination of index aliases and routing is also known as "shard
overallocation". The concept might look complex at first, but the other
option would be to create a concrete index for every user, which can also
be a waste of resources.

Though some here on this list have managed to run tens of thousands of
shards on a single node - I still find this breathtaking - a few dozen
shards per node should be OK. Each shard takes some MB on the heap (there
are tricks to reduce this a bit), so a high number of shards consumes a
fair amount of resources even without executing a single query. There are
other factors worth considering, for example a size limit for a single
shard. It can be quite handy to let ES move around shards of 1-10 GB
instead of a few hundred GB - it is faster at index recovery and at
reallocation time.

Jörg



(Todd Nine) #11

Hey guys,
One last question. Does anyone do multi-region replication with ES? My
current understanding is that in a multi-region cluster, documents will be
routed to the region with the node that "owns" the shard the document is
being written to. In our use cases, our cluster must survive a WAN outage,
and we don't want the latency of writes or reads crossing the WAN
connection. Our documents are immutable, so we can work with multi-region
writes. We simply need to replicate the writes to other regions, as well as
the deletes. Are there any examples or implementations of this?

Thanks,
Todd



(Mark Walkom) #12

There are a few people in the IRC channel that have done it; however,
cross-WAN clusters are generally not recommended, as ES is sensitive to
latency.

You may be better off using the snapshot/restore process, or another
export/import method.
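
For reference, snapshot/restore is driven by a repository visible to both
clusters. A rough sketch (the repository name and location are made up; an
"fs" repository path must be reachable by all nodes, e.g. a shared mount):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    es.snapshot.create_repository(repository="region_backup", body={
        "type": "fs",
        "settings": {"location": "/mount/es-backups"},
    })
    es.snapshot.create(repository="region_backup", snapshot="snap_1",
                       body={"indices": "app1-index1"},
                       wait_for_completion=True)

    # Then, on the destination region's cluster (same repository contents):
    # es.snapshot.restore(repository="region_backup", snapshot="snap_1")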

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com



(Todd Nine) #13

Hey Mark,
Thanks for this. It seems like our best bet will be to manage the indexes
identically across all regions, since they're really mirrors. Since our
documents are immutable, we'll just queue each write and delete for every
region, and each region will apply them to its own index (rough sketch
below). It's the only solution I can think of, since we want to continue
processing when WAN connections go down.
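
A minimal sketch of such a per-region consumer (the queue, the message
shape, and the op names are all hypothetical):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])  # this region's local cluster

    def apply(msg):
        # msg is a queued operation fanned out to every region, e.g.
        # {"op": "index", "index": ..., "type": ..., "id": ..., "doc": ...}
        if msg["op"] == "index":
            es.index(index=msg["index"], doc_type=msg["type"],
                     id=msg["id"], body=msg["doc"])
        elif msg["op"] == "delete":
            # Documents are immutable, so replays are harmless; a 404 just
            # means another consumer already deleted this document.
            es.delete(index=msg["index"], doc_type=msg["type"],
                      id=msg["id"], ignore=404)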

On Tue, Jun 10, 2014 at 3:24 PM, Mark Walkom markw@campaignmonitor.com
wrote:

There are a few people in the IRC channel that have done it, however,
generally, cross-WAN clusters are not recommended as ES is sensitive to
latency.

You may be better off using the snapshot/restore process, or another
export/import method.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com

On 11 June 2014 03:11, Todd Nine tnine@apigee.com wrote:

Hey guys,
One last question. Does anyone do multi region replication with ES?
My current understanding is that with a multi region cluster, documents
will be routed to the Region with a node that "owns" the shard the document
is being written to. In our use cases, our cluster must survive a WAN
outage. We don't want the latency of the writes or reads crossing the WAN
connection. Our documents are immutable, so we can work with multi region
writes. We simply need to replicate the write to other regions, as well as
the deletes. Are there any examples or implementations of this?

Thanks,
Todd

On Thursday, June 5, 2014 4:11:44 PM UTC-6, Jörg Prante wrote:

Yes, routing is very powerful. The general use case is to introduce a
mapping to a large number of shards so you can store parts of data all at
the same shard which is good for locality concepts. For example, combined
with index alias working on filter terms, you can create one big concrete
index, and use segments of it, so many thousands of users can share a
single index.

Another use case for routing might be a time windowed index where each
shard holds a time window. There are many examples around logstash.

The combination of index alias and routing is also known as "shard
overallocation". The concept might look complex first but the other option
would be to create a concrete index for every user which might also be a
waste of resources.

Though some here on this list have managed to run 10000s of shards on a
single node I still find this breathtaking - a few dozens of shards per
node should be ok. Each shard takes some MB on the heap (there are tricks
to reduce this a bit) but a high number of shards takes a handful of
resources even without executing a single query. There might be other
factors worth considering, for example a size limit for a single shard. It
can be quite handy to let ES having move around shards of 1-10 GB instead
of a few 100 GB - it is faster at index recovery or at reallocation time.

Jörg

On Thu, Jun 5, 2014 at 9:44 PM, Todd Nine tn...@apigee.com wrote:

Hey Jorg,
Thanks for the reply. We're using Cassandra heavily in production, so
I'm very familiar with the scale-out concepts. What we've seen in all
our distributed systems is that at some point you saturate the capacity
of a single node. In the case of ES, that limit seems to be shard count.
Eventually, all 5 shards can become too large for a node to handle
updates and reads efficiently. This can be caused by a high number of
documents, by document size, or both. Once we reach this state, that
index is "full" in the sense that the nodes containing it can no longer
service traffic at the rate we need. We have 2 options.

  1. Get bigger hardware. We do this occasionally, but it's not ideal
    since this is a distributed system.

  2. Scale out, as you said. In the case of write throughput, it seems
    we can do this with a pattern of alias + new index, but it's not clear
    to me if that's the right approach. My initial thinking is to define
    some sort of routing that pivots on created date into the new index,
    since that's an immutable field (rough sketch after this list).
    Thoughts?
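
A rough sketch of that created-date pivot, under the assumption of a
single hard cutover timestamp (index names and the date are hypothetical):

    from datetime import datetime

    # Hypothetical cutover: documents created before this instant live in
    # the old index; newer ones are written to the new, larger index.
    CUTOVER = datetime(2014, 6, 1)

    def target_index(created_at):
        # Pick the concrete index by the document's immutable created date.
        return "app1-index1" if created_at < CUTOVER else "app1-index2"

    # e.g. es.index(index=target_index(doc["created"]), doc_type="doc",
    #               body=doc)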

In the case of read throughput, we can create more replicas. Our
system is about 50/50 now; some users are evenly read/write, others are
very read heavy. I'll probably come up with 2 indexing strategies we can
apply to an application's index based on heuristics from the operations
they're performing.

Thanks for the feedback!
Todd

On Thu, Jun 5, 2014 at 10:55 AM, joerg...@gmail.com <joerg...@gmail.com>
wrote:

Thanks for raising the questions, I will come back later in more
detail.

Just a quick note: the idea that "shards scale writes" and "replicas
scale reads" is correct, but Elasticsearch is also "elastic", which
means it "scales out" by adding node hardware. The shard/replica scaling
pattern finds its limits in node hardware, because shards and replicas
are tied to machines, and machines have hard resource constraints,
mostly disk I/O and memory related.

In the end, you can take as a rule of thumb:

  • add replicas to scale "read" load (see the sketch after this list)
  • add new indices (i.e. new shards) to scale "write" load
  • and add nodes to scale out the whole cluster for both read and write
    load
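
As a sketch of the first rule (the index name is a placeholder), the
replica count can be changed on a live index with a single settings call
in elasticsearch-py:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Scale reads: replica count is a dynamic, per-index setting.
    es.indices.put_settings(index="app1-index1",
                            body={"index": {"number_of_replicas": 2}})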

More later,

Jörg

On Thu, Jun 5, 2014 at 7:17 PM, Todd Nine tn...@apigee.com wrote:

Hey Jörg,
Thank you for your response. A few questions/points.

In our use cases, the inability to write or read is considered
downtime. Therefore, I cannot disable writes during expansion. Your
alias points raise some interesting research I need to do, and I have a
few follow-up questions.

Our systems are fully multi-tenant. We currently intend to have 1
index per application. Each application can have a large number of types
within its index. Each cluster could potentially hold 1000 or more
applications/indexes. Most users will never need more than 5 shards.
Some users are huge power users, with billions of documents across
several large types. These are the users I am concerned with.

From my understanding and experimentation, Elasticsearch has 2
primary mechanisms for tuning performance under load. First is the shard
count: the higher the shard count, the more writes you can accept. Each
shard has a primary which accepts the write and replicates it to its
replicas. For high write throughput, you increase the shard count to
distribute the load across more nodes. For read throughput, you increase
the replica count. This gives you higher read performance, since you now
have more than 1 node per shard that you can query for results.
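
Both knobs are set when an index is created (only the replica count can
be changed afterwards); a hedged sketch with elasticsearch-py, with an
invented index name:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # 5 primaries spread the write load; 2 replicas per primary serve reads.
    es.indices.create(index="app1-index1", body={
        "settings": {"number_of_shards": 5, "number_of_replicas": 2}})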

Per your suggestion, rather than copy the documents from an index
with 5 shards to an index with 10 shards, I can theoretically create a
new index and then add it to the alias. For instance, I envision this
the following way.

Aliases:
app1-read
app1-write

Initial creation:

app1-read -> Index: App1-index1 (5 shards, 2 replicas)
app1-write -> Index: App1-index1

User begins to have too much data in App1-index1. The 5 shards are
causing hotspots. The following actions take place (see the sketch after
this list).

  1. Create App1-index2 (5 shards, 2 replicas)

  2. Update app1-read -> App1-index1 and App1-index2

  3. Update app1-write -> App1-index1 and App1-index2
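
A hedged sketch of steps 1-3; update_aliases applies all alias changes
in one atomic call. Names follow the scenario above, lowercased because
ES requires lowercase index names:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Step 1: create the additional index.
    es.indices.create(index="app1-index2", body={
        "settings": {"number_of_shards": 5, "number_of_replicas": 2}})

    # Steps 2-3: point both aliases at the new index atomically; the old
    # index stays in both aliases.
    es.indices.update_aliases(body={"actions": [
        {"add": {"index": "app1-index2", "alias": "app1-read"}},
        {"add": {"index": "app1-index2", "alias": "app1-write"}}]})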

I have some uncertainty around this I could use help with.

Once the new index has been added, how are requests routed? For
instance, if I have a document "doc1" in App1-index1 and I delete it
after adding the new index, is the alias smart enough to update only
App1-index1, or will it broadcast the operation to both indexes? In
other words, if I create an alias with 2 or more indexes, will the alias
perform routing, or is it a broadcast to all indexes? How does this
scale in practice?

In my current understanding, using an alias on read is simply the
same as if you doubled the shards. A 10-shard index (replication 2) is
functionally equivalent to an alias that aggregates 2 indexes with 5
shards and 2 replicas each. All 10 shards (or one of their replicas)
would need to be read to aggregate the results, correct?

Lastly, we have a multi-region requirement. We are eventually
consistent by design between regions. We want documents written in one
region to be replicated to the others for query. All documents are
immutable; this is by design, so we don't get document version
collisions between data centers. What mechanisms are currently used in
production environments to replicate indexes across regions? I just
can't seem to find any. Rivers were my initial thinking, so regions can
pull data from other regions. If that isn't a good fit, though, what is?

Thanks guys!
Todd

On Thu, Jun 5, 2014 at 9:21 AM, joerg...@gmail.com <
joerg...@gmail.com> wrote:

The knapsack plugin does not require downtime. You can increase
shards on the fly by copying an index over to another index (even on
another cluster). The index should be write-disabled during the copy,
though.

Increasing the replica level is a very simple command; no index copy
is required.

It seems you have a slight misconception about controlling replica
shards. You cannot start dedicated copy actions from only the replicas.
(For queries this works, by setting the preference parameter on search.)

Maybe I do not understand your question, but what do you mean by
"dual writes"? And why would you "move" an index?

Please check out index aliases. The concept of index aliases allows
redirecting index names in the API with a simple atomic command.

It will be tough to monitor an outgrowing index, since there is no
clear indication of the kind "this cluster's capacity is full because
the index is too large or overloaded, please add nodes now". In real
life, heaps will fill up here and there, latency will increase, and all
of a sudden queries or indexing will congest now and then. If you
encounter this, you have no time to copy an old index to a new one - the
copy process also takes resources, and the cluster may not have enough.
You must begin to add nodes well before the capacity limit is reached.

Instead of copying an index, which is a burden, you should consider
managing a bunch of indices. If an old index is too small, just start a
new one that is bigger, has more shards, and spans more nodes, and add
it to the existing set of indices. With an index alias you can combine
many indices under one index name. This is very powerful.

If you cannot estimate the data growth rate, I also recommend using
a generous number of shards from the very start. Say you expect 50
servers to eventually run an ES node each: then simply start with 50
shards on a small number of servers, and add servers over time. You
won't have to bother about shard count for a very long time if you
choose such a strategy.

Do not think about rivers; they are not built for such use cases.
Rivers are designed as a "play tool" for fetching data quickly from
external sources, for demo purposes. They are discouraged for serious
production use, as they are not very reliable when running unattended.

Jörg

On Thu, Jun 5, 2014 at 7:33 AM, Todd Nine tn...@apigee.com wrote:

  1. https://github.com/jprante/elasticsearch-knapsack might do
    what you want.

This won't quite work for us. We can't have any downtime, so it
seems like an A/B system is more appropriate. What we're currently
thinking is the following.

Each index has 2 aliases, a read and a write alias.

  1. Both read and write aliases point to an initial index. Say shard
    count 5 replication 2 (ES is not our canonical data source, so we're ok
    with reconstructing search data)

  2. We detect via monitoring that we're going to outgrow an index. We
    create a new index with more shards, and potentially a higher
    replication depending on read load. We then update the write alias to
    point to both the old and new index. All clients will then begin dual
    writes to both indexes (see the sketch after this list).

  3. While we're writing to old and new, some process (maybe a
    river?) will begin copying documents updated before the alias change
    from the old index to the new index. Ideally, it would be nice if each
    replica could copy only its local documents into the new index. We'll
    want to throttle this as well. Each node will need additional
    operational capacity to accommodate the dual writes as well as the
    writes of the "old" documents. I'm concerned that if we push this
    through too fast, we could cause interruptions of service.

  4. Once the copy is completed, the read alias is moved to the new
    index, and the old index is removed from the system.
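
A minimal sketch of the dual-write in step 2. One hedge worth noting: to
my knowledge ES rejects index operations sent to an alias that spans more
than one index, so the client writes to each concrete index explicitly
rather than through the write alias (index names are hypothetical):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Hypothetical pair of concrete indexes during the migration window.
    WRITE_INDEXES = ["app1-index1", "app1-index2"]

    def dual_write(doc_type, doc_id, source):
        # Documents are immutable, so writing the same version twice is safe.
        for index in WRITE_INDEXES:
            es.index(index=index, doc_type=doc_type, id=doc_id, body=source)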

Could such a process be implemented as a plugin? If the work can
happen in parallel across all nodes containing a shard, we can increase
the process's speed dramatically. If we have a single worker, like a
river, it might take too long.



(Mark Walkom) #14

You could look at using a queuing system, like RabbitMQ, that your
application drops the data into, then have a Logstash instance in each
DC that pulls off the queue and pushes into ES. That way you can easily
handle replicating the data to multiple endpoints within RabbitMQ.
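
A hedged sketch of the producer side, assuming a RabbitMQ fanout
exchange with one bound queue per DC, via the pika client (exchange and
host names are invented):

    import json
    import pika

    # The fanout exchange copies every message to each bound per-DC queue,
    # where a local Logstash instance consumes it and indexes into ES.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.exchange_declare(exchange="es-docs", exchange_type="fanout")

    def publish(doc):
        channel.basic_publish(exchange="es-docs", routing_key="",
                              body=json.dumps(doc))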

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com



(system) #15