Multi-tenancy - level of service guarantee

Hi everyone,

Following my previous question
https://groups.google.com/forum/?fromgroups#!topic/elasticsearch/US-BA4R_Qdc
about examples of clusters in production, I am now wondering about
multi-tenancy and guarantees of service in Elasticsearch:

Multi-tenant cluster: is there a way to guarantee a level of service / do
capacity planning for each tenant using the cluster (each with its own indices)?

Thanks,


How do you guarantee a level of service with any other system? Redundancy
and smart planning and design.

It's no different with ES.
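
For what it's worth, one concrete piece of "smart planning" is shard
allocation filtering: tag each node with a tenant attribute and pin each
tenant's index to its own nodes, so one tenant's load cannot push another
tenant's data off those machines. A rough sketch against the REST API (the
"tenant" node attribute and the index names are made up for the example):

    # Assumes nodes were started with e.g. `node.tenant: tenant_a` in
    # elasticsearch.yml (arbitrary node attributes used for allocation
    # filtering).
    import json
    import requests

    ES = "http://localhost:9200"

    # Pin each (hypothetical) tenant index to the nodes carrying its tag.
    for index, tenant in [("tenant_a_idx", "tenant_a"),
                          ("tenant_b_idx", "tenant_b")]:
        settings = {"index.routing.allocation.require.tenant": tenant}
        resp = requests.put("%s/%s/_settings" % (ES, index),
                            data=json.dumps(settings))
        resp.raise_for_status()

This isolates storage and I/O per tenant, but not shared resources such as
master nodes or thread pools, so treat it as a building block rather than a
hard QoS guarantee.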

On 13 February 2015 at 01:47, rondelvictor@gmail.com wrote:


Regarding RAM allocation for shards:

Since each shard is a Lucene instance: if I have 2 shards, each belonging
to a different index, and each index is used by a different application,
then, given that Lucene relies heavily on the OS cache, how can I make sure
that each Lucene instance will have enough OS cache to work its magic?

Thanks,

On Thursday, 12 February 2015 at 22:44:29 UTC+1, Mark Walkom wrote:


Test :)

The reality is that unless you have enough memory for the OS to cache the
whole thing, it will never happen. But do you really want to do this?

On 14 February 2015 at 00:11, rondelvictor@gmail.com wrote:


  • "Test :)"

Ok!

  • "unless you have enough memory for the OS to cache the whole thing in
    memory then it'll never happen"

What exactly would never happen? If the two shards combined are too big
for the OS cache, would Lucene not use the cache at all?

  • "But do you really want to do this?"

Well, I have three machines, three applications with one index each, and
they all want two replicas...
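
To put rough numbers on that (sizes invented for the example): with 2
replicas on a 3-node cluster, every node ends up holding a full copy of
every index, so every node's page cache has to serve all three applications
at once.

    # Back-of-the-envelope: 3 nodes, 3 indices, 2 replicas each, so each
    # node stores one full copy of every index. All sizes are hypothetical.
    index_sizes_gb = {"app_a": 40, "app_b": 25, "app_c": 60}
    ram_per_node_gb = 64
    heap_per_node_gb = 30                      # reserved for the JVM heap
    page_cache_gb = ram_per_node_gb - heap_per_node_gb

    data_per_node_gb = sum(index_sizes_gb.values())
    print("data per node: %d GB, OS cache left for it: %d GB"
          % (data_per_node_gb, page_cache_gb))
    # -> data per node: 125 GB, OS cache left for it: 34 GB

Unless the hot part of each index is much smaller than its total size, the
three applications end up competing for the same page cache on every node.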

On 16 February 2015 at 09:06 GMT+01:00, Mark Walkom markwalkom@gmail.com wrote:


The OS will use the cache as much as it can, irrespective of whether the
whole index fits or not; I just tried to read between the lines a little
and must have misunderstood :)
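
If you want to sanity-check how much of the data the page cache could
realistically hold, compare each index's on-disk store size with the memory
left over after the JVM heap on each node. A rough sketch (field names as
returned by the _stats API; adjust for your version):

    import requests

    ES = "http://localhost:9200"

    # On-disk size of every index; "total" includes replicas, while
    # "primaries" would count primary shards only.
    stats = requests.get("%s/_stats/store" % ES).json()
    for name, idx in sorted(stats["indices"].items()):
        size_gb = idx["total"]["store"]["size_in_bytes"] / 2.0 ** 30
        print("%-25s %8.1f GB on disk" % (name, size_gb))

    # Compare the per-node share of that total against (RAM - JVM heap)
    # to estimate how much of it the OS cache can hold at once.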

On 17 February 2015 at 00:29, Victor Rondel rondelvictor@gmail.com wrote:

  • "Test :)"

Ok!

  • "unless you have enough memory for the OS to cache the whole thing
    in memory then it'll never happen"

What exactly would never happen? Like, if the two shards combined are too
big for the OS cache, Lucene wouldn't use the cache at all?

  • "But do you really want to do this?"

Well, I have three machines, three applications with one index each, and
they all want two replicas...

2015-02-16 9:06 GMT+01:00 Mark Walkom markwalkom@gmail.com:

Test :slight_smile:

But the reality is, that unless you have enough memory for the OS to
cache the whole thing in memory then it'll never happen. But do you
really want to do this?

On 14 February 2015 at 00:11, rondelvictor@gmail.com wrote:

Regarding shards RAM allocation :

  • Since each shard comes with a Lucene instance :
  • If I have 2 shards, each belonging to a different index, each
    index being used by a different application.
    Given that each shard's Lucene highly uses OS cache, how can I
    certify that each Lucene will have enough OS cache for its magic to perform?

Thanks,

Le jeudi 12 février 2015 22:44:29 UTC+1, Mark Walkom a écrit :

How do you guarantee a level of service provided any other way?
Redundancy and smart planning and design.

It's no different with ES.

--
You received this message because you are subscribed to the Google
Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/5ca10ab5-8e25-4072-828f-7a4fafb3afdf%40googlegroups.com
https://groups.google.com/d/msgid/elasticsearch/5ca10ab5-8e25-4072-828f-7a4fafb3afdf%40googlegroups.com?utm_medium=email&utm_source=footer
.
For more options, visit https://groups.google.com/d/optout.

--
You received this message because you are subscribed to a topic in the
Google Groups "elasticsearch" group.
To unsubscribe from this topic, visit
https://groups.google.com/d/topic/elasticsearch/FGKUmzn-WSs/unsubscribe.
To unsubscribe from this group and all its topics, send an email to
elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/CAEYi1X8jCH8jhtBXkc35G72aeK7etLmwHUbnjwr3nnL5p14H%2BQ%40mail.gmail.com
https://groups.google.com/d/msgid/elasticsearch/CAEYi1X8jCH8jhtBXkc35G72aeK7etLmwHUbnjwr3nnL5p14H%2BQ%40mail.gmail.com?utm_medium=email&utm_source=footer
.

For more options, visit https://groups.google.com/d/optout.

--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/CAHTnFwWhNnZGpVBmAA_VqNXgx1Z3XrwkP51XpTmEsK9Y4XFHdw%40mail.gmail.com
https://groups.google.com/d/msgid/elasticsearch/CAHTnFwWhNnZGpVBmAA_VqNXgx1Z3XrwkP51XpTmEsK9Y4XFHdw%40mail.gmail.com?utm_medium=email&utm_source=footer
.
For more options, visit https://groups.google.com/d/optout.

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/CAEYi1X82%3DoeSww7ewwEDfGkM4vM4Wngdy1qT0V-gkY7v7E3XxQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.