Is the "indices.memory.index_buffer_size" configuration a cluster-wide
configuration or a per-node configuration? Do I need to set it on every node,
or just the master-eligible node?
What confuses me are "global setting" (which suggests a cluster-wide setting)
and "on a specific node" (which suggests a node-level setting). I could just
try it out, but it's hard to tell whether the setting worked.
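For reference, this setting goes in each node's elasticsearch.yml; it accepts either a percentage of the heap or an absolute value. The value below is the documented default, shown only as an example:

```yaml
# elasticsearch.yml -- example only.
# Accepts a percentage of heap (the default is 10%) or an absolute
# size such as "512mb".
indices.memory.index_buffer_size: 10%
```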
On Sunday, August 24, 2014 3:13:17 PM UTC-7, Mark Walkom wrote:
On 25 August 2014 03:12, Yongtao You <yongt...@gmail.com> wrote:
Hi,
Is the "indices.memory.index_buffer_size" configuration a cluster wide
configuration or per node configuration? Do I need to set it on every node?
Or just the master (eligible) node?
Thanks.
Yongtao
--
You received this message because you are subscribed to the Google
Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to elasticsearc...@googlegroups.com.
On Tue, Aug 26, 2014 at 2:15 PM, Nikolas Everett nik9000@gmail.com wrote:
I just looked at this code!
It's a setting that you set globally at the cluster level, but it takes
effect per node. What that means is that every "active" shard on each node
gets an equal share of that much space. "Active" means it has been written to
in the past six minutes or so. When a node first starts, all shards are
assumed active, and those that are not updated at all lose active status
after the timeout. You can watch the little dance it does by setting
index.engine.internal: DEBUG
in logging.yml.
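The split Nikolas describes is easy to sketch. The 10% figure is the documented default for indices.memory.index_buffer_size, but the heap size and shard count below are made-up numbers, just to illustrate the arithmetic:

```python
# Sketch of how the indexing buffer is divided among active shards,
# per the explanation above. All concrete numbers are illustrative.

def per_shard_buffer(heap_bytes, active_shards, buffer_size_pct=0.10):
    """Each node reserves buffer_size_pct of its heap for indexing;
    every *active* shard on that node gets an equal share."""
    total_buffer = heap_bytes * buffer_size_pct
    return total_buffer / active_shards

heap = 4 * 1024**3  # pretend this node has a 4 GB heap
share_mb = per_shard_buffer(heap, active_shards=20) / 1024**2
print(f"{share_mb:.1f} MB per active shard")  # -> 20.5 MB per active shard
```

So the setting is "global" in the sense that one value applies everywhere, but the buffer it sizes lives separately inside each node's heap.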