I'm getting a lot of these errors in my Elasticsearch logs, and am also
experiencing a lot of slowness on the cluster...
New used memory 7670582710 [7.1gb] from field [machineName.raw] would be
larger than configured breaker: 7666532352 [7.1gb], breaking
...
New used memory 7674188379 [7.1gb] from field [@timestamp] would be larger
than configured breaker: 7666532352 [7.1gb], breaking
I've looked at the documentation about limiting memory usage
(http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html),
but I don't really understand what is causing this, and more importantly,
how to avoid it...
My cluster is 10 machines with 32GB memory and 8 CPU cores each. I have one
ES node on each machine with 12GB memory allocated. Each machine additionally
runs one Logstash agent (1GB) and one Redis server (2GB).
I have 10 indexes open with one replica per shard, so each node should
only be holding 22 shards (two more for kibana-int).
I'm using Elasticsearch 1.3.3 and Logstash 1.4.2.
Thanks for your help!
-Robin-
I'm still having this problem... has anybody got an idea what the cause /
solution might be?
Thank you!
This is caused by Elasticsearch trying to load fielddata. Fielddata is used
for sorting and for faceting/aggregations. When a query has a sort parameter,
the node will try to load the fielddata for that field for all documents in
the shard, not just those included in the query result. The breaker is
tripped when ES estimates there is not enough heap available to load the
fielddata, so it rejects the query rather than running the node out of
heap space.
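
To see which fields are actually holding fielddata, and roughly how much, something like the following should work on 1.3.x (just a quick diagnostic sketch; localhost:9200 and the field list are placeholders for your own node address and fields):

# Fielddata size per field, per node, for the fields from your log lines:
curl -s 'localhost:9200/_cat/fielddata?v&fields=machineName.raw,@timestamp'

# Full per-node stats, including fielddata memory usage:
curl -s 'localhost:9200/_nodes/stats?pretty'

That will tell you whether it really is machineName.raw and @timestamp eating the heap, or some other high-cardinality field.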
You should probably start by looking at the queries that are being run to
determine what's triggering the error. To deal with it, the options I'm
aware of are to add heap space, add more nodes, or look at using doc_values
to move fielddata off the heap.
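
For the doc_values route: in 1.x it has to be set in the mapping before documents are indexed, and it only applies to not_analyzed string fields and to numeric/date fields, so with daily Logstash indices the usual place is an index template that kicks in for the next new index. A rough sketch, if I remember the 1.x mapping syntax right, assuming your fields are named as in the log lines above (the template name and index pattern are placeholders for whatever you actually use):

curl -XPUT 'localhost:9200/_template/logstash_doc_values' -d '{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "@timestamp": { "type": "date", "doc_values": true },
        "machineName": {
          "type": "string",
          "fields": {
            "raw": {
              "type": "string",
              "index": "not_analyzed",
              "doc_values": true
            }
          }
        }
      }
    }
  }
}'

Existing indices won't pick this up; only indices created after the template is in place will store those fields as doc_values, so the heap relief arrives gradually as old indices age out.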
Kimbro