Fielddata breaker question

Hi,

I'd like to know how the fielddata breaker settings are used in a cluster.
We had a single index in production without any issue, but when we added
some new indices we started to have issues related to the fielddata breaker
settings, on the old index and on some of the other indices:

SearchPhaseExecutionException[Failed to execute phase [query], all shards
failed; shardFailures {[EQ-GzbelTkqfguEfbElNLA][log-2014-03][0]:
ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException:
Data too large, data would be larger than limit of [845047398] bytes];
nested:
UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException:
Data too large, data would be larger than limit of [845047398] bytes];
nested: CircuitBreakingException[Data too large, data would be larger than
limit of [845047398] bytes]; }{[EQ-GzbelTkqfguEfbElNLA][log-2014-03][1]:
ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException:
Data too large, data would be larger than limit of [845047398] bytes];
nested:
UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException:
Data too large, data would be larger than limit of [845047398] bytes];
nested: CircuitBreakingException[Data too large, data would be larger than
limit of [845047398] bytes]; }]

So I wonder whether it's expected that the old index is affected by the
addition of some other indices.


The settings for the fielddata breaker are global. Any index can be
affected by it, new indices as well as already existing ones.

For example, when new data is added to an existing index, new field data
entries may be created. If there isn't sufficient room left because field
data entries for the newly created indices are already taking memory, then
the error you mentioned can occur.
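
For reference, on 1.x the limit comes from the indices.fielddata.breaker.limit
node setting (a percentage of the JVM heap) and, if I remember correctly, it
can be inspected and adjusted at runtime through the cluster settings API,
along these lines:

# show any explicitly set breaker settings (defaults are not listed)
curl 'localhost:9200/_cluster/settings?pretty'

# temporarily raise (or lower) the fielddata breaker limit, e.g. to 85% of heap
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "indices.fielddata.breaker.limit": "85%" }
}'

That only moves the threshold, of course; it doesn't explain why the
estimation grows in the first place.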


--
Kind regards,

Martijn van Groningen


Isn't it a bit weird that we hit an 800MB limit and short-circuited the data processing when our whole indices' size is only 140MB (half that, actually, since it includes a backup node)?


At the moment we have a whole index size of less than 100MB (less than
200MB including the backed-up data) and the estimated_size is 1.4GB... How
are we supposed to deal with that kind of trouble?
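
In case it matters, here is what we can pull to see which fields account
for the field data (syntax from memory for the 1.x APIs; the wildcard just
matches every field):

# per-node field data usage, broken down by field
curl 'localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty'

# the same breakdown, aggregated per index
curl 'localhost:9200/_stats/fielddata?fields=*&pretty'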


Hi,

To be more precise, here are the graphs of the metrics we have been
monitoring since we started having the fielddata breaker issue:

https://lh4.googleusercontent.com/-esvzNzxQefM/Ux1ziG-KMoI/AAAAAAAAAGs/xH05cbTpIz0/s1600/elasticsearch_breaker-week.png

https://lh4.googleusercontent.com/-lJ1ib0PGrj8/Ux1zlcrVUcI/AAAAAAAAAG0/HvQpdTC-z_k/s1600/elasticsearch_docs-week.png

https://lh3.googleusercontent.com/-M_q2MnRllxE/Ux1zniFzMzI/AAAAAAAAAG8/QQk7PnPc6-c/s1600/elasticsearch_index_size-week.png

As one can see, the indices grow roughly linearly and their size remains
relatively small, while the fielddata breaker estimated size grows
exponentially.


Yes, the breaker estimated size does grow quickly. Can you share the same
graphs for JVM heap used and field data size?
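
All three values can be pulled from a single node stats call, if that makes
the graphing easier:

curl 'localhost:9200/_nodes/stats?pretty'
# per node, the interesting values are:
#   jvm.mem.heap_used_in_bytes                 (heap actually used)
#   indices.fielddata.memory_size_in_bytes     (real field data size)
#   fielddata_breaker.estimated_size_in_bytes  (what the breaker thinks is loaded)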


--
Kind regards,

Martijn van Groningen


I've asked our hosting provider to monitor these metrics. To avoid any
confusion, the breaker size we graph is actually the
fielddata_breaker.estimated_size_in_bytes value from the /_nodes/stats
endpoint. Thanks for following this thread :)

Le lundi 10 mars 2014 09:34:15 UTC+1, Martijn v Groningen a écrit :

Yes, the breaker indices size does grow quickly. Can you share the same
graphs for jvm heap used and field data size?

On 10 March 2014 15:16, Dunaeth <lomig...@gmail.com <javascript:>> wrote:

Hi,

In order to be more precise, here are the graphs of the metrics we
monitor since we've had the fielddata breaker issue :

https://lh4.googleusercontent.com/-esvzNzxQefM/Ux1ziG-KMoI/AAAAAAAAAGs/xH05cbTpIz0/s1600/elasticsearch_breaker-week.png

https://lh4.googleusercontent.com/-lJ1ib0PGrj8/Ux1zlcrVUcI/AAAAAAAAAG0/HvQpdTC-z_k/s1600/elasticsearch_docs-week.png

https://lh3.googleusercontent.com/-M_q2MnRllxE/Ux1zniFzMzI/AAAAAAAAAG8/QQk7PnPc6-c/s1600/elasticsearch_index_size-week.png

As one can see, the indices grow kind of linearly with a size which
remains relatively small when the fielddata breaker estimated size grows
exponentially.

Le jeudi 6 mars 2014 14:49:04 UTC+1, Dunaeth a écrit :

At the moment, we have a whole index size of less than 100MB (less than
200MB with backuped data) and the estimated_size is 1.4GB... How are we
supposed to deal we that kind of trouble ?

Le mardi 4 mars 2014 06:50:56 UTC+1, Dunaeth a écrit :

Isn't it a bit weird that we reached a 800MB limit and shortcircuited
the data processing when our whole indices size is only 140MB (half this
size actually since it includes a backup node) ?

--
You received this message because you are subscribed to the Google Groups
"elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to elasticsearc...@googlegroups.com <javascript:>.
To view this discussion on the web visit
https://groups.google.com/d/msgid/elasticsearch/73526a94-b16b-48b6-9a27-526b194b145f%40googlegroups.comhttps://groups.google.com/d/msgid/elasticsearch/73526a94-b16b-48b6-9a27-526b194b145f%40googlegroups.com?utm_medium=email&utm_source=footer
.
For more options, visit https://groups.google.com/d/optout.

--
Met vriendelijke groet,

Martijn van Groningen

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/4e171de7-8bd7-4cb4-8a83-989d7dddee21%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

For now, I'd say the field data size is quite flat, whereas the JVM heap
used grows as fielddata_breaker.estimated_size_in_bytes grows. I'll post
graphs once they're relevant.
If it's a JVM heap issue, could it be due to some kind of caching problem
(though the filter_cache seems small on each shard)?
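
To rule the caches in or out, we could compare the per-index sizes with
something like this (assuming I have the 1.x indices stats metric names
right):

# filter cache and field data size, broken down per index
curl 'localhost:9200/_stats/filter_cache,fielddata?pretty'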


Here are the requested metric graphs for our two nodes:

https://lh5.googleusercontent.com/-kdSq9BgxqGs/Ux7Gz28b_aI/AAAAAAAAAHM/cggi3HIp5us/s1600/elasticsearch_breaker-day.png

https://lh6.googleusercontent.com/-k4Of7rvaP0I/Ux7G4BAvKeI/AAAAAAAAAHU/zUVdNuonv0A/s1600/elasticsearch2_breaker-day.png

Something that might help to identify the issue: we didn't have any trouble
with only one index, and it started when we added a statistics data
structure using Logstash with monthly indices. We use these indices to
store the data and perform many percolate queries against a specific tester
index (no data, just percolation queries).
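
For context, the calls look roughly like this; only the tester index name
is real, while the "stat" type name, the query body and the document fields
below are simplified, made-up examples:

# register a percolation query in the tester index
curl -XPUT 'localhost:9200/tester/.percolator/1' -d '{
  "query": { "term": { "status": "error" } }
}'

# percolate a candidate stat document before indexing it
curl -XGET 'localhost:9200/tester/stat/_percolate' -d '{
  "doc": { "status": "error", "value": 42 }
}'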


The percolator can take up a big part of your JVM heap. How many percolator
queries are loaded (this can be seen via the cluster stats API)?
It is also weird that the fielddata size line stays at the bottom; via
which API do you fetch the field data size stat?
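
The loaded query count should show up in the cluster stats output, and it
is also reported per node under indices.percolate.queries in the node
stats, e.g.:

curl 'localhost:9200/_cluster/stats?pretty'
curl 'localhost:9200/_nodes/stats/indices?pretty'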


--
Kind regards,

Martijn van Groningen


We have 5 percolator queries registered in our tester index, and we
percolate each stat document before inserting it into the time-based
indices.
The fielddata size comes from the
nodes.node.indices.fielddata.memory_size_in_bytes field of the
/_nodes/stats endpoint.


Hi Dunaeth,

Can you attach the output of curl 'localhost:9200/_nodes/stats?all'? I
would like to compare the current breaker estimation with the actual field
data usage (they should be the same if no field data loading is currently
happening).

;; Lee


5 percolator queries isn't a big deal. I was thinking of something on the
order of 100k percolator queries and up.


--
Kind regards,

Martijn van Groningen


Here it is:

{
"cluster_name": "cluster",
"nodes": {
"G9QD__pjSX6EEAgf0-R6DA": {
"timestamp": 1394541121431,
"name": "Rachel Grey",
"transport_address": "inet[/10.16.75.4:9300]",
"host": "esnode2",
"ip": [
"inet[/10.16.75.4:9300]",
"NONE"
],
"indices": {
"docs": {
"count": 267760,
"deleted": 27
},
"store": {
"size_in_bytes": 117169306,
"throttle_time_in_millis": 529910
},
"indexing": {
"index_total": 72967,
"index_time_in_millis": 680811,
"index_current": 0,
"delete_total": 3361,
"delete_time_in_millis": 350,
"delete_current": 0
},
"get": {
"total": 0,
"time_in_millis": 0,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 0,
"missing_time_in_millis": 0,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 1867521,
"query_time_in_millis": 24867921,
"query_current": 0,
"fetch_total": 513026,
"fetch_time_in_millis": 448526,
"fetch_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 172,
"total_time_in_millis": 297808,
"total_docs": 1616101,
"total_size_in_bytes": 480732242
},
"refresh": {
"total": 11450,
"total_time_in_millis": 559302
},
"flush": {
"total": 217,
"total_time_in_millis": 4368
},
"warmer": {
"current": 0,
"total": 12265,
"total_time_in_millis": 28495
},
"filter_cache": {
"memory_size_in_bytes": 2501878,
"evictions": 0
},
"id_cache": {
"memory_size_in_bytes": 0
},
"fielddata": {
"memory_size_in_bytes": 2048898,
"evictions": 0
},
"percolate": {
"total": 15659,
"time_in_millis": 33899,
"current": 0,
"memory_size_in_bytes": 50602805,
"memory_size": "48.2mb",
"queries": 5
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 46,
"memory_in_bytes": 3048144
},
"translog": {
"operations": 289,
"size_in_bytes": 0
}
},
"os": {
"timestamp": 1394541121435,
"uptime_in_millis": 19092151,
"load_average": [
1.1,
0.67,
0.61
],
"cpu": {
"sys": 0,
"user": 23,
"idle": 76,
"usage": 23,
"stolen": 0
},
"mem": {
"free_in_bytes": 5992665088,
"used_in_bytes": 10874990592,
"free_percent": 71,
"used_percent": 28,
"actual_free_in_bytes": 12106989568,
"actual_used_in_bytes": 4760666112
},
"swap": {
"used_in_bytes": 0,
"free_in_bytes": 0
}
},
"process": {
"timestamp": 1394541121435,
"open_file_descriptors": 160,
"cpu": {
"percent": 46,
"sys_in_millis": 323580,
"user_in_millis": 27690580,
"total_in_millis": 28014160
},
"mem": {
"resident_in_bytes": 1591611392,
"share_in_bytes": 34516992,
"total_virtual_in_bytes": 2852478976
}
},
"jvm": {
"timestamp": 1394541121435,
"uptime_in_millis": 89061753,
"mem": {
"heap_used_in_bytes": 1237602520,
"heap_used_percent": 58,
"heap_committed_in_bytes": 2130051072,
"heap_max_in_bytes": 2130051072,
"non_heap_used_in_bytes": 54024912,
"non_heap_committed_in_bytes": 54394880,
"pools": {
"young": {
"used_in_bytes": 8136992,
"max_in_bytes": 139591680,
"peak_used_in_bytes": 139591680,
"peak_max_in_bytes": 139591680
},
"survivor": {
"used_in_bytes": 1696352,
"max_in_bytes": 17432576,
"peak_used_in_bytes": 17432576,
"peak_max_in_bytes": 17432576
},
"old": {
"used_in_bytes": 1227771384,
"max_in_bytes": 1973026816,
"peak_used_in_bytes": 1227771384,
"peak_max_in_bytes": 1973026816
}
}
},
"threads": {
"count": 48,
"peak_count": 56
},
"gc": {
"collectors": {
"young": {
"collection_count": 4300,
"collection_time_in_millis": 64401
},
"old": {
"collection_count": 0,
"collection_time_in_millis": 0
}
}
},
"buffer_pools": {
"direct": {
"count": 34,
"used_in_bytes": 7728976,
"total_capacity_in_bytes": 7728976
},
"mapped": {
"count": 96,
"used_in_bytes": 110742186,
"total_capacity_in_bytes": 110742186
}
}
},
"thread_pool": {
"generic": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 7,
"completed": 11331
},
"index": {
"threads": 2,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 2,
"completed": 45015
},
"get": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"snapshot": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 5408
},
"merge": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 11882
},
"suggest": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"bulk": {
"threads": 2,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 2,
"completed": 5760
},
"optimize": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"warmer": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 12265
},
"flush": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 217
},
"search": {
"threads": 6,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 6,
"completed": 2380681
},
"percolate": {
"threads": 2,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 2,
"completed": 15659
},
"management": {
"threads": 5,
"queue": 0,
"active": 1,
"rejected": 0,
"largest": 5,
"completed": 11655
},
"refresh": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 11465
}
},
"network": {
"tcp": {
"active_opens": 82783,
"passive_opens": 28208795,
"curr_estab": 54,
"in_segs": 2351809120,
"out_segs": 3873742660,
"retrans_segs": 3506870,
"estab_resets": 103778,
"attempt_fails": 62,
"in_errs": 0,
"out_rsts": 2437332
}
},
"fs": {
"timestamp": 1394541121436,
"total": {
"total_in_bytes": 82566774784,
"free_in_bytes": 76417433600,
"available_in_bytes": 75578605568,
"disk_reads": 71744,
"disk_writes": 43233820,
"disk_io_op": 43305564,
"disk_read_size_in_bytes": 4594340864,
"disk_write_size_in_bytes": 889864847360,
"disk_io_size_in_bytes": 894459188224,
"disk_queue": "0",
"disk_service_time": "0"
},
"data": [
{
"path": "/home/elasticsearch/nodes/0",
"mount": "/",
"dev": "/dev/vda2",
"total_in_bytes": 82566774784,
"free_in_bytes": 76417433600,
"available_in_bytes": 75578605568,
"disk_reads": 71744,
"disk_writes": 43233820,
"disk_io_op": 43305564,
"disk_read_size_in_bytes": 4594340864,
"disk_write_size_in_bytes": 889864847360,
"disk_io_size_in_bytes": 894459188224,
"disk_queue": "0",
"disk_service_time": "0"
}
]
},
"transport": {
"server_open": 26,
"rx_count": 4037863,
"rx_size_in_bytes": 1371158333,
"tx_count": 4036008,
"tx_size_in_bytes": 597805806
},
"http": {
"current_open": 1,
"total_opened": 104527
},
"fielddata_breaker": {
"maximum_size_in_bytes": 1704040857,
"maximum_size": "1.5gb",
"estimated_size_in_bytes": 796577361,
"estimated_size": "759.6mb",
"overhead": 1.03
}
},
"UaZqZ9h5T5ey0dla2KJVSA": {
"timestamp": 1394541121431,
"name": "Martin Preston",
"transport_address": "inet[/10.16.75.3:9300]",
"host": "esnode1",
"ip": [
"inet[/10.16.75.3:9300]",
"NONE"
],
"indices": {
"docs": {
"count": 267760,
"deleted": 24
},
"store": {
"size_in_bytes": 117222009,
"throttle_time_in_millis": 1721395
},
"indexing": {
"index_total": 79162,
"index_time_in_millis": 57805,
"index_current": 0,
"delete_total": 3952,
"delete_time_in_millis": 421,
"delete_current": 0
},
"get": {
"total": 0,
"time_in_millis": 0,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 0,
"missing_time_in_millis": 0,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 2058451,
"query_time_in_millis": 576948,
"query_current": 0,
"fetch_total": 572130,
"fetch_time_in_millis": 94208,
"fetch_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 165,
"total_time_in_millis": 60725,
"total_docs": 1542764,
"total_size_in_bytes": 438737516
},
"refresh": {
"total": 12852,
"total_time_in_millis": 83257
},
"flush": {
"total": 245,
"total_time_in_millis": 3188
},
"warmer": {
"current": 0,
"total": 13792,
"total_time_in_millis": 3157
},
"filter_cache": {
"memory_size_in_bytes": 2688417,
"evictions": 0
},
"id_cache": {
"memory_size_in_bytes": 0
},
"fielddata": {
"memory_size_in_bytes": 2231386,
"evictions": 0
},
"percolate": {
"total": 18387,
"time_in_millis": 20124,
"current": 0,
"memory_size_in_bytes": 36587613,
"memory_size": "34.8mb",
"queries": 5
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 50,
"memory_in_bytes": 3049789
},
"translog": {
"operations": 286,
"size_in_bytes": 0
}
},
"os": {
"timestamp": 1394541121432,
"uptime_in_millis": 12456460,
"load_average": [
0.12,
0.11,
0.13
],
"cpu": {
"sys": 2,
"user": 7,
"idle": 90,
"usage": 9,
"stolen": 0
},
"mem": {
"free_in_bytes": 7124275200,
"used_in_bytes": 9743380480,
"free_percent": 70,
"used_percent": 29,
"actual_free_in_bytes": 11817607168,
"actual_used_in_bytes": 5050048512
},
"swap": {
"used_in_bytes": 0,
"free_in_bytes": 0
}
},
"process": {
"timestamp": 1394541121432,
"open_file_descriptors": 160,
"cpu": {
"percent": 3,
"sys_in_millis": 325200,
"user_in_millis": 2488790,
"total_in_millis": 2813990
},
"mem": {
"resident_in_bytes": 1869119488,
"share_in_bytes": 36052992,
"total_virtual_in_bytes": 2848534528
}
},
"jvm": {
"timestamp": 1394541121432,
"uptime_in_millis": 100005398,
"mem": {
"heap_used_in_bytes": 356969744,
"heap_used_percent": 16,
"heap_committed_in_bytes": 2130051072,
"heap_max_in_bytes": 2130051072,
"non_heap_used_in_bytes": 52893472,
"non_heap_committed_in_bytes": 80510976,
"pools": {
"young": {
"used_in_bytes": 17767016,
"max_in_bytes": 139591680,
"peak_used_in_bytes": 139591680,
"peak_max_in_bytes": 139591680
},
"survivor": {
"used_in_bytes": 2333592,
"max_in_bytes": 17432576,
"peak_used_in_bytes": 17432576,
"peak_max_in_bytes": 17432576
},
"old": {
"used_in_bytes": 336869136,
"max_in_bytes": 1973026816,
"peak_used_in_bytes": 1480030232,
"peak_max_in_bytes": 1973026816
}
}
},
"threads": {
"count": 50,
"peak_count": 56
},
"gc": {
"collectors": {
"young": {
"collection_count": 6166,
"collection_time_in_millis": 87399
},
"old": {
"collection_count": 2,
"collection_time_in_millis": 220
}
}
},
"buffer_pools": {
"direct": {
"count": 31,
"used_in_bytes": 8299659,
"total_capacity_in_bytes": 8299659
},
"mapped": {
"count": 100,
"used_in_bytes": 110804161,
"total_capacity_in_bytes": 110804161
}
}
},
"thread_pool": {
"generic": {
"threads": 2,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 7,
"completed": 16018
},
"index": {
"threads": 2,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 2,
"completed": 46342
},
"get": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"snapshot": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 6074
},
"merge": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 12779
},
"suggest": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"bulk": {
"threads": 2,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 2,
"completed": 6719
},
"optimize": {
"threads": 0,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 0,
"completed": 0
},
"warmer": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 13792
},
"flush": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 245
},
"search": {
"threads": 6,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 6,
"completed": 2630669
},
"percolate": {
"threads": 2,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 2,
"completed": 18387
},
"management": {
"threads": 5,
"queue": 0,
"active": 1,
"rejected": 0,
"largest": 5,
"completed": 18982
},
"refresh": {
"threads": 1,
"queue": 0,
"active": 0,
"rejected": 0,
"largest": 1,
"completed": 12852
}
},
"network": {
"tcp": {
"active_opens": 82615,
"passive_opens": 53213294,
"curr_estab": 56,
"in_segs": 3681090099,
"out_segs": 3892966411,
"retrans_segs": 6630780,
"estab_resets": 68154,
"attempt_fails": 13,
"in_errs": 0,
"out_rsts": 1353371
}
},
"fs": {
"timestamp": 1394541121433,
"total": {
"total_in_bytes": 82566774784,
"free_in_bytes": 77825974272,
"available_in_bytes": 76987146240,
"disk_reads": 123047,
"disk_writes": 29374819,
"disk_io_op": 29497866,
"disk_read_size_in_bytes": 7861359616,
"disk_write_size_in_bytes": 652959784960,
"disk_io_size_in_bytes": 660821144576,
"disk_queue": "0",
"disk_service_time": "0"
},
"data": [
{
"path": "/home/elasticsearch/nodes/0",
"mount": "/",
"dev": "/dev/vda2",
"total_in_bytes": 82566774784,
"free_in_bytes": 77825974272,
"available_in_bytes": 76987146240,
"disk_reads": 123047,
"disk_writes": 29374819,
"disk_io_op": 29497866,
"disk_read_size_in_bytes": 7861359616,
"disk_write_size_in_bytes": 652959784960,
"disk_io_size_in_bytes": 660821144576,
"disk_queue": "0",
"disk_service_time": "0"
}
]
},
"transport": {
"server_open": 26,
"rx_count": 4449489,
"rx_size_in_bytes": 728136464,
"tx_count": 4449864,
"tx_size_in_bytes": 1509918122
},
"http": {
"current_open": 1,
"total_opened": 7569
},
"fielddata_breaker": {
"maximum_size_in_bytes": 1704040857,
"maximum_size": "1.5gb",
"estimated_size_in_bytes": 935746577,
"estimated_size": "892.3mb",
"overhead": 1.03
}
}
}
}


Hi,

Here are the latest graphs for this issue; I hope they help:

https://lh3.googleusercontent.com/-myz7cMtQXlY/UyA22xjbWYI/AAAAAAAAAHk/sKAqPDdbR3Y/s1600/elasticsearch_breaker-week.png

https://lh4.googleusercontent.com/-iAyUV0CwzcQ/UyA26KVn9HI/AAAAAAAAAHs/t3909UFLO3M/s1600/elasticsearch2_breaker-week.png


From the latest graphs I have, I'd say the fielddata breaker issue is
directly correlated with the data inserts:

https://lh3.googleusercontent.com/-lEaxxagDO88/UyCJfJQuImI/AAAAAAAAAIY/biXcDmvsFzs/s1600/elasticsearch_breaker-week.png

https://lh6.googleusercontent.com/-ML8sEV0UDWk/UyCJjmybRqI/AAAAAAAAAIg/iSpjv7HCURQ/s1600/elasticsearch2_breaker-week.png

https://lh6.googleusercontent.com/-9-hfjWPIaBc/UyCJnA8BsPI/AAAAAAAAAIo/ZabS-W25k48/s1600/elasticsearch_docs-week.png

https://lh4.googleusercontent.com/-MrNqYSrD4qo/UyCJpoqhYKI/AAAAAAAAAIw/WSrZopKaqLw/s1600/elasticsearch2_docs-week.png

https://lh3.googleusercontent.com/-Ndaw8zN8mxk/UyCJsybbIuI/AAAAAAAAAI4/lv1WaVhgmPU/s1600/elasticsearch_index_size-week.png

https://lh3.googleusercontent.com/-O1FxEiH96yo/UyCJvKzK7lI/AAAAAAAAAJA/NGlmTS4v3HA/s1600/elasticsearch2_index_size-week.png

I hope there's a workaround for this fielddata breaker issue.


I tried to clear all the caches to see if it could help, but the fielddata breaker estimated size is still skyrocketing...
If it's not a cache issue and it's linked to our data inserts, I can only think of the insert process or the percolation queries. Any idea?
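
For reference, the clear I mentioned was just the plain clear-cache API,
roughly along these lines (cluster-wide, or scoped to one index):

# clear all caches on every index
curl -XPOST 'localhost:9200/_cache/clear'

# or scoped to one index, e.g. the monthly log index
curl -XPOST 'localhost:9200/log-2014-03/_cache/clear'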



Hi Dunaeth,

Can you add the lines:

logger:
  indices.fielddata.breaker: TRACE
  index.fielddata: TRACE
  common.breaker: TRACE

in logging.yml for your elasticsearch configuration and restart the
cluster? This will log information about the breaker estimation and
adjustment. If you can run some queries and attach the logs it would be
helpful in tracking down what's going on.

;; Lee


Our nodes are running with these trace settings at the moment; is there a
preferred way to provide those logs to you?


Hi,

Due to the insert and search query frequency, it's nearly impossible to
get logs for specific queries. That said, the attached logs are extracts
taken since the cluster restart and were most probably generated during
document inserts.
