Facet terms int overflow

I have over 4 billion documents I'm trying to facet over, and I'm hitting an
int overflow with a terms facet. I'm faceting over multiple indexes and
shards, so the overflow only happens when the results are combined.

I tried using the terms_stats facet, since internally it uses a long, but that
is 4x slower and requires that I either use some arbitrary value field or a
value_script (I tried value_script: "1"), which is even slower (10x).
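
For reference, this is roughly what the two requests look like with the Java
client; the index and field names here are made up for illustration, and the
builder calls are from memory, so treat it as a sketch rather than exact code:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.facet.FacetBuilders;

class FacetComparison {

    // Plain terms facet: the combined per-term count is an int, which is what overflows.
    static SearchResponse termsFacet(Client client) {
        return client.prepareSearch("events")                // index name is made up
                .setQuery(QueryBuilders.matchAllQuery())
                .setSize(0)
                .addFacet(FacetBuilders.termsFacet("by_term")
                        .field("source")                     // field name is made up
                        .size(20))
                .execute().actionGet();
    }

    // terms_stats facet: totals are longs, but it needs a value side,
    // here faked with value_script "1", which is much slower.
    static SearchResponse termsStatsFacet(Client client) {
        return client.prepareSearch("events")
                .setQuery(QueryBuilders.matchAllQuery())
                .setSize(0)
                .addFacet(FacetBuilders.termsStatsFacet("by_term_stats")
                        .keyField("source")
                        .valueScript("1")
                        .size(20))
                .execute().actionGet();
    }
}
```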

It looks like the code wouldn't be hard to change to use a long, maybe
still using an int on the wire so as not to break compatibility, since I won't
have more than max int per shard. Would such a pull request be acceptable?

Or should I look into writing a custom facet plugin?

Thanks,
Andy

How many shards do you have for 4 billion documents? Is the overflow
happening at the shard level (collector) or during the aggregation
(processor)?

I have yet to hit such limits. In reality, facets that large might not
make much sense for a search, but could occur if using ES to calculate
analytics.

--
Ivan

On Wednesday, July 18, 2012 1:03:59 PM UTC-4, Ivan Brusic wrote:

How many shards do you have for 4 billion documents? Is the overflow
happening at the shard level (collector) or during the aggregation
(processor)?

I'm faceting across 140 shards currently, so no shard has more than ~50
million documents, which means it must be happening during the aggregation
step. That's why I think it might be possible to leave the int on the wire in
writeTo/readFrom but use a long in the aggregation step.
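
Roughly what I have in mind, as a hypothetical sketch rather than the real
facet classes; only readVInt/writeVInt are meant to be the actual
StreamInput/StreamOutput calls:

```java
import java.io.IOException;

import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;

// Hypothetical shard-level entry: the count is a long in memory, but is
// still written as a vInt so the stream format stays unchanged.
class ShardTermEntry {
    String term;
    long count;

    void readFrom(StreamInput in) throws IOException {
        // term (de)serialization omitted for brevity
        count = in.readVInt();           // still an int on the wire
    }

    void writeTo(StreamOutput out) throws IOException {
        // term (de)serialization omitted for brevity
        out.writeVInt((int) count);      // safe while a single shard stays under max int
    }
}
```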

Another compromise that might be easier would be to simply not allow counts
greater than max int. For my use case that would be good enough, since I
could just display that the value is HUGE! :)
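
Something like this made-up helper in the merge step would be enough for me:

```java
// Hypothetical helper for the compromise: sum the per-shard counts in a
// long, then clamp the result so the stored int never wraps around.
class CountCap {
    static int addAndCap(int mergedSoFar, int shardCount) {
        long total = (long) mergedSoFar + (long) shardCount;
        return total > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) total;
    }
}
```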

If no one is interested in fixing it, I can also look into writing a plugin,
if you think that would be more useful.

I have yet to hit such limits. In reality, facets that large might not
make much sense for a search, but could occur if using ES to calculate
analytics.

Yep, I'm using ES as a DB with high write rates and low read rates, where
response time isn't that important. That said, the faceting speed for this
field isn't that bad: about 4s for 6 billion docs.

Thanks,
Andy

I submitted a pull request that implements the compromise. It simply caps
counts at max int.

So the counts overflow, I see… I think a better fix is to simply have those as long values. Do you have an issue open for it, rather than a pull request?

Since so many folks (including myself) complain about memory usage and
facets, I didn't want to switch from int to long everywhere. I can open an
issue.

Thanks,
Andy

That part can be done without problems in terms of memory usage. And work on improving memory usage is high on the feature list! :)

Hi Andy,

I am interested in the setup that allows you to handle a total volume of 4
or 6 billion documents. Could you briefly describe the setup?

Thanks in advance,
Wouter

Sure, this is a custom app, but our workload is similar to
logstash/graylog/..., so it is 99.99% bulk HTTP writes, and multi-second
query times are OK. We have 15 x 64GB machines (HP DL380-G7) with 2TB of disk
each. We run 2 nodes on each, with 22G assigned to each Elasticsearch node
and the rest left for disk cache. We've disabled swap. We have daily indexes
of 30 shards each with replication turned off, and we store 10 days' worth of
data, currently around 10 billion documents. Memory is very stable at around
15G, and I think we could get up to around 15B documents. We have to be very
careful about which facet queries we allow from our interface, otherwise it
just blows things up.
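
For what it's worth, the per-day index creation boils down to something like
this (the index name is made up and the settings calls are from memory, so
it's only a sketch; the 22G heap and swap settings are done when starting the
nodes, not through the API):

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;

// Rough sketch of creating one daily index with 30 shards and no replicas.
class DailyIndex {
    static void create(Client client, String day) {
        client.admin().indices()
                .prepareCreate("events-" + day)              // index name is made up
                .setSettings(ImmutableSettings.settingsBuilder()
                        .put("index.number_of_shards", 30)
                        .put("index.number_of_replicas", 0))
                .execute().actionGet();
    }
}
```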

We've turned replication off since the data isn't THAT important, plus we've
noticed with another cluster that replication of big shards still has some
issues. (We are still trying to reproduce that.) If we do lose a shard, we
just copy an empty version of that shard onto a node, restart that single
node, and Elasticsearch works fine. We just have a hole in our data.

Thanks,
Andy

Thanks Andy.
This information is very helpful.
Wouter