Scan through results of an ES-DSL aggregation

I wanted to see how many unique links each user has posted and save something like user_id, number_of_post to a CSV. Here is what I have come up with so far:

s.aggs.bucket('user_term', A('terms', field='user__id')).metric('url_count', A('value_count', field='link'))

However, I have not yet found a way to iterate through that result. Is there a way to do this?

I think that code is just building the query. Do you have code following it, something like this:

response = s.execute()
print('Total %d hits found.' % response.hits.total)
for h in response:
    print(h.title, h.body)
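
To read the aggregation from your original query, the buckets should also be available on response.aggregations. A minimal sketch, assuming the names 'user_term' and 'url_count' from your snippet:

response = s.execute()

for bucket in response.aggregations.user_term.buckets:
    # bucket.key is the user id, url_count.value is the count of 'link' values in that bucket
    print(bucket.key, bucket.url_count.value)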

Thank you for your answer! I now have something like

s.aggs.bucket('users', 'terms', field='user.id').metric('url_count', 'cardinality', field='link')

r = s.execute()

for user in r.aggregations.users.buckets:
    print(f'User {user.key} posted {user.url_count.value} links')

but it only returns 10 results. If I change it to s = s[:], I get a TransportError(502). Stack Overflow doesn't seem to have anything promising.

Add the size parameter; I think 10 is the default.
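
Something like this, as a rough sketch (the size of 1000 here is just an example value):

s.aggs.bucket('users', 'terms', field='user.id', size=1000).metric('url_count', 'cardinality', field='link')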

There is a limit of 10,000 buckets in later versions (the search.max_buckets setting), so 10K for a simple aggregation, or 10K total across nested aggregations. They add up quickly too...
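
For reference, a minimal sketch that puts the pieces together and writes the result to a CSV, assuming the Search object s is already built against your index and that a terms size of 10000 stays under that bucket limit; the file name and column headers are just placeholders:

import csv

s.aggs.bucket('users', 'terms', field='user.id', size=10000) \
    .metric('url_count', 'cardinality', field='link')

r = s.execute()

with open('user_link_counts.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['user_id', 'number_of_post'])
    for user in r.aggregations.users.buckets:
        # user.key is the user.id term, url_count.value is the approximate distinct link count
        writer.writerow([user.key, user.url_count.value])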
