I have an index where each doc contains all of a user's weekly events.
Each event has messages.txt, which contains whole-line strings, some very long (mapping below).
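(The original mapping isn't preserved in this excerpt. A rough sketch consistent with the field usage described later in the thread, with a hypothetical index name and an assumed ignore_above value, might look like:)

```json
PUT /user_weekly_events
{
  "mappings": {
    "properties": {
      "@timestamp": { "type": "date" },
      "eventID":    { "type": "keyword" },
      "messages": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 256 }
        }
      }
    }
  }
}
```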
The following query takes 15 seconds to return, and the "sampler" aggregation does not help.
Is there a way to limit the number of documents that are fed into the agg?
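(The actual query body also didn't survive the excerpt. A minimal sketch of the shape being described, a sampler wrapping a terms agg on the big keyword field, again with hypothetical names, would be roughly:)

```json
GET /user_weekly_events/_search
{
  "size": 0,
  "query": {
    "match_phrase": { "messages": "CURL Failed with err code:" }
  },
  "aggs": {
    "sample": {
      "sampler": { "shard_size": 200 },
      "aggs": {
        "event_buckets": {
          "terms": { "field": "messages.keyword", "size": 5 }
        }
      }
    }
  }
}
```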
What is the query? How many docs in the index? How many nodes?
Isn't the top_hits aggregation on its own more appropriate than the sampler/terms agg combo you're using here?
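(For reference, a bare top_hits agg returns the matching documents themselves rather than term buckets; a minimal sketch, reusing the hypothetical names from above:)

```json
GET /user_weekly_events/_search
{
  "size": 0,
  "query": {
    "match_phrase": { "messages": "CURL Failed with err code:" }
  },
  "aggs": {
    "top_matches": {
      "top_hits": {
        "size": 5,
        "_source": { "includes": ["messages"] }
      }
    }
  }
}
```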
I have around 200k docs, 5 shards.
Each doc contains an array of up to 10k strings (the user's weekly events) plus some stats, about 2 MB in total. The @timestamp is the beginning of the weekly time window.
The query is different for every search; the query alone performs very fast.
So you suggest that I should use top_hits to return the newest docs using the @timestamp field?
Could you please show how to use the top_hits agg as a parent agg for the "event_buckets" agg in my example above?
My documents contain a @timestamp field, so I could sort by that, but ideally I would just pick a random sample.
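(One known pattern for that, sketched under the same assumed names: give documents random scores with function_score/random_score, so the sampler agg's "top-scoring docs per shard" selection becomes a random one:)

```json
GET /user_weekly_events/_search
{
  "size": 0,
  "query": {
    "function_score": {
      "query": {
        "match_phrase": { "messages": "CURL Failed with err code:" }
      },
      "random_score": {}
    }
  },
  "aggs": {
    "sample": {
      "sampler": { "shard_size": 100 },
      "aggs": {
        "event_buckets": {
          "terms": { "field": "messages.keyword", "size": 5 }
        }
      }
    }
  }
}
```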
What I don't like about large text fields as keywords is the index overhead and the arbitrary loss of data for strings exceeding your ignore_above setting.
It's hard to know what solution to suggest without a full grasp of what business question you're trying to answer.
I agree that long strings are an issue. I think top_hits may help, but I need to clarify the syntax.
My use case is as follows:
Event lines from log files are bucketed using their common string prefix.
For each user (_doc) I'm storing an array of these event IDs, and an array of the full events.
The _doc contains all of the user's events during the past week.
Event IDs are used for significant_terms aggs ("find unusual events for a subset of all users"), which is working very well; a sketch of that agg follows below.
What I'm trying to achieve is, for a given event ID, returning the top 5 occurrences of the full message.
So for the event ID "CURL Failed with err code:"
I would get: ["CURL Failed with err code:404", "CURL Failed with err code:123", ...]
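(The significant_terms usage mentioned above, again under the hypothetical names: filter to users having a given eventID, then ask which eventIDs are unusually frequent in that subset:)

```json
GET /user_weekly_events/_search
{
  "size": 0,
  "query": {
    "term": { "eventID": "CURL Failed with err code:" }
  },
  "aggs": {
    "unusual_events": {
      "significant_terms": { "field": "eventID" }
    }
  }
}
```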
My current query does the job; it just takes too long.
Is it possible to use top_hits to limit the number of docs that are sent to the second agg?
I expect a more useful strategy might be to avoid aggregations based on keyword fields with large strings and instead use hashed versions of these strings. Obviously users will not be able to understand these values, so you'd have to issue a second query to get the related full text, but it does mean you'd be dealing with shorter strings.
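(A sketch of that idea, with hypothetical names: a message_hashes field populated client-side at index time, e.g. an MD5 of each message, so the terms agg runs over short fixed-length keys:)

```json
PUT /user_weekly_events
{
  "mappings": {
    "properties": {
      "messages":       { "type": "text" },
      "message_hashes": { "type": "keyword" }
    }
  }
}

GET /user_weekly_events/_search
{
  "size": 0,
  "aggs": {
    "event_buckets": {
      "terms": { "field": "message_hashes", "size": 5 }
    }
  }
}
```

The bucket keys come back as hashes, so a second query (for example against an index keyed by hash) would be needed to display the full strings.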
Our current implementation is working very well; we now only wish to tune this feature. Is it possible to use the output of top_hits as an input to the next aggregation in the pipeline?
And yes, I have considered using hashes and storing the long strings in another index, but in our use case it's not trivial:
200k _docs (users); each _doc has ~1k eventIDs and ~10k distinct messages. eventID is a keyword and is always the prefix of the full message.
eventID is used to query: find unusual eventIDs for users having eventID=X.
messages is used for match_phrase queries; messages.keyword is used to aggregate distinct messages.
I tried nested aggs, but we hit the 10,000 nested objects limit... it was also very, very slow.